Integrating Hudi in a CDH Environment

I. Introduction to Hudi

Hudi (short for Hadoop Upserts Deletes and Incrementals) manages the storage of large analytical datasets on HDFS; its main purpose is to efficiently reduce data ingestion latency.
Hudi is an open-source Spark library that supports upsert/insert/delete operations on data stored in Hadoop.
Hudi datasets integrate with the existing Hadoop ecosystem (Hive, Parquet, Spark) through a custom InputFormat, so the framework is seamless for end users.

II. Installing Hudi

Hudi is installed by cloning the source from Git and building it with Maven.

1. Install Maven (omitted)

2. Install Git

sudo yum install git

3. Build Hudi

cd /opt/software/
git clone https://github.com/apache/hudi.git && cd hudi
[root@dev110 hudi]# vim pom.xml

Add an Aliyun Nexus mirror to the <repositories> section of pom.xml so that dependency downloads resolve faster (optional if you already have good connectivity to Maven Central):

    <repository>
        <id>nexus-aliyun</id>
        <name>nexus-aliyun</name>
        <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
        <releases>
            <enabled>true</enabled>
        </releases>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>

[root@dev110 hudi]# mvn clean package -DskipTests -DskipITs

III. Using spark-shell with Hudi

1. Start spark-shell

When starting spark-shell, the spark-avro module has to be specified because it is not part of the default environment; the spark-avro version must also match the Spark version, which here is 2.4.0.

[root@dev110 hudi]# spark-shell \
--packages org.apache.spark:spark-avro_2.11:2.4.0 \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
--jars /opt/software/hudi/packaging/hudi-spark-bundle/target/hudi-spark-bundle_2.11-0.6.1-SNAPSHOT.jar 

2. Import packages and classes and set the table name

scala> import org.apache.hudi.QuickstartUtils._
import org.apache.hudi.QuickstartUtils._
 
scala> import scala.collection.JavaConversions._
import scala.collection.JavaConversions._
 
scala> import org.apache.spark.sql.SaveMode._
import org.apache.spark.sql.SaveMode._
 
scala> import org.apache.hudi.DataSourceReadOptions._
import org.apache.hudi.DataSourceReadOptions._
 
scala> import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.DataSourceWriteOptions._
 
scala> import org.apache.hudi.config.HoodieWriteConfig._
import org.apache.hudi.config.HoodieWriteConfig._
 
scala> val tableName = "hudi_trips_cow"
tableName: String = hudi_trips_cow
 
scala> val basePath = "/tmp/hudi_trips_cow"
basePath: String = /tmp/hudi_trips_cow
 
scala> val dataGen = new DataGenerator

3. Insert data

Generate some new records, load them into a DataFrame, and then write the DataFrame to the Hudi table:

    scala> val inserts = convertToStringList(dataGen.generateInserts(10))
    scala> val df = spark.read.json(spark.sparkContext.parallelize(inserts, 2))
    scala> df.write.format("hudi").
         |   options(getQuickstartWriteConfigs).
         |   option(PRECOMBINE_FIELD_OPT_KEY, "ts").
         |   option(RECORDKEY_FIELD_OPT_KEY, "uuid").
         |   option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
         |   option(TABLE_NAME, tableName).
         |   mode(Overwrite).
         |   save(basePath)

mode(Overwrite) overwrites and recreates the table if it already exists. You can check whether data was generated under /tmp/hudi_trips_cow, for example with the snippet below.


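If you want to verify the layout from inside spark-shell, here is a minimal sketch using the standard Hadoop FileSystem API (assuming the table was written to the cluster's default filesystem, which on CDH is normally HDFS):

import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
// recursively list every file under the Hudi base path
val files = fs.listFiles(new Path(basePath), true)
while (files.hasNext) println(files.next().getPath)

You should see a .hoodie metadata directory directly under the base path and Parquet files under the partition directories (americas/..., asia/...).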

4. Query data

val tripsSnapshotDF = spark.read.format("hudi").load(basePath + "/*/*/*/*")
tripsSnapshotDF.createOrReplaceTempView("hudi_trips_snapshot")
spark.sql("select fare, begin_lon, begin_lat, ts from hudi_trips_snapshot where fare > 20.0").show()

+------------------+-------------------+-------------------+---+                
|              fare|          begin_lon|          begin_lat| ts|
+------------------+-------------------+-------------------+---+
| 33.92216483948643| 0.9694586417848392| 0.1856488085068272|  0|
| 27.79478688582596| 0.6273212202489661|0.11488393157088261|  0|
| 64.27696295884016| 0.4923479652912024| 0.5731835407930634|  0|
| 93.56018115236618|0.14285051259466197|0.21624150367601136|  0|
|  43.4923811219014| 0.8779402295427752| 0.6100070562136587|  0|
|34.158284716382845|0.46157858450465483| 0.4726905879569653|  0|
| 66.62084366450246|0.03844104444445928| 0.0750588760043035|  0|
| 41.06290929046368| 0.8192868687714224|  0.651058505660742|  0|
+------------------+-------------------+-------------------+---+

scala> spark.sql("select _hoodie_commit_time, _hoodie_record_key, _hoodie_partition_path, rider, driver, fare from  hudi_trips_snapshot").show()
+-------------------+--------------------+----------------------+---------+----------+------------------+
|_hoodie_commit_time|  _hoodie_record_key|_hoodie_partition_path|    rider|    driver|              fare|
+-------------------+--------------------+----------------------+---------+----------+------------------+
|     20201012103604|be02e0a9-b16e-473...|  americas/united_s...|rider-213|driver-213| 33.92216483948643|
|     20201012103604|ce0be450-2cd1-4ec...|  americas/united_s...|rider-213|driver-213| 27.79478688582596|
|     20201012103604|bdf16b8c-a46d-4ea...|  americas/united_s...|rider-213|driver-213| 64.27696295884016|
|     20201012103604|9015e0f8-bd34-49f...|  americas/united_s...|rider-213|driver-213|19.179139106643607|
|     20201012103604|bb80978f-4908-48b...|  americas/united_s...|rider-213|driver-213| 93.56018115236618|
|     20201012103604|f7120569-ca09-4c1...|  americas/brazil/s...|rider-213|driver-213|  43.4923811219014|
|     20201012103604|f70dd1da-8372-476...|  americas/brazil/s...|rider-213|driver-213|34.158284716382845|
|     20201012103604|107b9d18-bb56-470...|  americas/brazil/s...|rider-213|driver-213| 66.62084366450246|
|     20201012103604|f18dc5a2-6b0d-4b4...|    asia/india/chennai|rider-213|driver-213| 41.06290929046368|
|     20201012103604|980b6370-7800-46a...|    asia/india/chennai|rider-213|driver-213|17.851135255091155|
+-------------------+--------------------+----------------------+---------+----------+------------------+

5. Update data

This is similar to inserting new data. Use the data generator to generate updates to existing trips, load them into a DataFrame, and write the DataFrame to the Hudi dataset:

val updates = convertToStringList(dataGen.generateUpdates(10))
val df = spark.read.json(spark.sparkContext.parallelize(updates, 2));
df.write.format("org.apache.hudi").
    options(getQuickstartWriteConfigs).
    option(PRECOMBINE_FIELD_OPT_KEY, "ts").
    option(RECORDKEY_FIELD_OPT_KEY, "uuid").
    option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
    option(TABLE_NAME, tableName).
    mode(Append).
    save(basePath);

6. Query again
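
The hudi_trips_snapshot view was registered before the update, so reload the table and re-register the view (the same load as in step 4) to make sure the query reflects the new commit:

spark.read.format("hudi").load(basePath + "/*/*/*/*").createOrReplaceTempView("hudi_trips_snapshot")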

spark.sql("select _hoodie_commit_time, _hoodie_record_key, _hoodie_partition_path, rider, driver, fare from  hudi_trips_snapshot").show()
+-------------------+--------------------+----------------------+---------+----------+------------------+
|_hoodie_commit_time|  _hoodie_record_key|_hoodie_partition_path|    rider|    driver|              fare|
+-------------------+--------------------+----------------------+---------+----------+------------------+
|     20201012105507|be02e0a9-b16e-473...|  americas/united_s...|rider-284|driver-284|  90.9053809533154|
|     20201012105507|ce0be450-2cd1-4ec...|  americas/united_s...|rider-284|driver-284|  98.3428192817987|
|     20201012105507|bdf16b8c-a46d-4ea...|  americas/united_s...|rider-284|driver-284|49.527694252432056|
|     20201012103604|9015e0f8-bd34-49f...|  americas/united_s...|rider-213|driver-213|19.179139106643607|
|     20201012103604|bb80978f-4908-48b...|  americas/united_s...|rider-213|driver-213| 93.56018115236618|
|     20201012105507|f18dc5a2-6b0d-4b4...|    asia/india/chennai|rider-284|driver-284| 9.384124531808036|
|     20201012105507|980b6370-7800-46a...|    asia/india/chennai|rider-284|driver-284| 90.25710109008239|
|     20201012103604|f7120569-ca09-4c1...|  americas/brazil/s...|rider-213|driver-213|  43.4923811219014|
|     20201012105507|f70dd1da-8372-476...|  americas/brazil/s...|rider-284|driver-284| 29.47661370147079|
|     20201012105507|107b9d18-bb56-470...|  americas/brazil/s...|rider-284|driver-284| 63.72504913279929|
+-------------------+--------------------+----------------------+---------+----------+------------------+

7. Incremental query

Hudi also provides the ability to obtain a stream of records that have changed since a given commit timestamp. This is done by using Hudi's incremental view and supplying the begin time from which changes are needed. If we need all changes after a given commit (the common case), there is no need to specify the end time.

// reload data
spark.read.format("org.apache.hudi").load(basePath + "/*/*/*/*").createOrReplaceTempView("hudi_ro_table")
val commits = spark.sql("select distinct(_hoodie_commit_time) as commitTime from  hudi_ro_table order by commitTime").map(k => k.getString(0)).take(50)
val beginTime = commits(commits.length - 2) // commit time we are interested in
// incrementally query the data
val incViewDF = spark.read.format("org.apache.hudi").
    option(VIEW_TYPE_OPT_KEY, VIEW_TYPE_INCREMENTAL_OPT_VAL).
    option(BEGIN_INSTANTTIME_OPT_KEY, beginTime).
    load(basePath)
incViewDF.createOrReplaceTempView("hudi_incr_table")
spark.sql("select `_hoodie_commit_time`, fare, begin_lon, begin_lat, rider, driver,  ts from  hudi_incr_table where fare > 20.0").show()
+-------------------+------------------+--------------------+-------------------+---------+----------+---+
|_hoodie_commit_time|              fare|           begin_lon|          begin_lat|    rider|    driver| ts|
+-------------------+------------------+--------------------+-------------------+---------+----------+---+
|     20201012105507|  90.9053809533154| 0.19949323322922063|0.18294079059016366|rider-284|driver-284|  0|
|     20201012105507|  98.3428192817987|  0.3349917833248327| 0.4777395067707303|rider-284|driver-284|  0|
|     20201012105507|49.527694252432056|  0.5142184937933181| 0.7340133901254792|rider-284|driver-284|  0|
|     20201012105507| 90.25710109008239|  0.4006983139989222|0.08528650347654165|rider-284|driver-284|  0|
|     20201012105507| 29.47661370147079|0.010872312870502165| 0.1593867607188556|rider-284|driver-284|  0|
|     20201012105507| 63.72504913279929|   0.888493603696927| 0.6570857443423376|rider-284|driver-284|  0|
+-------------------+------------------+--------------------+-------------------+---------+----------+---+

This will return all changes that occurred after the beginTime commit, with the filter of fare > 20.0 applied. The unique thing about this feature is that it lets you author streaming pipelines on batch data.
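
If you do want to bound the range on both sides, the same read options also accept an end instant. Below is a small sketch of such a point-in-time query, following the pattern above and reusing the commits list; the view name hudi_point_in_time is just an illustrative choice, and "000" as the begin time simply means "from the first commit":

val endTime = commits(commits.length - 2) // query the table as of this commit
val pointInTimeDF = spark.read.format("org.apache.hudi").
    option(VIEW_TYPE_OPT_KEY, VIEW_TYPE_INCREMENTAL_OPT_VAL).
    option(BEGIN_INSTANTTIME_OPT_KEY, "000").
    option(END_INSTANTTIME_OPT_KEY, endTime).
    load(basePath)
pointInTimeDF.createOrReplaceTempView("hudi_point_in_time")
spark.sql("select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from hudi_point_in_time where fare > 20.0").show()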

References:
https://blog.csdn.net/x950913/article/details/107178364
https://www.bookstack.cn/read/apache-hudi-0.5-zh/docs-0.5.0-quickstart.md
