HCatalog is Apache's open-source unified service layer for managing tables and their underlying data. The latest release at the time of writing is 0.5, which requires Hive 0.10; since our Hive cluster runs 0.9.0, we have to fall back to HCatalog 0.4. All of HCatalog's underlying metadata is stored in the Hive metastore, so schema or API changes introduced by a Hive upgrade can break HCatalog. For that reason HCatalog has been merged into Hive 0.11 and will become part of Hive going forward instead of remaining an independent project.
Under the hood HCatalog depends on the Hive Metastore. At execution time it creates a HiveMetaStoreClient and uses that instance's API to fetch table metadata. In local metastore mode it directly returns a HiveMetaStore.HMSHandler; in remote mode (hive.metastore.local set to false) it walks the list of URIs configured in hive.metastore.uris (for example thrift://10.1.8.42:9083, thrift://10.1.8.51:9083) and tries to connect to each one in order. Only one connection needs to succeed, and to keep every client from connecting to the first URI and overloading it, I added a small trick: randomly shuffle the URI list to balance the load (a sketch of the idea follows).
Because our cluster has Kerberos security enabled, we need to obtain a DelegationToken. Local mode does not support that, so only remote mode can be used:
HiveMetaStoreClient.java
```
public String getDelegationToken(String owner, String renewerKerberosPrincipalName)
    throws MetaException, TException {
  if (localMetaStore) {
    throw new UnsupportedOperationException("getDelegationToken() can be " +
        "called only in thrift (non local) mode");
  }
  return client.get_delegation_token(owner, renewerKerberosPrincipalName);
}
```
HCatInputFormat and HCatOutputFormat provide MapReduce APIs for reading from and writing to tables.
HCatInputFormat API:
```
public static void setInput(Job job, InputJobInfo inputJobInfo) throws IOException;
```
First instantiate an InputJobInfo object, which takes three parameters (dbname, tablename, and filter), then pass it to setInput to read the corresponding data, as in the small example below.
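As an illustration (the database, table, and filter string here are made up; the partition-filter syntax should be checked against your HCatalog version), reading a single partition of a table from a job driver might look like this:
```
// Illustrative fragment from a job driver: read only the dt='2013-06-13'
// partition of mydb.pageview_log; pass null as the filter to read the whole table.
Job job = new Job(getConf(), "read with HCatInputFormat");
HCatInputFormat.setInput(job,
        InputJobInfo.create("mydb", "pageview_log", "dt=\"2013-06-13\""));
job.setInputFormatClass(HCatInputFormat.class);
```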
```
public static HCatSchema getTableSchema(JobContext context) throws IOException;
```
At runtime (for example in a mapper's setup method) you can pass in the JobContext and call the static getTableSchema to retrieve the table schema that was set earlier by setInput.
HCatOutputFormat API:
```
public static void setOutput(Job job, OutputJobInfo outputJobInfo) throws IOException;
```
OutputJobInfo takes three parameters: databaseName, tableName, and partitionValues. The third parameter is a Map whose keys are the partition keys and whose values are the corresponding partition values; it may be null or an empty map. If the specified partition already exists, the call throws org.apache.hcatalog.common.HCatException : 2002 : Partition already present with given partition key values.
For example, to write into the partition (dt='2013-06-13', country='china'), you can write:
```
Map<String, String> partitionValues = new HashMap<String, String>();
partitionValues.put("dt", "2013-06-13");
partitionValues.put("country", "china");
OutputJobInfo info = OutputJobInfo.create(dbName, tblName, partitionValues);
HCatOutputFormat.setOutput(job, info);
```
```
public static HCatSchema getTableSchema(JobContext context) throws IOException;
```
Retrieves the table schema that was specified earlier by HCatOutputFormat.setOutput.
```
public static void setSchema(final Job job, final HCatSchema schema) throws IOException;
```
Sets the schema of the data that will actually be written; if this method is not called, the table schema is used by default.
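For instance, if the output table has more columns than the job actually produces, one option (a sketch only, assuming HCatSchema's list-based constructor and that the omitted columns may be left null) is to build a reduced schema from the table schema and pass it to setSchema:
```
// Hypothetical: write only the "guid" and "count" columns of the output table.
HCatSchema tableSchema = HCatOutputFormat.getTableSchema(job);
List<HCatFieldSchema> fields = new ArrayList<HCatFieldSchema>();
fields.add(tableSchema.get("guid"));
fields.add(tableSchema.get("count"));
HCatOutputFormat.setSchema(job, new HCatSchema(fields));
```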
Below is a complete MapReduce example that computes the number of page views per guid in one day: the map phase reads the guid field from the input table, the reduce phase sums the page views for each guid, and the result is written back to another table with guid and count columns.
```
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.apache.hcatalog.data.DefaultHCatRecord;
import org.apache.hcatalog.data.HCatRecord;
import org.apache.hcatalog.data.schema.HCatSchema;
import org.apache.hcatalog.mapreduce.HCatInputFormat;
import org.apache.hcatalog.mapreduce.HCatOutputFormat;
import org.apache.hcatalog.mapreduce.InputJobInfo;
import org.apache.hcatalog.mapreduce.OutputJobInfo;

public class GroupByGuid extends Configured implements Tool {

    @SuppressWarnings("rawtypes")
    public static class Map extends
            Mapper<WritableComparable, HCatRecord, Text, IntWritable> {
        HCatSchema schema;
        Text guid;
        IntWritable one;

        @Override
        protected void setup(Context context)
                throws IOException, InterruptedException {
            guid = new Text();
            one = new IntWritable(1);
            // Schema of the input table, as set earlier by HCatInputFormat.setInput()
            schema = HCatInputFormat.getTableSchema(context);
        }

        @Override
        protected void map(WritableComparable key, HCatRecord value,
                Context context) throws IOException, InterruptedException {
            // Emit (guid, 1) for every record read from the input table
            guid.set(value.getString("guid", schema));
            context.write(guid, one);
        }
    }

    @SuppressWarnings("rawtypes")
    public static class Reduce extends
            Reducer<Text, IntWritable, WritableComparable, HCatRecord> {
        HCatSchema schema;

        @Override
        protected void setup(Context context)
                throws IOException, InterruptedException {
            // Schema of the output table, as set earlier by HCatOutputFormat.setOutput()
            schema = HCatOutputFormat.getTableSchema(context);
        }

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            // Count the page views for this guid
            int sum = 0;
            Iterator<IntWritable> iter = values.iterator();
            while (iter.hasNext()) {
                sum++;
                iter.next();
            }
            HCatRecord record = new DefaultHCatRecord(2);
            record.setString("guid", schema, key.toString());
            record.setInteger("count", schema, sum);
            context.write(null, record);
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        String dbname = args[0];
        String inputTable = args[1];
        String filter = args[2];
        String outputTable = args[3];
        int reduceNum = Integer.parseInt(args[4]);

        Job job = new Job(conf, "GroupByGuid, Calculating every guid's pageview");
        HCatInputFormat.setInput(job,
                InputJobInfo.create(dbname, inputTable, filter));
        job.setJarByClass(GroupByGuid.class);
        job.setInputFormatClass(HCatInputFormat.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(WritableComparable.class);
        job.setOutputValueClass(DefaultHCatRecord.class);
        job.setNumReduceTasks(reduceNum);

        // No static partition values: pass null and use the table schema as-is
        HCatOutputFormat.setOutput(job,
                OutputJobInfo.create(dbname, outputTable, null));
        HCatSchema s = HCatOutputFormat.getTableSchema(job);
        HCatOutputFormat.setSchema(job, s);
        job.setOutputFormatClass(HCatOutputFormat.class);

        return (job.waitForCompletion(true) ? 0 : 1);
    }

    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new GroupByGuid(), args);
        System.exit(exitCode);
    }
}
```
HCatalog also supports dynamic partitioning: you can specify only part of the partition key/value pairs in OutputJobInfo and set the remaining partition key/value pairs on each HCatRecord at runtime, which lets a single job write into multiple partitions at once.
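A rough sketch of the idea (table and column names are illustrative, and the exact schema handling for dynamic partition columns may differ between HCatalog versions): pin dt statically in the job driver, leave country out of the partition map, and set it on each record at runtime.
```
// Static part of the partition spec: dt is fixed, country stays dynamic.
Map<String, String> partialPartition = new HashMap<String, String>();
partialPartition.put("dt", "2013-06-13");
HCatOutputFormat.setOutput(job,
        OutputJobInfo.create(dbname, outputTable, partialPartition));

// The dynamic partition column has to travel with the data, so the output
// schema must include it (depending on the version this may require appending
// it to the schema returned by getTableSchema).
HCatSchema s = HCatOutputFormat.getTableSchema(job);
HCatOutputFormat.setSchema(job, s);

// In the reducer, set the dynamic partition value on every record:
// record.setString("country", schema, countryValue);
```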
Reposted from http://blog.csdn.net/lalaguozhe/article/details/9083905
Author: yukangkk