1. Introduction
When you need to reduce values by key, calling reduceByKey usually gets the job done, but a reduceByKey call necessarily implies a shuffle. Sometimes, however, we already know that all records sharing a key sit in the same partition. In that case there is no need to pay for a shuffle with reduceByKey just to collect identical keys into the same reducer partition; the reduce can be performed directly on the map side.
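For comparison, the shuffle-based approach is the familiar one-liner. A minimal sketch, assuming wordsCount is an RDD[(String, Int)] of (word, 1) pairs like the one built in the test below:

// reduceByKey repartitions records by key, so it triggers a shuffle even
// when every key already lives entirely within a single partition
val counts = wordsCount.reduceByKey(_ + _)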
For example, here is how a word-count data set might be laid out across 2 partitions:
------partition 1----------
(failure,1)
(count,1)
(thief,1)
(failure,1)
(count,1)
------partition 2--------
(fortification,1)
(peek,1)
(lepta,1)
(peek,1)
Since identical words already live in the same partition, there is no need to go through reduceByKey to compute the word count.
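One common way to end up in this situation is when an earlier step has already partitioned the data by key, for example (a sketch; the HashPartitioner and partition count here are illustrative):

// after partitionBy, all pairs with the same key live in the same partition
val partitioned = wordsCount.partitionBy(new HashPartitioner(2))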
2. Solution
The idea is to implement an RDD whose compute method performs the per-key aggregation. The implementation looks like this:
import scala.reflect.ClassTag

import org.apache.spark.{Aggregator, InterruptibleIterator, Partition, TaskContext}
import org.apache.spark.rdd.RDD
import org.apache.spark.util.collection.ExternalAppendOnlyMap

/**
 * Modeled on ShuffledRDD.
 *
 * K: key type
 * V: type of the values in the parent RDD
 * C: type of V after the reduce
 *
 * Note: ExternalAppendOnlyMap is private[spark], so this class may need to be
 * compiled inside an org.apache.spark sub-package in order to access it.
 */
class MapsideReduceRDD[K: ClassTag, V: ClassTag, C: ClassTag](
    // parent RDD; its records must already be in (key, value) form
    @transient var prev: RDD[_ <: Product2[K, V]]
  ) extends RDD[(K, C)](prev) {

  // an Aggregator is needed to combine the values; reduceByKey creates one as well
  private var aggregator: Option[Aggregator[K, V, C]] = None

  def setAggregator(aggregator: Aggregator[K, V, C]): this.type = {
    this.aggregator = Option(aggregator)
    this
  }

  override def compute(split: Partition, context: TaskContext): Iterator[(K, C)] = {
    /* Create an ExternalAppendOnlyMap. This data structure is provided by Spark:
     * (K, V) pairs are inserted, aggregated with the functions passed to it,
     * and read back out as (K, C) pairs.
     */
    val externalMap = createExternalMap
    // iterate over the (K, V) records of the parent partition
    val rddIter = dependencies(0).rdd.asInstanceOf[RDD[Product2[K, V]]].iterator(split, context)
    // insert them into the external map
    externalMap.insertAll(rddIter)
    // return the aggregated records
    new InterruptibleIterator(context, externalMap.iterator)
  }

  override protected def getPartitions: Array[Partition] = firstParent[Product2[K, V]].partitions

  private def createExternalMap: ExternalAppendOnlyMap[K, V, C] = {
    require(aggregator.nonEmpty, "aggregator should not be empty")
    /**
     * Create the ExternalAppendOnlyMap. It needs the following parameters:
     * - a V => C function, applied to the first value seen for a key to turn it into a C
     * - a (C, V) => C function, used to merge a further value into the C
     * - a (C, C) => C function, used to merge two partially aggregated results
     */
    new ExternalAppendOnlyMap[K, V, C](
      aggregator.get.createCombiner,
      aggregator.get.mergeValue,
      aggregator.get.mergeCombiners)
  }
}
ExternalAppendOnlyMap will spill to disk when necessary, so the aggregation does not have to fit entirely in memory.
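To make the roles of createCombiner and mergeValue concrete, here is a Spark-free sketch of what insertAll followed by iterator effectively computes for partition 1 of the example (plain Scala, for illustration only; the real ExternalAppendOnlyMap additionally spills to disk under memory pressure):

val records = Seq(("failure", 1), ("count", 1), ("thief", 1), ("failure", 1), ("count", 1))
val createCombiner: Int => Int = v => v              // first value seen for a key
val mergeValue: (Int, Int) => Int = (c, v) => c + v  // fold further values into the combiner
val aggregated = records.foldLeft(Map.empty[String, Int]) { case (acc, (k, v)) =>
  acc.updated(k, acc.get(k).map(c => mergeValue(c, v)).getOrElse(createCombiner(v)))
}
// aggregated == Map("failure" -> 2, "count" -> 2, "thief" -> 1)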
2.1 Test
The test class is as follows:
import org.apache.spark.{Aggregator, SparkContext}

object MapsideReduceTest {
  def main(args: Array[String]): Unit = {
    // master URL and application name are expected to be supplied by spark-submit
    val sc = new SparkContext()
    val words = Seq(
      Seq("failure", "count", "thief", "failure", "count"),
      Seq("fortification", "peek", "lepta", "peek"))
    // Split into 2 partitions; the first partition holds
    // Seq("failure", "count", "thief", "failure", "count"), so every word
    // appears in exactly one partition, and we then compute the word count.
    val wordsRDD = sc.parallelize(words, 2)
    // flatMap flattens each Seq(), then map turns every word into a pair such as (failure,1)
    val wordsCount = wordsRDD.flatMap(seq => seq).map(word => (word, 1))
    val aggregator = createAggregator
    val mapsideReduceRDD = new MapsideReduceRDD[String, Int, Int](wordsCount).setAggregator(aggregator)
    mapsideReduceRDD.saveAsTextFile("/Users/eric/mapsideReduce")
  }

  def createAggregator: Aggregator[String, Int, Int] = {
    // the first value seen for a key is used as the initial count
    val createCombiner: Int => Int = value => value
    // every further value is added to the running count
    val mergeValue: (Int, Int) => Int = (mergedValue, newValue) => mergedValue + newValue
    // merging two partial counts works the same way as merging a single value
    val mergeCombiner = mergeValue
    new Aggregator[String, Int, Int](createCombiner, mergeValue, mergeCombiner)
  }
}
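The test can be packaged into a jar and launched with spark-submit, for example (the jar name and master URL below are placeholders; a local master matches the local output path used above):

spark-submit --class MapsideReduceTest --master local[2] mapside-reduce-test.jar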
After submitting the job, the test produces two output files with the following contents:
------ part-00000 -------
(failure,2)
(count,2)
(thief,1)
------- part-00001 ------
(lepta,1)
(peek,2)
(fortification,1)
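As an optional check that no shuffle is involved, the lineage can be inspected from inside main (a sketch; toDebugString should show no ShuffledRDD in the chain, and the single dependency of MapsideReduceRDD is the narrow OneToOneDependency created by the one-parent RDD constructor):

// print the lineage; unlike a reduceByKey-based pipeline, it contains no ShuffledRDD
println(mapsideReduceRDD.toDebugString)
// the dependency on wordsCount is narrow, i.e. a OneToOneDependency
println(mapsideReduceRDD.dependencies.head)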