[SPARK][CORE] Interview Questions: The Details of BypassMergeSortShuffleWriter

Welcome to follow the WeChat public account "Tim在路上".
BypassMergeSortShuffleWriter is, as its name suggests, a side branch of sort-based shuffle: a hash-style implementation living inside the sort-based shuffle framework. During the map phase, records are written into different files by partition, one file per partition. These partition files are then concatenated into a single output file, and an index is generated for it. The reducer locates and consumes the different partitions of that output file through IndexShuffleBlockResolver.

In BypassMergeSortShuffleWriter, records are not cached in memory; all records are eventually flushed to disk.

When writing, BypassMergeSortShuffleWriter opens a separate serializer and file stream for every partition at the same time, so performance degrades badly when the number of reduce partitions is very large.
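Because of these per-partition serializers and streams, the sort-based shuffle only routes a dependency to this writer when the reduce partition count is small and no map-side combine is needed. Below is a simplified paraphrase (not the verbatim source) of the condition checked in SortShuffleWriter.shouldBypassMergeSort; spark.shuffle.sort.bypassMergeThreshold defaults to 200.

// Simplified paraphrase of SortShuffleWriter.shouldBypassMergeSort
import org.apache.spark.{ShuffleDependency, SparkConf}

def shouldBypassMergeSort(conf: SparkConf, dep: ShuffleDependency[_, _, _]): Boolean = {
  if (dep.mapSideCombine) {
    false // map-side aggregation needs an in-memory structure; the bypass path has none
  } else {
    val threshold = conf.getInt("spark.shuffle.sort.bypassMergeThreshold", 200)
    dep.partitioner.numPartitions <= threshold
  }
}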

The ShuffleWriter is invoked inside ShuffleMapTask's runTask, and each map task calls runTask once.

BypassMergeSortShuffleWriter Source Code Walkthrough

First, let's review the ShuffleWriter process. Shuffle happens between stages separated by a wide dependency, since computation within a stage is pipelined. The upstream stage where the shuffle originates is the map stage, and the downstream stage is the reduce stage. The shuffle write happens in the map stage: the ShuffleWriter is invoked from ShuffleMapStage, each ShuffleMapStage contains multiple ShuffleMapTasks, and the number of map tasks is determined by the number of partitions.

So each ShuffleMapTask calls the Writer interface inside its runTask. It does not call a concrete writer class directly; the concrete implementation is determined by the ShuffleHandle returned when the shuffle was registered with the ShuffleManager at the time the wide dependency was created.
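As a sketch of that decision (paraphrased from SortShuffleManager.registerShuffle, not the verbatim source), the handle chosen at registration time is what later makes getWriter hand back a BypassMergeSortShuffleWriter, an UnsafeShuffleWriter, or a SortShuffleWriter:

// Simplified sketch of SortShuffleManager.registerShuffle (paraphrased)
override def registerShuffle[K, V, C](
    shuffleId: Int,
    dependency: ShuffleDependency[K, V, C]): ShuffleHandle = {
  if (SortShuffleWriter.shouldBypassMergeSort(conf, dependency)) {
    // few partitions, no map-side combine -> BypassMergeSortShuffleWriter
    new BypassMergeSortShuffleHandle[K, V](shuffleId, dependency)
  } else if (SortShuffleManager.canUseSerializedShuffle(dependency)) {
    // records can be sorted in their serialized form -> UnsafeShuffleWriter
    new SerializedShuffleHandle[K, V](shuffleId, dependency)
  } else {
    // everything else -> SortShuffleWriter
    new BaseShuffleHandle(shuffleId, dependency)
  }
}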

When ShuffleMapTask invokes the Writer, it first goes through ShuffleWriteProcessor, which mainly controls the lifecycle of the ShuffleWriter. Let's look at the write implementation in ShuffleWriteProcessor:

// ShuffleWriteProcessor
def write(
    rdd: RDD[_],
    dep: ShuffleDependency[_, _, _],
    mapId: Long,
    context: TaskContext,
    partition: Partition): MapStatus = {
  var writer: ShuffleWriter[Any, Any] = null
  try {
    // [1] Get the ShuffleManager from SparkEnv, and use dep.shuffleHandle to obtain the concrete ShuffleWriter implementation
    val manager = SparkEnv.get.shuffleManager
    writer = manager.getWriter[Any, Any](
      dep.shuffleHandle,
      mapId,
      context,
      createMetricsReporter(context))
    // [2] Call the ShuffleWriter's write method, passing in the iterator of the current RDD partition
    writer.write(
      rdd.iterator(partition, context).asInstanceOf[Iterator[_ <: Product2[Any, Any]]])
    // [3] Stop the ShuffleWriter: on success return the MapStatus, otherwise the data is cleaned up
    val mapStatus = writer.stop(success = true)
    // [4] If the ShuffleWriter succeeded, initiate push-based shuffle (covered in more detail later)
    if (mapStatus.isDefined) {
      // Initiate shuffle push process if push based shuffle is enabled
      // The map task only takes care of converting the shuffle data file into multiple
      // block push requests. It delegates pushing the blocks to a different thread-pool -
      // ShuffleBlockPusher.BLOCK_PUSHER_POOL.
      if (dep.shuffleMergeEnabled && dep.getMergerLocs.nonEmpty && !dep.shuffleMergeFinalized) {
        manager.shuffleBlockResolver match {
          case resolver: IndexShuffleBlockResolver =>
            val dataFile = resolver.getDataFile(dep.shuffleId, mapId)
            new ShuffleBlockPusher(SparkEnv.get.conf)
              .initiateBlockPush(dataFile, writer.getPartitionLengths(), dep, partition.index)
          case _ =>
        }
      }
    }
    mapStatus.get
  }
...
}

ShuffleWriteProcessor mainly does three things:

  • [1] Get the ShuffleManager from SparkEnv, and use dep.shuffleHandle to obtain the concrete ShuffleWriter implementation.
  • [2] Call the ShuffleWriter's write method, passing in the iterator of the current RDD partition.
  • [3] When the ShuffleWriter finishes, return the MapStatus on success, or clean up the data.

So after each ShuffleMapTask finishes, it returns a MapStatus. The task result is wrapped in a CompletionEvent and sent to the DAGScheduler on the driver. If the task is a ShuffleMapTask, the DAGScheduler registers the MapOutput status information with the MapOutputTracker.
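Conceptually, the MapStatus the driver records is small: it only says where the map output lives and roughly how many bytes each reduce partition will find there. A rough sketch of its shape (simplified; the real implementations compress the per-partition sizes):

// Rough shape of a MapStatus (simplified)
trait MapStatus {
  def location: BlockManagerId              // which block manager holds the output file
  def getSizeForBlock(reduceId: Int): Long  // estimated size of one reduce partition's slice
  def mapId: Long                           // identifies this map task's output
}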

So how exactly is the map-side data written by BypassMergeSortShuffleWriter?

// BypassMergeSortShuffleWriter
@Override
public void write(Iterator<Product2<K, V>> records) throws IOException {
  assert (partitionWriters == null);
  // [1] Create the ShuffleMapOutputWriter that commits the output of all partitions of this map task
  ShuffleMapOutputWriter mapOutputWriter = shuffleExecutorComponents
      .createMapOutputWriter(shuffleId, mapId, numPartitions);
  try {
    // If there are no records, commit all partitions right away, get the partition lengths, and build the MapStatus
    if (!records.hasNext()) {
      partitionLengths = mapOutputWriter.commitAllPartitions(
        ShuffleChecksumHelper.EMPTY_CHECKSUM_VALUE).getPartitionLengths();
      mapStatus = MapStatus$.MODULE$.apply(
        blockManager.shuffleServerId(), partitionLengths, mapId);
      return;
    }
    final SerializerInstance serInstance = serializer.newInstance();
    final long openStartTime = System.nanoTime();
    // [2] Create a DiskBlockObjectWriter stream and a FileSegment for each partition
    partitionWriters = new DiskBlockObjectWriter[numPartitions];
    partitionWriterSegments = new FileSegment[numPartitions];
    for (int i = 0; i < numPartitions; i++) {
      // [2.1] For each partition, create a temp file and block id, and open a disk writer stream for it
      final Tuple2<TempShuffleBlockId, File> tempShuffleBlockIdPlusFile =
          blockManager.diskBlockManager().createTempShuffleBlock();
      final File file = tempShuffleBlockIdPlusFile._2();
      final BlockId blockId = tempShuffleBlockIdPlusFile._1();
      DiskBlockObjectWriter writer =
        blockManager.getDiskWriter(blockId, file, serInstance, fileBufferSize, writeMetrics);
      if (partitionChecksums.length > 0) {
        writer.setChecksum(partitionChecksums[i]);
      }
      partitionWriters[i] = writer;
    } 
    // Creating the file to write to and creating a disk writer both involve interacting with
    // the disk, and can take a long time in aggregate when we open many files, so should be
    // included in the shuffle write time.
    writeMetrics.incWriteTime(System.nanoTime() - openStartTime);
    // [3] Write each record to the stream of its target partition
    while (records.hasNext()) {
      final Product2<K, V> record = records.next();
      final K key = record._1();
      partitionWriters[partitioner.getPartition(key)].write(key, record._2());
    }

    // [3.1] Commit and flush each partition's stream in turn
    for (int i = 0; i < numPartitions; i++) {
      try (DiskBlockObjectWriter writer = partitionWriters[i]) {
        partitionWriterSegments[i] = writer.commitAndGet();
      }
    }
    // [4] Iterate over all partition FileSegments, concatenate them into one file, and call writeMetadataFileAndCommit to generate the index file
    partitionLengths = writePartitionedData(mapOutputWriter);
    mapStatus = MapStatus$.MODULE$.apply(
      blockManager.shuffleServerId(), partitionLengths, mapId);
  } catch (Exception e) {
    try {
      mapOutputWriter.abort(e);
    } catch (Exception e2) {
logger.error("Failed to abort the writer after failing to write map output.", e2);
      e.addSuppressed(e2);
    }
    throw e;
  }
}

To sum up, the Bypass writer has four steps:

  • [1] Create the ShuffleMapOutputWriter that commits the output of all partitions of this map task

  • [2] Create a DiskBlockObjectWriter stream and a FileSegment for each partition

    • [2.1] For each partition, create a temp file and block id, and open and maintain a DiskBlockObjectWriter stream
  • [3] Write each record to the stream of its target partition, then commit

    • [3.1] Commit and flush each partition's stream in turn
  • [4] Iterate over all partition FileSegments, concatenate them into one file, and call writeMetadataFileAndCommit to generate the index file

In other words, when writing, Bypass generates as many FileSegments per map task as there are reduce partitions. It creates temp files and keeps an open IO stream for all partitions concurrently, and finally concatenates them into a single file. With a very large partition count, that means maintaining a huge number of IO streams, which is why Bypass is limited by a partition-count threshold (spark.shuffle.sort.bypassMergeThreshold). The source code also shows that Bypass keeps no in-memory sorter or aggregation map: each record goes straight to its partition's stream (apart from the small per-stream file buffer), which is why Bypass cannot handle operators that require map-side pre-aggregation.
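As a back-of-the-envelope sketch of that cost (illustrative only, assuming the default spark.shuffle.file.buffer of 32k per DiskBlockObjectWriter):

// Illustrative only: per-map-task cost of the bypass path,
// assuming the default spark.shuffle.file.buffer of 32k per stream.
def bypassCostPerTask(numReducePartitions: Int, fileBufferKb: Int = 32): Unit = {
  val openStreams = numReducePartitions                  // one DiskBlockObjectWriter each
  val tempFiles   = numReducePartitions                  // one temp FileSegment each
  val bufferMemMb = numReducePartitions * fileBufferKb / 1024.0
  println(f"streams=$openStreams, temp files=$tempFiles, buffer memory = $bufferMemMb%.1f MB")
}

bypassCostPerTask(200)    // at the default threshold: 200 streams, ~6.3 MB of buffers
bypassCostPerTask(5000)   // far past it: 5000 streams, ~156 MB of buffers per task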

So, given that BypassMergeSortShuffleWriter belongs to sort-based shuffle, does it actually do any sorting?

Next, let's see how Bypass concatenates the per-partition FileSegments into a single file. For that, we need a closer look at how writePartitionedData is implemented.

private long[] writePartitionedData(ShuffleMapOutputWriter mapOutputWriter) throws IOException {
  // Track location of the partition starts in the output file
  if (partitionWriters != null) {
    final long writeStartTime = System.nanoTime();
    try {
      for (int i = 0; i < numPartitions; i++) {
        // [1] Get each partition's FileSegment temp file and its output ShufflePartitionWriter
        final File file = partitionWriterSegments[i].file();
        ShufflePartitionWriter writer = mapOutputWriter.getPartitionWriter(i);
        if (file.exists()) {
          if (transferToEnabled) {
            // Using WritableByteChannelWrapper to make resource closing consistent between
            // this implementation and UnsafeShuffleWriter.
            Optional<WritableByteChannelWrapper> maybeOutputChannel = writer.openChannelWrapper();
            if (maybeOutputChannel.isPresent()) {
              writePartitionedDataWithChannel(file, maybeOutputChannel.get());
            } else {
              writePartitionedDataWithStream(file, writer);
            }
          } else {
            // [2] Merge the FileSegment into the single output file
            writePartitionedDataWithStream(file, writer);
          }
          if (!file.delete()) {
logger.error("Unable to delete file for partition {}", i);
          }
        }
      }
    } finally {
      writeMetrics.incWriteTime(System.nanoTime() - writeStartTime);
    }
    partitionWriters = null;
  }
  // [3] Commit all partitions with each partition's data length; writeMetadataFileAndCommit generates the index file recording each partition's offset
  return mapOutputWriter.commitAllPartitions(getChecksumValues(partitionChecksums))
    .getPartitionLengths();
}

writePartitionedData is implemented in three steps:

  • [1] Get each partition's FileSegment temp file and its output writer
  • [2] Merge the FileSegments into a single file, via either an NIO channel transfer or a plain stream copy (see the sketch after this list)
  • [3] Commit all partitions, passing in each partition's data length; writeMetadataFileAndCommit is called to generate the index file that records each partition's offset
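For the transferToEnabled branch above, the copy from each temp file into the final output can go through an NIO channel instead of a byte-by-byte stream copy. A minimal illustration of that idea (not the actual Spark helper):

import java.io.{File, FileInputStream}
import java.nio.channels.WritableByteChannel

// Illustrative only: copy one partition's temp file into the output channel
// with FileChannel.transferTo, avoiding a manual buffer-copy loop.
def copyFileToChannel(file: File, out: WritableByteChannel): Unit = {
  val in = new FileInputStream(file).getChannel
  try {
    var pos = 0L
    while (pos < in.size()) {
      pos += in.transferTo(pos, in.size() - pos, out)
    }
  } finally {
    in.close()
  }
}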
[Figure: bypass.png]

To summarize: BypassMergeSortShuffleWriter is a hash-style implementation. There is no sorting and no in-memory record buffer; each map task produces as many FileSegments as there are partitions, which are finally merged into a single file, and an index file is generated from the partition lengths. So BypassMergeSortShuffleWriter performs well when the partition count is fairly small. Each task ultimately produces 2 files (one data file plus one index file), so the total number of files generated is 2 * M, where M is the number of map tasks.
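The index file is just a sequence of cumulative byte offsets (numPartitions + 1 of them, starting at 0), so a reducer that wants partition i only needs offsets[i] and offsets[i+1] to know which byte range of the data file to fetch. A minimal illustration of that lookup (illustrative helpers, not the actual IndexShuffleBlockResolver code):

// Illustrative only: how the partition lengths turn into index offsets
def offsetsFromLengths(partitionLengths: Array[Long]): Array[Long] =
  partitionLengths.scanLeft(0L)(_ + _)

// A reduce task reading partition i fetches the byte range [start, end) of the data file
def byteRangeForPartition(offsets: Array[Long], reduceId: Int): (Long, Long) =
  (offsets(reduceId), offsets(reduceId + 1))

val lengths = Array(120L, 0L, 45L, 300L)   // partition lengths returned by the writer
val offsets = offsetsFromLengths(lengths)  // Array(0, 120, 120, 165, 465)
println(byteRangeForPartition(offsets, 2)) // (120, 165): partition 2's slice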

That's all for today. Based on the walkthrough above, here are a few interview questions to leave you with:

  1. What is the difference between BypassMergeSortShuffleWriter and hash shuffle?
  2. Why was HashShuffleManager not kept, but instead re-implemented as just one Writer inside SortShuffleManager?

Welcome to follow the WeChat public account "Tim在路上".
