【Spark Core】TaskScheduler Source Code and Task Submission Analysis (Part 2)

Introduction

The previous post, "TaskScheduler Source Code and Task Submission Analysis (Part 1)", covered how the TaskScheduler is created. This post picks up from the submitMissingTasks function discussed in "Stage Generation and Stage Source Code Analysis" and walks through how tasks are created and dispatched.

The submitMissingTasks Function in DAGScheduler

When all of a Stage's parent stages have finished computing (or their results are already available in the cache), the DAGScheduler calls submitMissingTasks to submit the tasks that the Stage contains.
submitMissingTasks is responsible for creating these new tasks.
Spark divides the tasks executed by Executors into two kinds: ShuffleMapTask and ResultTask.
When a Stage generates its tasks, the Stage's isShuffleMap flag decides which kind to create. If the flag is true, the Stage's output will go through a shuffle and become the input of the next Stage, so ShuffleMapTasks are created; otherwise it is the result Stage, ResultTasks are created, and their results are returned to the driver. Finally, the tasks are submitted via taskScheduler.submitTasks.
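
To make the distinction concrete, the following is a minimal, self-contained sketch (the SketchTask classes and FakeMapStatus are hypothetical stand-ins, not Spark's real Task hierarchy): a shuffle map task materializes its partition's shuffle output and reports only where that output lives, while a result task returns the value of the user function directly.

    // Hypothetical stand-ins (not Spark's classes) for the two task flavours.
    object TaskKindsSketch {
      // Plays the role of MapStatus: it only records where the shuffle blocks
      // for this partition ended up and how large they are.
      case class FakeMapStatus(executorId: String, blockSizes: Array[Long])

      sealed trait SketchTask[+T] { def runTask(partitionData: Seq[Int]): T }

      // "ShuffleMapTask": writes shuffle output and returns only its location metadata.
      class SketchShuffleMapTask extends SketchTask[FakeMapStatus] {
        def runTask(partitionData: Seq[Int]): FakeMapStatus =
          FakeMapStatus("executor-1", Array(partitionData.length.toLong))
      }

      // "ResultTask": applies the user's function and returns its value to the driver.
      class SketchResultTask[U](f: Seq[Int] => U) extends SketchTask[U] {
        def runTask(partitionData: Seq[Int]): U = f(partitionData)
      }
    }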

Computation Flow

The flow of submitMissingTasks is as follows:

  1. First determine which partitions of the RDD still need to be computed. For a shuffle map stage, check whether the stage already has cached output for a partition; for the final result stage, check whether the job has already finished computing that partition.
  2. Serialize the task binary, which Executors obtain through a broadcast variable. Each task deserializes its own copy before running, so tasks running on different executors are isolated and do not affect one another.
  3. Create one task for every partition that needs computing: for a stage with a shuffle dependency, create ShuffleMapTasks; for the result stage, create ResultTasks.
  4. Make sure the tasks can be serialized. Checking here simplifies the logic, since different cluster managers use different TaskSchedulers, and it guarantees that every task in the TaskSet is serializable.
  5. Submit the TaskSet through the TaskScheduler.

Code Excerpt

Below is the part of submitMissingTasks that checks whether the stage is a shuffle map stage; some of the parameters are explained in the comments:

    val tasks: Seq[Task[_]] = if (stage.isShuffleMap) {
      partitionsToCompute.map { id =>
        val locs = getPreferredLocs(stage.rdd, id)
        val part = stage.rdd.partitions(id)
        // stage.id: the ID of this Stage
        // taskBinary: explained in more detail below
        // part: the RDD partition this task will compute
        // locs: the preferred locations for running this task
        new ShuffleMapTask(stage.id, taskBinary, part, locs)
      }
    } else {
      val job = stage.resultOfJob.get
      partitionsToCompute.map { id =>
        val p: Int = job.partitions(id)
        val part = stage.rdd.partitions(p)
        val locs = getPreferredLocs(stage.rdd, p)
        // p: the partition index, i.e. which RDD partition to read from
        // id: the index of the output partition (the reduce ID)
        new ResultTask(stage.id, taskBinary, part, locs, id)
      }
    }

Regarding the taskBinary parameter: it is the broadcast version of the serialized RDD together with its ShuffleDependency.
The RDD and its dependency information are serialized here and deserialized on the executor just before the task runs; this approach gives the individual tasks good isolation from one another.
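
As a small illustration of that serialize-broadcast-deserialize pattern, here is a standalone sketch (this is not DAGScheduler's actual code; the object name, the toy Java-serialization helpers and the "recipe" payload are invented for the example):

    import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}
    import org.apache.spark.{SparkConf, SparkContext}

    object TaskBinarySketch {
      // Toy Java-serialization helpers standing in for Spark's closure serializer.
      def serialize(o: AnyRef): Array[Byte] = {
        val bytes = new ByteArrayOutputStream()
        val out = new ObjectOutputStream(bytes)
        out.writeObject(o)
        out.close()
        bytes.toByteArray
      }

      def deserialize[T](b: Array[Byte]): T =
        new ObjectInputStream(new ByteArrayInputStream(b)).readObject().asInstanceOf[T]

      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("taskBinary-sketch"))
        // Driver side: serialize the task "recipe" once and broadcast the bytes,
        // much like DAGScheduler broadcasts the serialized (RDD, dependency) pair.
        val recipe: (String, Int) = ("pretend-rdd-lineage", 42)
        val taskBinary = sc.broadcast(serialize(recipe))

        // Executor side: every task would call taskBinary.value and deserialize its
        // own private copy before running; here it is done locally for illustration.
        val perTaskCopy = deserialize[(String, Int)](taskBinary.value)
        println(perTaskCopy)
        sc.stop()
      }
    }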

Below is the part of submitMissingTasks that submits the tasks:

    if (tasks.size > 0) {
      logInfo("Submitting " + tasks.size + " missing tasks from " + stage + " (" + stage.rdd + ")")
      stage.pendingTasks ++= tasks
      logDebug("New pending tasks: " + stage.pendingTasks)
      taskScheduler.submitTasks(
        new TaskSet(tasks.toArray, stage.id, stage.newAttemptId(), stage.jobId, properties))
      stage.latestInfo.submissionTime = Some(clock.getTimeMillis())
    } else {
      // Because we posted SparkListenerStageSubmitted earlier, we should mark
      // the stage as completed here in case there are no tasks to run
      markStageAsFinished(stage, None)
      logDebug("Stage " + stage + " is actually done; %b %d %d".format(
        stage.isAvailable, stage.numAvailableOutputs, stage.numPartitions))
    }

submitTasks in TaskSchedulerImpl

The flow of submitTasks is as follows (the source follows the list):

  1. The tasks are wrapped in a TaskSetManager (TaskSetManager is not thread-safe, so the source synchronizes around this).
  2. The TaskSetManager instance is handed to the schedulableBuilder (either a FIFOSchedulableBuilder or a FairSchedulableBuilder) and placed into the scheduling pool, where it waits to be scheduled.
  3. Submitting also starts a starvation timer: if no task has been launched yet, the timer keeps logging a warning until tasks start running.
  4. The backend's reviveOffers function is called, which sends a ReviveOffers message to the backend's driverActor; when the driverActor receives ReviveOffers, it calls the makeOffers handler.

    override def submitTasks(taskSet: TaskSet) {
      val tasks = taskSet.tasks
      logInfo("Adding task set " + taskSet.id + " with " + tasks.length + " tasks")
      this.synchronized {
        val manager = createTaskSetManager(taskSet, maxTaskFailures)
        activeTaskSets(taskSet.id) = manager
        schedulableBuilder.addTaskSetManager(manager, manager.taskSet.properties)

        if (!isLocal && !hasReceivedTask) {
          starvationTimer.scheduleAtFixedRate(new TimerTask() {
            override def run() {
              if (!hasLaunchedTask) {
                logWarning("Initial job has not accepted any resources; " +
                  "check your cluster UI to ensure that workers are registered " +
                  "and have sufficient resources")
              } else {
                this.cancel()
              }
            }
          }, STARVATION_TIMEOUT, STARVATION_TIMEOUT)
        }
        hasReceivedTask = true
      }
      backend.reviveOffers()
    }

TaskSetManager Scheduling

Once a Stage is finalized, it yields a TaskSet (a group of tasks), which corresponds to one TaskSetManager. Working back through the Stage lineage, the TaskSetManager of the earliest missing Stage is the one submitted to the scheduling pool. Inside the pool, TaskSetManagers are ordered by Job ID, so the TaskSetManagers of an earlier-submitted job are scheduled first, and within the same job the TaskSetManager with the smaller ID goes first. A TaskSetManager whose parent Stages have not finished executing is not submitted to the pool at all.
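
A minimal sketch of that FIFO ordering follows (the field names are simplified; in Spark's FIFOSchedulingAlgorithm the priority field holds the job ID):

    // Simplified stand-in for a TaskSetManager: in the FIFO pool, priority is the job ID.
    case class SketchTSM(priority: Int, stageId: Int)

    // FIFO comparison: earlier jobs first, then the lower stage ID within the same job.
    def fifoLessThan(a: SketchTSM, b: SketchTSM): Boolean =
      if (a.priority != b.priority) a.priority < b.priority else a.stageId < b.stageId

    // Run as a Scala script/REPL snippet: job 1's TaskSetManagers come before job 2's.
    val pool = Seq(SketchTSM(2, 5), SketchTSM(1, 3), SketchTSM(1, 1))
    println(pool.sortWith(fifoLessThan))
    // List(SketchTSM(1,1), SketchTSM(1,3), SketchTSM(2,5))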

reviveOffers Code

Below is the reviveOffers function of CoarseGrainedSchedulerBackend:

    override def reviveOffers() {
      driverActor ! ReviveOffers
    }

After the driverActor receives the ReviveOffers message, it calls the makeOffers handler.

DriverActor's makeOffers Function

The logic of makeOffers is:

  1. Find the idle Executors. The distribution strategy is randomized, so tasks are spread as evenly as possible across the Executors.
  2. If idle Executors exist, part of the task list is sent to the chosen Executors via launchTasks.

The SchedulerBackend (concretely a CoarseGrainedSchedulerBackend here) is responsible for handing the newly created tasks to Executors. As the launchTasks code shows, each TaskDescription is serialized before the LaunchTask message is sent.

    // Make fake resource offers on all executors
    def makeOffers() {
      launchTasks(scheduler.resourceOffers(executorDataMap.map { case (id, executorData) =>
        new WorkerOffer(id, executorData.executorHost, executorData.freeCores)
      }.toSeq))
    }

The resourceOffers Function in TaskSchedulerImpl

Tasks are distributed to the Executors in a randomized fashion, and the resource-allocation work is done by the resourceOffers function.
As mentioned for submitTasks above, each group of tasks in TaskSchedulerImpl is managed by a new TaskSetManager instance, and all TaskSetManagers are ordered by the SchedulableBuilder according to the configured scheduling policy. Inside TaskSchedulerImpl's resourceOffers function, the resourceOffer method of the currently selected TaskSetManager is called and returns TaskDescriptions containing the serialized task data; those TaskDescriptions are then dispatched by the SchedulerBackend to the ExecutorBackends for execution.

resourceOffers does three main things (the full function is shown below, followed by a simplified sketch of resourceOfferSingleTaskSet):

  1. Randomly shuffle the workers (offers) so that tasks are not always placed on the same machines.
  2. Through the TaskSetManagers, find the tasks that fit each worker and package them into TaskDescriptions.
  3. Return the resulting worker -> Array[TaskDescription] mapping.

    /**
     * Called by cluster manager to offer resources on slaves. We respond by asking our active task
     * sets for tasks in order of priority. We fill each node with tasks in a round-robin manner so
     * that tasks are balanced across the cluster.
     */
    def resourceOffers(offers: Seq[WorkerOffer]): Seq[Seq[TaskDescription]] = synchronized {
      // Mark each slave as alive and remember its hostname
      // Also track if new executor is added
      var newExecAvail = false
      // Walk through the offered worker resources and update the executor-related mappings
      for (o <- offers) {
        executorIdToHost(o.executorId) = o.host
        activeExecutorIds += o.executorId
        if (!executorsByHost.contains(o.host)) {
          executorsByHost(o.host) = new HashSet[String]()
          executorAdded(o.executorId, o.host)
          newExecAvail = true
        }
        for (rack <- getRackForHost(o.host)) {
          hostsByRack.getOrElseUpdate(rack, new HashSet[String]()) += o.host
        }
      }
      // Pick the workers in random order so tasks do not all pile up on one machine
      // Randomly shuffle offers to avoid always placing tasks on the same set of workers.
      val shuffledOffers = Random.shuffle(offers)
      // Build a list of tasks to assign to each worker.
      // One task buffer per worker
      val tasks = shuffledOffers.map(o => new ArrayBuffer[TaskDescription](o.cores))
      val availableCpus = shuffledOffers.map(o => o.cores).toArray
      // getSortedTaskSetQueue sorts the TaskSets according to the scheduling policy
      val sortedTaskSets = rootPool.getSortedTaskSetQueue
      for (taskSet <- sortedTaskSets) {
        logDebug("parentName: %s, name: %s, runningTasks: %s".format(
          taskSet.parent.name, taskSet.name, taskSet.runningTasks))
        if (newExecAvail) {
          taskSet.executorAdded()
        }
      }

      // Take each TaskSet in our scheduling order, and then offer it each node in increasing order
      // of locality levels so that it gets a chance to launch local tasks on all of them.
      // NOTE: the preferredLocality order: PROCESS_LOCAL, NODE_LOCAL, NO_PREF, RACK_LOCAL, ANY
      // Walk the shuffled workers and, via TaskSetManager.resourceOffer, give each worker the
      // task with the best locality. The locality level actually allowed also depends on how
      // long the tasks have been waiting. The main locality levels are:
      // PROCESS_LOCAL, NODE_LOCAL, RACK_LOCAL and ANY.

      // 1. Iterate over sortedTaskSets and, for each TaskSet, over its locality levels.
      // 2. More local levels are tried first; only when nothing can be launched
      //    (launchedTask is false) does the loop fall through to the next locality level.
      // 3. Inside resourceOfferSingleTaskSet the offer list is traversed repeatedly, because one
      //    taskSet.resourceOffer call claims only a single core rather than all cores at once;
      //    this helps spread the tasks of one TaskSet fairly evenly across the workers.
      // 4. Only when no suitable task can be found for any worker offer at the current locality
      //    level does the loop move on to the next locality level.
      var launchedTask = false
      for (taskSet <- sortedTaskSets; maxLocality <- taskSet.myLocalityLevels) {
        do {
          launchedTask = resourceOfferSingleTaskSet(
              taskSet, maxLocality, shuffledOffers, availableCpus, tasks)
        } while (launchedTask)
      }

      if (tasks.size > 0) {
        hasLaunchedTask = true
      }
      return tasks
    }
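
The resourceOfferSingleTaskSet helper called in the do/while loop above is not quoted here; roughly, it walks the shuffled offers and asks the TaskSetManager for at most one task per offer at the given locality level. The following is a simplified sketch (the taskId-to-executor bookkeeping maps and the error handling present in the real method are omitted):

    // Simplified sketch of resourceOfferSingleTaskSet (bookkeeping and error handling omitted).
    private def resourceOfferSingleTaskSet(
        taskSet: TaskSetManager,
        maxLocality: TaskLocality,
        shuffledOffers: Seq[WorkerOffer],
        availableCpus: Array[Int],
        tasks: Seq[ArrayBuffer[TaskDescription]]): Boolean = {
      var launchedTask = false
      for (i <- 0 until shuffledOffers.size) {
        val execId = shuffledOffers(i).executorId
        val host = shuffledOffers(i).host
        // Only consider executors that still have at least one task's worth of free cores.
        if (availableCpus(i) >= CPUS_PER_TASK) {
          // Ask the TaskSetManager for at most one task at this locality level.
          for (task <- taskSet.resourceOffer(execId, host, maxLocality)) {
            tasks(i) += task
            availableCpus(i) -= CPUS_PER_TASK
            launchedTask = true
          }
        }
      }
      launchedTask
    }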

The TaskDescription class:

    private[spark] class TaskDescription(
        val taskId: Long,
        val attemptNumber: Int,
        val executorId: String,
        val name: String,
        val index: Int,    // Index within this task's TaskSet
        _serializedTask: ByteBuffer)
      extends Serializable {

      // Because ByteBuffers are not serializable, wrap the task in a SerializableBuffer
      private val buffer = new SerializableBuffer(_serializedTask)

      def serializedTask: ByteBuffer = buffer.value

      override def toString: String = "TaskDescription(TID=%d, index=%d)".format(taskId, index)
    }

DriverActor's launchTasks Function

The launchTasks flow:

  1. launchTasks serializes the TaskDescription objects returned by resourceOffers.
  2. It sends a LaunchTask message wrapping the serializedTask to the corresponding executorActor.

Because Akka's frame size limits how large a message may be, an oversized serialized task cannot be delivered; the code below therefore checks the size first and aborts the TaskSet with an explanatory message instead of sending it (a configuration example follows the code).

    // Launch tasks returned by a set of resource offers
    def launchTasks(tasks: Seq[Seq[TaskDescription]]) {
      for (task <- tasks.flatten) {
        val ser = SparkEnv.get.closureSerializer.newInstance()
        val serializedTask = ser.serialize(task)
        if (serializedTask.limit >= akkaFrameSize - AkkaUtils.reservedSizeBytes) {
          val taskSetId = scheduler.taskIdToTaskSetId(task.taskId)
          scheduler.activeTaskSets.get(taskSetId).foreach { taskSet =>
            try {
              var msg = "Serialized task %s:%d was %d bytes, which exceeds max allowed: " +
                "spark.akka.frameSize (%d bytes) - reserved (%d bytes). Consider increasing " +
                "spark.akka.frameSize or using broadcast variables for large values."
              msg = msg.format(task.taskId, task.index, serializedTask.limit, akkaFrameSize,
                AkkaUtils.reservedSizeBytes)
              taskSet.abort(msg)
            } catch {
              case e: Exception => logError("Exception in error callback", e)
            }
          }
        }
        else {
          val executorData = executorDataMap(task.executorId)
          executorData.freeCores -= scheduler.CPUS_PER_TASK
          executorData.executorActor ! LaunchTask(new SerializableBuffer(serializedTask))
        }
      }
    }
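
If this limit is hit, the error message already names the two remedies: use broadcast variables for large values, or raise spark.akka.frameSize (a Spark 1.x setting whose value is in MB). A hedged example of the latter, to be run before creating the SparkContext (the application name and the value 128 are arbitrary):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("frame-size-example")
      // Raise the Akka frame size (in MB) so that larger serialized TaskDescriptions
      // are accepted; broadcasting large values is usually the better fix.
      .set("spark.akka.frameSize", "128")
    val sc = new SparkContext(conf)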

References

Spark Big Data Processing (《Spark大數據處理》), Gao Yanjie, China Machine Press
Spark Internals: a source-level analysis of how Tasks are submitted to Executors (《Spark技術內幕: Task向Executor提交的源碼解析》)
Spark Source Code Series (3): how a job runs (《Spark源碼系列(三)作業運行過程》)

轉(zhuǎn)載請注明作者Jason Ding及其出處
GitCafe博客主頁(http://jasonding1354.gitcafe.io/)
Github博客主頁(http://jasonding1354.github.io/)
CSDN博客(http://blog.csdn.net/jasonding1354)
簡書主頁(http://www.lxweimin.com/users/2bd9b48f6ea8/latest_articles)
Google搜索jasonding1354進入我的博客主頁
