Spark Source Code Analysis: Spark Run Modes and Principles

1. Overview of Run Modes

Spark supports a variety of run modes, which fall into the following categories (each is selected through the master URL, as sketched in the example after this list):

  • Local mode
  • Pseudo-distributed mode
  • Cluster mode
    • Standalone
    • Mesos
    • Hadoop YARN
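
A minimal, runnable illustration of how the master URL picks the mode (the host names and ports here are placeholders; local-cluster[N, cores, memoryMB] is the pseudo-distributed URL used mostly by Spark's own tests):

    import org.apache.spark.SparkConf

    // The run mode is selected purely through the master URL:
    val local      = new SparkConf().setMaster("local[4]")                // local mode with 4 worker threads
    val pseudo     = new SparkConf().setMaster("local-cluster[2,1,1024]") // pseudo-distributed: 2 workers, 1 core, 1024 MB each
    val standalone = new SparkConf().setMaster("spark://master:7077")     // standalone cluster
    val mesos      = new SparkConf().setMaster("mesos://master:5050")     // Mesos
    val yarn       = new SparkConf().setMaster("yarn")                    // Hadoop YARN (Spark 2.x style)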

Basic framework (the accompanying figure is omitted here):

2. Related Classes

  • TaskScheduler / TaskSchedulerImpl
    private[spark] trait TaskScheduler {

      private val appId = "spark-application-" + System.currentTimeMillis
    
      def rootPool: Pool
    
      def schedulingMode: SchedulingMode
    
      def start(): Unit
    
      // Invoked after system has successfully initialized (typically in spark context).
      // Yarn uses this to bootstrap allocation of resources based on preferred locations,
      // wait for slave registrations, etc.
      def postStartHook() { }
    
      // Disconnect from the cluster.
      def stop(): Unit
    
      // Submit a sequence of tasks to run.
      def submitTasks(taskSet: TaskSet): Unit
    
      // Cancel a stage.
      def cancelTasks(stageId: Int, interruptThread: Boolean): Unit
    
      /**
       * Kills a task attempt.
       *
       * @return Whether the task was successfully killed.
       */
      def killTaskAttempt(taskId: Long, interruptThread: Boolean, reason: String): Boolean
    
      // Set the DAG scheduler for upcalls. This is guaranteed to be set before submitTasks is called.
      def setDAGScheduler(dagScheduler: DAGScheduler): Unit
    
      // Get the default level of parallelism to use in the cluster, as a hint for sizing jobs.
      def defaultParallelism(): Int
    
      /**
       * Update metrics for in-progress tasks and let the master know that the BlockManager is still
       * alive. Return true if the driver knows about the given block manager. Otherwise, return false,
       * indicating that the block manager should re-register.
       */
      def executorHeartbeatReceived(
          execId: String,
          accumUpdates: Array[(Long, Seq[AccumulatorV2[_, _]])],
          blockManagerId: BlockManagerId): Boolean
    
      /**
       * Get an application ID associated with the job.
       *
       * @return An application ID
       */
      def applicationId(): String = appId
    
      /**
       * Process a lost executor
       */
      def executorLost(executorId: String, reason: ExecutorLossReason): Unit
    
      /**
       * Get an application's attempt ID associated with the job.
       *
       * @return An application's Attempt ID
       */
      def applicationAttemptId(): Option[String]
    
    }
    

    TaskScheduler mainly interacts with the DAGScheduler and is responsible for the concrete scheduling and running of tasks.
    Its core interfaces are submitTasks and cancelTasks, for submitting and cancelling tasks; a simplified sketch of submitTasks follows.
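
    The sketch below is modeled on TaskSchedulerImpl.submitTasks in Spark 2.x, heavily abridged (stage-attempt bookkeeping and the starvation-warning timer are dropped), so read it as an outline of the real method rather than the exact source:

        override def submitTasks(taskSet: TaskSet): Unit = {
          logInfo("Adding task set " + taskSet.id + " with " + taskSet.tasks.length + " tasks")
          this.synchronized {
            // Wrap the TaskSet in a TaskSetManager, which tracks the lifecycle of each task
            val manager = createTaskSetManager(taskSet, maxTaskFailures)
            // Register the manager with the root pool according to the scheduling mode (FIFO/FAIR)
            schedulableBuilder.addTaskSetManager(manager, manager.properties)
          }
          // Ask the backend for resource offers so the new tasks can be launched
          backend.reviveOffers()
        }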

  • SchedulerBackend
    Mainly used to interact with the underlying resource-scheduling system (YARN, Mesos, etc.). A sketch of how a concrete cluster backend answers reviveOffers follows the trait definition below.
    /**
     * A backend interface for scheduling systems that allows plugging in different ones under
     * TaskSchedulerImpl. We assume a Mesos-like model where the application gets resource offers as
     * machines become available and can launch tasks on them.
     */
    private[spark] trait SchedulerBackend {

      private val appId = "spark-application-" + System.currentTimeMillis

      def start(): Unit
      def stop(): Unit
      def reviveOffers(): Unit
      def defaultParallelism(): Int
    
      /**
       * Requests that an executor kills a running task.
       *
       * @param taskId Id of the task.
       * @param executorId Id of the executor the task is running on.
       * @param interruptThread Whether the executor should interrupt the task thread.
       * @param reason The reason for the task kill.
       */
      def killTask(
          taskId: Long,
          executorId: String,
          interruptThread: Boolean,
          reason: String): Unit =
        throw new UnsupportedOperationException
    
      def isReady(): Boolean = true
    
      /**
       * Get an application ID associated with the job.
       *
       * @return An application ID
       */
      def applicationId(): String = appId
    
      /**
       * Get the attempt ID for this run, if the cluster manager supports multiple
       * attempts. Applications run in client mode will not have attempt IDs.
       *
       * @return The application attempt id, if available.
       */
      def applicationAttemptId(): Option[String] = None
    
      /**
       * Get the URLs for the driver logs. These URLs are used to display the links in the UI
       * Executors tab for the driver.
       * @return Map containing the log names and their respective URLs
       */
      def getDriverLogUrls: Option[Map[String, String]] = None
    
    }
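
    For a concrete picture of what reviveOffers triggers in a cluster backend, here is a simplified sketch based on CoarseGrainedSchedulerBackend's DriverEndpoint in Spark 2.x (filtering of dying executors, task serialization, and locality handling omitted):

        // Inside CoarseGrainedSchedulerBackend.DriverEndpoint (simplified)
        override def receive: PartialFunction[Any, Unit] = {
          case ReviveOffers =>
            makeOffers()
        }

        private def makeOffers(): Unit = {
          // Build one WorkerOffer per registered executor from its host and free cores
          val workOffers = executorDataMap.map { case (id, data) =>
            new WorkerOffer(id, data.executorHost, data.freeCores)
          }.toIndexedSeq
          // Let the TaskScheduler match pending tasks to the offers, then launch them
          launchTasks(scheduler.resourceOffers(workOffers))
        }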
    
  • Executor

    All actual task execution is done by the Executor: for each task it creates a TaskRunner and hands it to a thread pool for execution, as in the sketch below.
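
    This is essentially Executor.launchTask as of Spark 2.2 (earlier versions pass the task fields individually rather than as a TaskDescription); runningTasks is a ConcurrentHashMap and threadPool a cached daemon thread pool:

        def launchTask(context: ExecutorBackend, taskDescription: TaskDescription): Unit = {
          // One TaskRunner per task; it deserializes the task, runs it, and reports status updates
          val tr = new TaskRunner(context, taskDescription)
          runningTasks.put(taskDescription.taskId, tr)
          threadPool.execute(tr)
        }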

3. Local Mode

LocalSchedulerBackend (LocalBackend in older versions) responds to the scheduler's reviveOffers request: based on the configured number of available CPU cores, it directly generates a CPU resource offer and returns it to the scheduler, then uses the in-process Executor to launch and run, one by one on a thread pool, the tasks the scheduler hands back. A simplified sketch follows.
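
A sketch of this round trip, modeled on the LocalEndpoint inside LocalSchedulerBackend in Spark 2.x (RPC plumbing and status updates omitted):

    def reviveOffers(): Unit = {
      // Offer all locally free cores as a single pseudo "worker"
      val offers = IndexedSeq(new WorkerOffer(localExecutorId, localExecutorHostname, freeCores))
      for (task <- scheduler.resourceOffers(offers).flatten) {
        freeCores -= scheduler.CPUS_PER_TASK
        // The same in-process Executor runs the task on its thread pool
        executor.launchTask(executorBackend, task)
      }
    }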

4. YARN

YARN is a framework for resource management and job scheduling that consists of three main modules: the ResourceManager (RM), the NodeManager (NM), and the ApplicationMaster (AM).

The ResourceManager is responsible for monitoring, allocating, and managing all resources; an ApplicationMaster is responsible for scheduling and coordinating one specific application; a NodeManager is responsible for maintaining an individual node.

The RM has absolute control over all applications and sole authority over resource allocation, while each AM negotiates resources with the RM and communicates with NodeManagers to execute and monitor tasks. (The figure showing the relationship between these modules is omitted here.)

YARN Cluster Mode

The flow of Spark's YARN cluster mode is as follows:

  • The local YARN Client submits the app to the YARN ResourceManager.
  • The YARN ResourceManager picks a YARN NodeManager and uses it to create an ApplicationMaster. The SparkContext effectively belongs to the app managed by this ApplicationMaster, which creates a YarnClusterScheduler and a YarnClusterSchedulerBackend and selects containers in the cluster in which to start CoarseGrainedExecutorBackend processes, which then launch the Spark executors.
  • The ApplicationMaster and the CoarseGrainedExecutorBackends communicate through remote calls.

The flow of Spark's YARN client mode is as follows (a configuration sketch follows this list):

  • The SparkContext starts locally and creates a YarnClientClusterScheduler and a YarnClientClusterSchedulerBackend.
  • The YarnClientClusterSchedulerBackend starts a yarn.Client and uses it to submit the app to the YARN ResourceManager.
  • The YARN ResourceManager picks a YARN NodeManager and uses it to select containers in the cluster in which to start CoarseGrainedExecutorBackend processes, which launch the Spark executors.
  • The YarnClientClusterSchedulerBackend and the CoarseGrainedExecutorBackends communicate through remote calls.
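
For reference, the choice between the two modes is made at submission time; a minimal Spark 2.x sketch (the app name is illustrative, and spark.submit.deployMode is the property behind spark-submit's --deploy-mode flag):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("yarn-demo")                  // illustrative name
      .setMaster("yarn")                        // cluster manager: YARN
      .set("spark.submit.deployMode", "client") // "cluster" must instead be requested via
                                                // spark-submit, since the driver then runs on YARN
    val sc = new SparkContext(conf)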

5. Standalone

  1. Start the app. During SparkContext startup, the DAGScheduler and TaskScheduler are initialized first, followed by the SparkDeploySchedulerBackend, which internally starts a DriverEndpoint and a ClientEndpoint.

  2. The ClientEndpoint registers the app with the Master (a simplified sketch of the Master side of this registration follows this list). On receiving the registration, the Master adds the app to the list of apps waiting to run, where it waits for the Master to assign workers to it.

  3. Once the app has been assigned workers, the Master tells each Worker's WorkerEndpoint to create a CoarseGrainedExecutorBackend process, inside which the execution container, the executor, is created.

  4. Once the executor is created, it notifies the Master and the DriverEndpoint that creation is complete, registers with the SparkContext, and then waits for the DriverEndpoint to send task-execution messages.

  5. The SparkContext assigns TaskSets to the CoarseGrainedExecutorBackends, which run them on their executors according to the configured scheduling policy.

  6. While processing tasks, a CoarseGrainedExecutorBackend reports task status to the DriverEndpoint, and Spark reacts according to the result. When a TaskSet finishes, the next one is sent.

  7. When the app has finished running, the SparkContext reclaims resources, tears down the Workers' CoarseGrainedExecutorBackend processes, and then deregisters itself.
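
The registration in step 2 arrives at Master.receive; a simplified sketch based on Spark 2.x (duplicate-registration handling trimmed):

    // Inside Master.receive (simplified)
    case RegisterApplication(description, driver) =>
      if (state == RecoveryState.STANDBY) {
        // A standby Master ignores the request; the client will retry against the active one
      } else {
        val app = createApplication(description, driver)
        registerApplication(app)               // adds the app to the waiting list
        persistenceEngine.addApplication(app)  // persisted for recovery after a Master failover
        driver.send(RegisteredApplication(app.id, self))
        schedule()                             // try to allocate workers/executors right away
      }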

Starting the Master:
    private[deploy] object Master extends Logging {
    val SYSTEM_NAME = "sparkMaster"
    val ENDPOINT_NAME = "Master"

      def main(argStrings: Array[String]) {
        Utils.initDaemon(log)
        val conf = new SparkConf
        // Parse the command-line arguments
        val args = new MasterArguments(argStrings, conf)
        val (rpcEnv, _, _) = startRpcEnvAndEndpoint(args.host, args.port, args.webUiPort, conf)
        rpcEnv.awaitTermination()
      }
    
      /**
       * Start the Master and return a three tuple of:
       *   (1) The Master RpcEnv
       *   (2) The web UI bound port
       *   (3) The REST server bound port, if any
       */
      def startRpcEnvAndEndpoint(
          host: String,
          port: Int,
          webUiPort: Int,
          conf: SparkConf): (RpcEnv, Int, Option[Int]) = {
        val securityMgr = new SecurityManager(conf)
        val rpcEnv = RpcEnv.create(SYSTEM_NAME, host, port, conf, securityMgr)
        val masterEndpoint = rpcEnv.setupEndpoint(ENDPOINT_NAME,
          new Master(rpcEnv, rpcEnv.address, webUiPort, securityMgr, conf))
        val portsResponse = masterEndpoint.askSync[BoundPortsResponse](BoundPortsRequest)
        (rpcEnv, portsResponse.webUIPort, portsResponse.restPort)
      }
    }
    

How the Master parses its arguments:

/**
 * Command-line parser for the master.
 */
private[master] class MasterArguments(args: Array[String], conf: SparkConf) extends Logging {
  var host = Utils.localHostName()
  var port = 7077
  var webUiPort = 8080
  var propertiesFile: String = null

  // Check for settings in environment variables
  if (System.getenv("SPARK_MASTER_IP") != null) {
    logWarning("SPARK_MASTER_IP is deprecated, please use SPARK_MASTER_HOST")
    host = System.getenv("SPARK_MASTER_IP")
  }

  if (System.getenv("SPARK_MASTER_HOST") != null) {
    host = System.getenv("SPARK_MASTER_HOST")
  }
  if (System.getenv("SPARK_MASTER_PORT") != null) {
    port = System.getenv("SPARK_MASTER_PORT").toInt
  }
  if (System.getenv("SPARK_MASTER_WEBUI_PORT") != null) {
    webUiPort = System.getenv("SPARK_MASTER_WEBUI_PORT").toInt
  }

  parse(args.toList)

  // This mutates the SparkConf, so all accesses to it must be made after this line
  propertiesFile = Utils.loadDefaultSparkProperties(conf, propertiesFile)

  if (conf.contains("spark.master.ui.port")) {
    webUiPort = conf.get("spark.master.ui.port").toInt
  }

  @tailrec
  private def parse(args: List[String]): Unit = args match {
    case ("--ip" | "-i") :: value :: tail =>
      Utils.checkHost(value, "ip no longer supported, please use hostname " + value)
      host = value
      parse(tail)

    case ("--host" | "-h") :: value :: tail =>
      Utils.checkHost(value, "Please use hostname " + value)
      host = value
      parse(tail)

    case ("--port" | "-p") :: IntParam(value) :: tail =>
      port = value
      parse(tail)

    case "--webui-port" :: IntParam(value) :: tail =>
      webUiPort = value
      parse(tail)

    case ("--properties-file") :: value :: tail =>
      propertiesFile = value
      parse(tail)

    case ("--help") :: tail =>
      printUsageAndExit(0)

    case Nil => // No-op

    case _ =>
      printUsageAndExit(1)
  }

  /**
   * Print usage and exit JVM with the given exit code.
   */
  private def printUsageAndExit(exitCode: Int) {
    // scalastyle:off println
    System.err.println(
      "Usage: Master [options]\n" +
      "\n" +
      "Options:\n" +
      "  -i HOST, --ip HOST     Hostname to listen on (deprecated, please use --host or -h) \n" +
      "  -h HOST, --host HOST   Hostname to listen on\n" +
      "  -p PORT, --port PORT   Port to listen on (default: 7077)\n" +
      "  --webui-port PORT      Port for web UI (default: 8080)\n" +
      "  --properties-file FILE Path to a custom Spark properties file.\n" +
      "                         Default is conf/spark-defaults.conf.")
    // scalastyle:on println
    System.exit(exitCode)
  }
}

The precedence of configuration sources is, from lowest to highest: system environment variables < properties in spark-defaults.conf < command-line arguments < settings made in application code. (Note one quirk visible in the code above: spark.master.ui.port is applied after parse(), so for the Master's web UI port a value from the properties file overrides the --webui-port flag.)

Starting the Worker:

private[deploy] object Worker extends Logging {
  val SYSTEM_NAME = "sparkWorker"
  val ENDPOINT_NAME = "Worker"

  def main(argStrings: Array[String]) {
    Utils.initDaemon(log)
    val conf = new SparkConf
    val args = new WorkerArguments(argStrings, conf)
    val rpcEnv = startRpcEnvAndEndpoint(args.host, args.port, args.webUiPort, args.cores,
      args.memory, args.masters, args.workDir, conf = conf)
    rpcEnv.awaitTermination()
  }

  def startRpcEnvAndEndpoint(
      host: String,
      port: Int,
      webUiPort: Int,
      cores: Int,
      memory: Int,
      masterUrls: Array[String],
      workDir: String,
      workerNumber: Option[Int] = None,
      conf: SparkConf = new SparkConf): RpcEnv = {

    // The LocalSparkCluster runs multiple local sparkWorkerX RPC Environments
    val systemName = SYSTEM_NAME + workerNumber.map(_.toString).getOrElse("")
    val securityMgr = new SecurityManager(conf)
    val rpcEnv = RpcEnv.create(systemName, host, port, conf, securityMgr)
    val masterAddresses = masterUrls.map(RpcAddress.fromSparkURL(_))
    rpcEnv.setupEndpoint(ENDPOINT_NAME, new Worker(rpcEnv, webUiPort, cores, memory,
      masterAddresses, ENDPOINT_NAME, workDir, conf, securityMgr))
    rpcEnv
  }

  def isUseLocalNodeSSLConfig(cmd: Command): Boolean = {
    val pattern = """\-Dspark\.ssl\.useNodeLocalConf\=(.+)""".r
    val result = cmd.javaOpts.collectFirst {
      case pattern(_result) => _result.toBoolean
    }
    result.getOrElse(false)
  }

  def maybeUpdateSSLSettings(cmd: Command, conf: SparkConf): Command = {
    val prefix = "spark.ssl."
    val useNLC = "spark.ssl.useNodeLocalConf"
    if (isUseLocalNodeSSLConfig(cmd)) {
      val newJavaOpts = cmd.javaOpts
          .filter(opt => !opt.startsWith(s"-D$prefix")) ++
          conf.getAll.collect { case (key, value) if key.startsWith(prefix) => s"-D$key=$value" } :+
          s"-D$useNLC=true"
      cmd.copy(javaOpts = newJavaOpts)
    } else {
      cmd
    }
  }
}

The remaining argument parsing is similar to the Master's.

Resource reclamation

The overview above said: "When the app has finished running, the SparkContext reclaims resources, tears down the Workers' CoarseGrainedExecutorBackend processes, and then deregisters itself." Let us now look at how the Master and the Executors learn that the application has exited.

The call stack is as follows (a sketch of the message handling at the bottom of this stack follows):

  • SparkContext.stop
    • DAGScheduler.stop
      • TaskSchedulerImpl.stop
        • CoarseGrainedSchedulerBackend.stop
          • CoarseGrainedSchedulerBackend.stopExecutors
            • CoarseGrainedSchedulerBackend.DriverEndpoint.receiveAndReply (StopExecutors)
              • CoarseGrainedExecutorBackend.receive (StopExecutor)
                • Executor.stop
          • CoarseGrainedSchedulerBackend.DriverEndpoint.receiveAndReply (StopDriver)
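
A simplified sketch of that message handling, based on Spark 2.x (logging and error handling omitted; note the executor shuts itself down via an internal Shutdown message rather than stopping inline):

    // CoarseGrainedSchedulerBackend.DriverEndpoint.receiveAndReply (simplified)
    case StopExecutors =>
      for ((_, executorData) <- executorDataMap) {
        executorData.executorEndpoint.send(StopExecutor)
      }
      context.reply(true)

    // CoarseGrainedExecutorBackend.receive (simplified)
    case StopExecutor =>
      stopping.set(true)
      self.send(Shutdown)   // the Shutdown handler calls executor.stop() and exits the process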