 1. Task scheduling overview (Spark job-execution internals)
   
   A quick recap of several important Spark concepts:
       A Job is bounded by an Action: each Action method triggers one Job.
       A Stage is a subset of a Job, split at RDD wide dependencies (i.e. Shuffles): each Shuffle marks a Stage boundary. Stage has two concrete subclasses:
	       ShuffleMapStage, which serves as input to other Stages.
		       The transformations inside a ShuffleMapStage (map, filter, etc.) form a pipeline and are computed
		       together, producing the map output files (the files written during the Shuffle).
	       ResultStage. A Job has exactly one ResultStage; the last Stage is the ResultStage.
	   A Task is a subset of a Stage, measured by parallelism (partition count): one task per partition.
   The three major components inside SparkContext:
       DAGScheduler (a class) is responsible for Stage-level scheduling.
       TaskScheduler (a trait with a single implementation, TaskSchedulerImpl) is responsible for Task-level scheduling.
       SchedulerBackend has multiple implementations, one per resource manager.
   Spark task scheduling is thus split into Stage-level (high-level) scheduling and Task-level (low-level) scheduling. The overall flow is shown in the figure below:

 2. Job triggering
   
   An Action triggers computation of a Job, which is handed to the DAGScheduler for submission.
   1). The Action calls sc.runJob
   2). which in turn calls dagScheduler.runJob
   def runJob[T, U: ClassTag](
      rdd: RDD[T],
      func: (TaskContext, Iterator[T]) => U,
      partitions: Seq[Int],
      resultHandler: (Int, U) => Unit): Unit = {
    if (stopped.get()) {
      throw new IllegalStateException("SparkContext has been shutdown")
    }
    val callSite = getCallSite
    val cleanedFunc = clean(func)
    logInfo("Starting job: " + callSite.shortForm)
    if (conf.getBoolean("spark.logLineage", false)) {
      logInfo("RDD's recursive dependencies:\n" + rdd.toDebugString)
    }
    dagScheduler.runJob(rdd, cleanedFunc, partitions, callSite, resultHandler, localProperties.get)
    progressBar.foreach(_.finishAll())
    rdd.doCheckpoint()
    }
   When spark.logLineage is true, the RDD's lineage is logged every time an Action is invoked.
   3). dagScheduler.runJob submits the job
   After submission the caller blocks waiting for the result, so jobs on a single thread execute serially.
   def runJob[T, U](
      rdd: RDD[T],
      func: (TaskContext, Iterator[T]) => U,
      partitions: Seq[Int],
      callSite: CallSite,
      resultHandler: (Int, U) => Unit,
      properties: Properties): Unit = {
    // record the start time
    val start = System.nanoTime
	// Submit the job. This call is asynchronous and immediately returns a JobWaiter.
    val waiter = submitJob(rdd, func, partitions, callSite, resultHandler, properties)
    ThreadUtils.awaitReady(waiter.completionFuture, Duration.Inf)
	// block until the job completes, then inspect the result
    waiter.completionFuture.value.get match {
      case scala.util.Success(_) => // job succeeded
        logInfo("Job %d finished: %s, took %f s".format
          (waiter.jobId, callSite.shortForm, (System.nanoTime - start) / 1e9))
      case scala.util.Failure(exception) => // job failed
        logInfo("Job %d failed: %s, took %f s".format
          (waiter.jobId, callSite.shortForm, (System.nanoTime - start) / 1e9))
        // SPARK-8644: Include user stack trace in exceptions coming from DAGScheduler.
		// capture the caller's stack trace
        val callerStackTrace = Thread.currentThread().getStackTrace.tail
        exception.setStackTrace(exception.getStackTrace ++ callerStackTrace)
		// rethrow the exception
        throw exception
    }
  }
 
 3. Stage division
      
   Spark task scheduling starts with DAG division, performed by the DAGScheduler.
       The DAGScheduler cuts the DAG formed by the RDD lineage, dividing a Job into Stages. The strategy:
starting from the last RDD, walk the dependencies backwards and check whether each parent dependency is a wide
dependency (i.e. a Shuffle boundary); each Shuffle marks a Stage boundary, while RDDs connected by narrow
dependencies are placed in the same Stage and can be computed as a pipeline.
       The backward search uses depth-first search.
       The last Stage is the ResultStage; all the others are ShuffleMapStages.
	   Before a Stage can be submitted, its parent Stages must have finished executing; only then is the current
Stage submitted. A Stage with no parent Stage is where submission starts.
   Overall, the DAGScheduler's job is fairly simple: it divides the DAG at the Stage level, submits Stages, and monitors their status.
   1). Important objects inside DAGScheduler
   DAGSchedulerEventProcessLoop: the event loop inside DAGScheduler, which processes events of type
DAGSchedulerEvent. DAGSchedulerEventProcessLoop extends EventLoop.
   EventLoop is an abstract class encapsulating an asynchronous message-processing strategy:
       It holds an internal message queue (a double-ended queue), eventQueue: LinkedBlockingDeque[E], used to store and consume messages.
	   It holds a consumer thread, eventThread, which drains the queue; the consumption hook is
onReceive(event: E) and the error hook is onError(e: Throwable).
       It exposes a post method for receiving messages: an incoming message is stored in the queue until consumed.
	   start starts the consumer thread. Before eventThread.start() is called, the onStart() hook is invoked to
prepare for startup.
       stop stops the consumer thread. After eventThread.interrupt() and eventThread.join(), the onStop() hook is
invoked for cleanup.
private[spark] abstract class EventLoop[E](name: String) extends Logging {
  // the event queue, a double-ended queue
  private val eventQueue: BlockingQueue[E] = new LinkedBlockingDeque[E]()
  
  // marks whether this event loop has been stopped
  private val stopped = new AtomicBoolean(false)

  // Exposed for testing.
  private[spark] val eventThread = new Thread(name) {
	// run as a daemon thread
    setDaemon(true)
    
	// the main run() method
    override def run(): Unit = {
      try {
        while (!stopped.get) {
	      // take the next event from the event queue
          val event = eventQueue.take()
          try {
			// hand it to onReceive() for processing
            onReceive(event)
          } catch { // exception handling
            case NonFatal(e) => // non-fatal exception
              try {
				// delegate to the onError() callback
                onError(e)
              } catch {
                case NonFatal(e) => logError("Unexpected error in " + name, e)
              }
          }
        }
      } catch { // interruption and other exceptions
        case ie: InterruptedException => // exit even if eventQueue is not empty
        case NonFatal(e) => logError("Unexpected error in " + name, e)
      }
    }

  }
  
  // start this event loop
  def start(): Unit = {
	// a stopped event loop cannot be restarted
    if (stopped.get) {
      throw new IllegalStateException(name + " has already been stopped")
    }
    // Call onStart before starting the event thread to make sure it happens before onReceive
    // onStart() notifies subclasses that the event loop is starting; it is implemented by subclasses
	onStart()
	// start the event-processing thread
    eventThread.start()
  }
  
  // stop this event loop
  def stop(): Unit = {
	// CAS stopped from false to true, marking the loop as stopped
    if (stopped.compareAndSet(false, true)) {
	  // interrupt the event-processing thread
      eventThread.interrupt()
	  // tracks whether onStop() has been called
      var onStopCalled = false
      try {
		// join the event-processing thread, waiting for it to finish
        eventThread.join()
        // Call onStop after the event thread exits to make sure onReceive happens before onStop
        // mark onStopCalled and invoke onStop() to notify subclasses that the loop has stopped
		onStopCalled = true
        onStop()
      } catch {
        case ie: InterruptedException =>
          Thread.currentThread().interrupt()
          if (!onStopCalled) {
            // ie is thrown from `eventThread.join()`. Otherwise, we should not call `onStop` since
            // it's already called.
			// if the join was interrupted, call onStop() directly
            onStop()
          }
      }
    } else {
      // Keep quiet to allow calling `stop` multiple times.
    }
  }

  /**
   * Put the event into the event queue. The event thread will process it later.
   * Posts an event; it is placed into the eventQueue.
   */
  def post(event: E): Unit = {
    eventQueue.put(event)
  }

  /**
   * Return if the event thread has already been started but not yet stopped.
   * Returns whether the event loop is active.
   */
  def isActive: Boolean = eventThread.isAlive

  /**
   * Invoked when `start()` is called but before the event thread starts.
   * Signals that the event loop has started; implemented by subclasses.
   */
  protected def onStart(): Unit = {}

  /**
   * Invoked when `stop()` is called and the event thread exits.
   * Signals that the event loop has stopped; implemented by subclasses.
   */
  protected def onStop(): Unit = {}

  /**
   * Invoked in the event thread when polling events from the event queue.
   *
   * Note: Should avoid calling blocking actions in `onReceive`, or the event thread will be blocked
   * and cannot process events in time. If you want to call some blocking actions, run them in
   * another thread.
   * Called when an event is received; implemented by subclasses.
   */
  protected def onReceive(event: E): Unit

  /**
   * Invoked if `onReceive` throws any non fatal error. Any non fatal error thrown from `onError`
   * will be ignored.
   * Called when processing an event throws; implemented by subclasses.
   */
  protected def onError(e: Throwable): Unit

}
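The EventLoop pattern above can be sketched outside Spark as a minimal, self-contained Scala version; MiniEventLoop and every name below are illustrative stand-ins, not Spark's API:

```scala
import java.util.concurrent.LinkedBlockingDeque
import java.util.concurrent.atomic.{AtomicBoolean, AtomicInteger}
import scala.util.control.NonFatal

// Minimal sketch of the EventLoop pattern: a blocking queue plus a daemon
// consumer thread that hands each event to onReceive().
abstract class MiniEventLoop[E](name: String) {
  private val eventQueue = new LinkedBlockingDeque[E]()
  private val stopped = new AtomicBoolean(false)

  private val eventThread = new Thread(name) {
    setDaemon(true)
    override def run(): Unit =
      try {
        while (!stopped.get) {
          val event = eventQueue.take() // blocks until an event arrives
          try onReceive(event)
          catch { case NonFatal(e) => onError(e) }
        }
      } catch { case _: InterruptedException => /* exit on stop() */ }
  }

  def start(): Unit = eventThread.start()
  def stop(): Unit = { stopped.set(true); eventThread.interrupt(); eventThread.join() }
  def post(event: E): Unit = eventQueue.put(event)

  protected def onReceive(event: E): Unit
  protected def onError(e: Throwable): Unit = ()
}

// Usage: sum the posted integers on the consumer thread.
val sum = new AtomicInteger(0)
val loop = new MiniEventLoop[Int]("demo-loop") {
  override protected def onReceive(event: Int): Unit = sum.addAndGet(event)
}
loop.start()
(1 to 10).foreach(loop.post)
Thread.sleep(200) // crude wait for the queue to drain
loop.stop()
println(sum.get) // prints 55 once the queue has drained
```

The design point mirrors the real class: producers only ever touch `post`, all processing happens on the single `eventThread`, so `onReceive` implementations need no locking against each other.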
   
   JobWaiter implements the JobListener interface and waits for a job to finish inside the DAGScheduler.
   As each Task finishes, its result is passed through a callback to the handler function resultHandler.
   The job is considered complete once all of its Tasks have completed.
private[spark] class JobWaiter[T](
    dagScheduler: DAGScheduler,
    val jobId: Int,
    totalTasks: Int,
    resultHandler: (Int, T) => Unit)
  extends JobListener with Logging {
  // the number of Tasks of the waited-on Job that have already finished
  private val finishedTasks = new AtomicInteger(0)
  // If the job is finished, this will be its result. In the case of 0 task jobs (e.g. zero
  // partition RDDs), we set the jobResult directly to JobSucceeded.
  // represents the result of the Job once it completes
  // If totalTasks is zero there are no Tasks to run, so the promise is completed as a Success immediately.
  private val jobPromise: Promise[Unit] =
    if (totalTasks == 0) Promise.successful(()) else Promise()
  
  // whether the Job has finished
  def jobFinished: Boolean = jobPromise.isCompleted
  
  // the future backing jobPromise
  def completionFuture: Future[Unit] = jobPromise.future

  /**
   * Sends a signal to the DAGScheduler to cancel the job. The cancellation itself is handled
   * asynchronously. After the low level scheduler cancels all the tasks belonging to this job, it
   * will fail this job with a SparkException.
   * Cancels execution of the Job.
   */
  def cancel() {
    dagScheduler.cancelJob(jobId, None)
  }
  
  // invoked each time a Task succeeds
  override def taskSucceeded(index: Int, result: Any): Unit = {
    // resultHandler call must be synchronized in case resultHandler itself is not thread safe.
    synchronized { // callback under lock
      resultHandler(index, result.asInstanceOf[T])
    }
	// increment the finished-Task count; once all Tasks are done, complete jobPromise with success()
    if (finishedTasks.incrementAndGet() == totalTasks) {
      jobPromise.success(())
    }
  }
  // invoked when the Job fails
  override def jobFailed(exception: Exception): Unit = {
	// complete jobPromise with a Failure  
    if (!jobPromise.tryFailure(exception)) {
      logWarning("Ignore failure", exception)
    }
  }

}
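The JobWaiter mechanism (a Promise completed when the last task reports in, with the caller blocked on the corresponding Future) can be sketched in plain Scala; MiniJobWaiter and its members are illustrative stand-ins, not Spark's API:

```scala
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration.Duration

// Minimal sketch of the JobWaiter idea: a Promise completed once all tasks
// have reported success, plus a per-task result handler invoked under a lock.
class MiniJobWaiter[T](totalTasks: Int, resultHandler: (Int, T) => Unit) {
  private val finishedTasks = new AtomicInteger(0)
  // zero-task jobs succeed immediately, mirroring JobWaiter
  private val jobPromise: Promise[Unit] =
    if (totalTasks == 0) Promise.successful(()) else Promise()

  def completionFuture: Future[Unit] = jobPromise.future

  def taskSucceeded(index: Int, result: T): Unit = {
    synchronized { resultHandler(index, result) } // handler may not be thread safe
    if (finishedTasks.incrementAndGet() == totalTasks) jobPromise.success(())
  }

  def jobFailed(exception: Exception): Unit = jobPromise.tryFailure(exception)
}

// Usage: simulate three tasks completing and block until the "job" is done,
// just as dagScheduler.runJob blocks on waiter.completionFuture.
val results = new Array[Int](3)
val waiter = new MiniJobWaiter[Int](3, (i, r) => results(i) = r)
(0 until 3).foreach(i => waiter.taskSucceeded(i, i * 10))
Await.ready(waiter.completionFuture, Duration.Inf)
println(results.mkString(",")) // prints 0,10,20
```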
   2). dagScheduler.submitJob posts the message
   def submitJob[T, U](
      rdd: RDD[T],
      func: (TaskContext, Iterator[T]) => U,
      partitions: Seq[Int],
      callSite: CallSite,
      resultHandler: (Int, U) => Unit,
      properties: Properties): JobWaiter[U] = {
    // Check to make sure we are not launching a task on a partition that does not exist.
    // the maximum partition count of the current Job
	val maxPartitions = rdd.partitions.length
    // check for non-existent partitions and throw if any are found
	partitions.find(p => p >= maxPartitions || p < 0).foreach { p =>
      throw new IllegalArgumentException(
        "Attempting to access a non-existent partition: " + p + ". " +
          "Total number of partitions: " + maxPartitions)
    }
    // generate the jobId for the next Job
    val jobId = nextJobId.getAndIncrement()
	/*
	 * If the Job has zero partitions, create and return a JobWaiter with totalTasks = 0.
	 * Per the JobWaiter implementation, such a waiter's jobPromise is completed as a Success immediately.
	 */
    if (partitions.size == 0) {
      // Return immediately if the job is running 0 tasks
      return new JobWaiter[U](this, jobId, 0, resultHandler)
    }
    // the partition count is greater than 0
    assert(partitions.size > 0)
    val func2 = func.asInstanceOf[(TaskContext, Iterator[_]) => _]
	// create the JobWaiter
    val waiter = new JobWaiter(this, jobId, partitions.size, resultHandler)
	/*
	 * Wrap the JobWaiter in a JobSubmitted message and post it to DAGSchedulerEventProcessLoop;
	 * the message is eventually handled by DAGScheduler's handleJobSubmitted() method.
	 */
    eventProcessLoop.post(JobSubmitted(
      jobId, rdd, func2, partitions.toArray, callSite, waiter,
      SerializationUtils.clone(properties)))
    waiter
  }
   3). The event loop invokes dagScheduler.handleJobSubmitted
   // handles Job submission
   private[scheduler] def handleJobSubmitted(jobId: Int,
      finalRDD: RDD[_],
      func: (TaskContext, Iterator[_]) => _,
      partitions: Array[Int],
      callSite: CallSite,
      listener: JobListener,
      properties: Properties) {
    var finalStage: ResultStage = null
    try {
      // New stage creation may throw an exception if, for example, jobs are run on a
      // HadoopRDD whose underlying HDFS files have been deleted.
	  // create the ResultStage
      finalStage = createResultStage(finalRDD, func, partitions, jobId, callSite)
    } catch {
      case e: BarrierJobSlotsNumberCheckFailed =>
        logWarning(s"The job $jobId requires to run a barrier stage that requires more slots " +
          "than the total number of slots in the cluster currently.")
        // If jobId doesn't exist in the map, Scala converts its value null to 0: Int automatically.
        val numCheckFailures = barrierJobIdToNumTasksCheckFailures.compute(jobId,
          new BiFunction[Int, Int, Int] {
            override def apply(key: Int, value: Int): Int = value + 1
          })
        if (numCheckFailures <= maxFailureNumTasksCheck) {
          messageScheduler.schedule(
            new Runnable {
              override def run(): Unit = eventProcessLoop.post(JobSubmitted(jobId, finalRDD, func,
                partitions, callSite, listener, properties))
            },
            timeIntervalNumTasksCheck,
            TimeUnit.SECONDS
          )
          return
        } else {
          // Job failed, clear internal data.
          barrierJobIdToNumTasksCheckFailures.remove(jobId)
          listener.jobFailed(e)
          return
        }

      case e: Exception =>
        logWarning("Creating new stage failed due to exception - job: " + jobId, e)
        listener.jobFailed(e)
        return
    }
    // Job submitted, clear internal data.
    barrierJobIdToNumTasksCheckFailures.remove(jobId)
    
	// create the ActiveJob
    val job = new ActiveJob(jobId, finalStage, callSite, listener, properties)
	// clear the cached partition locations of all RDDs
    clearCacheLocs()
    logInfo("Got job %s (%s) with %d output partitions".format(
      job.jobId, callSite.shortForm, partitions.length))
    logInfo("Final stage: " + finalStage + " (" + finalStage.name + ")")
    logInfo("Parents of final stage: " + finalStage.parents)
    logInfo("Missing parents: " + getMissingParentStages(finalStage))
    
	// record the Job submission time
    val jobSubmissionTime = clock.getTimeMillis()
	// record the jobId-to-ActiveJob mapping
    jobIdToActiveJob(jobId) = job
    activeJobs += job
	// set the submitted Job as finalStage's ActiveJob
    finalStage.setActiveJob(job)
	// collect the StageInfo objects of all the Job's Stages
    val stageIds = jobIdToStageIds(jobId).toArray
    val stageInfos = stageIds.flatMap(id => stageIdToStage.get(id).map(_.latestInfo))
    // post a SparkListenerJobStart event to the event bus
	listenerBus.post(
      SparkListenerJobStart(job.jobId, jobSubmissionTime, stageInfos, properties))
	// submit the ResultStage
    submitStage(finalStage)
  }
   4).handleJobSubmitted => createResultStage
   private def createResultStage(
      rdd: RDD[_],
      func: (TaskContext, Iterator[_]) => _,
      partitions: Array[Int],
      jobId: Int,
      callSite: CallSite): ResultStage = {
    checkBarrierStageWithDynamicAllocation(rdd)
    checkBarrierStageWithNumSlots(rdd)
    checkBarrierStageWithRDDChainPattern(rdd, partitions.toSet.size)
	/**
      * Collect the list of all parent Stages. Parent Stages are the Stages corresponding to wide
      * (ShuffleDependency) dependencies, covering these cases:
      * 1. ShuffleDependencies that are direct or indirect dependencies of the current RDD and whose
      * Stages have already been registered.
      * 2. ShuffleDependencies that are direct or indirect dependencies of the current RDD with no
      * registered Stage yet: starting from each such ShuffleDependency's RDD, find every further
      * direct or indirect ShuffleDependency without a registered Stage, and create and register
      * Stages for all of them.
      * 3. Finally, create and register a Stage for the ShuffleDependency itself.
      */
    val parents = getOrCreateParentStages(rdd, jobId)
    // generate the ResultStage's id
	val id = nextStageId.getAndIncrement()
    // create the ResultStage object
	val stage = new ResultStage(id, rdd, func, partitions, parents, jobId, callSite)
    // record the id-to-Stage mapping
	stageIdToStage(id) = stage
	// update the mapping from the Job's id to the ResultStage and all of its ancestors
    updateJobIdStageIdMaps(jobId, stage)
    stage
  }

  /**
   * Get or create the list of parent stages for a given RDD.  The new Stages will be created with
   * the provided firstJobId.
   * First collects the given RDD's direct shuffle dependencies, then gets or creates a Stage for each.
   */
  private def getOrCreateParentStages(rdd: RDD[_], firstJobId: Int): List[Stage] = {
    // collect all of the RDD's direct ShuffleDependencies
	getShuffleDependencies(rdd)
	  .map { shuffleDep =>
	  // get or create the ShuffleMapStage corresponding to each ShuffleDependency
      getOrCreateShuffleMapStage(shuffleDep, firstJobId)
    }.toList // return the resulting list of ShuffleMapStages
  }
  
  // collect all of the RDD's direct ShuffleDependencies, walking through narrow dependencies
  private[scheduler] def getShuffleDependencies(
      rdd: RDD[_]): HashSet[ShuffleDependency[_, _, _]] = {
    val parents = new HashSet[ShuffleDependency[_, _, _]]
    val visited = new HashSet[RDD[_]]
    val waitingForVisit = new ArrayStack[RDD[_]]
    // push the starting rdd onto the waitingForVisit stack
	waitingForVisit.push(rdd)
    // while the waitingForVisit stack is non-empty
	while (waitingForVisit.nonEmpty) {
      // pop the top RDD
	  val toVisit = waitingForVisit.pop()
      // check whether it has already been visited
	  if (!visited(toVisit)) { // not visited yet
        // record it in the visited set
		visited += toVisit
		// iterate over all of its dependencies
        toVisit.dependencies.foreach {
          // a ShuffleDependency is recorded into the parents set
		  case shuffleDep: ShuffleDependency[_, _, _] =>
            parents += shuffleDep
          // any other Dependency: push its RDD onto the waitingForVisit stack
		  case dependency =>
            waitingForVisit.push(dependency.rdd)
        }
      }
    }
    parents
  }
   5). Submitting the ResultStage
   submitStage walks up from the ResultStage passed in, collecting parent stages level by level; it then starts
   from the most upstream stage, calling TaskScheduler.submitTasks to submit each stage's task set, and only
   submits the ResultStage's task set last.
   It first calls getMissingParentStages to check for unsubmitted parent stages. If there are any, it recursively
submits them and adds the current stage to waitingStages; the same logic applies in turn to each parent stage.
   So whenever missing contains unsubmitted parent stages, the parents are submitted first.
   Otherwise it calls submitMissingTasks(stage, jobId.get) with the stage and its jobId. This is the method that
turns a stage into a TaskSet, which the DAGScheduler then hands to the TaskScheduler for execution.
  
  private def submitStage(stage: Stage) {
    // find the ID of the Job this Stage belongs to
	val jobId = activeJobForStage(stage)
    if (jobId.isDefined) {
      logDebug(s"submitStage($stage (name=${stage.name};" +
        s"jobs=${stage.jobIds.toSeq.sorted.mkString(",")}))")
      // the current Stage has not been submitted yet		
      if (!waitingStages(stage) && !runningStages(stage) && !failedStages(stage)) {
        // collect all of the current Stage's unsubmitted parent Stages
		val missing = getMissingParentStages(stage).sortBy(_.id)
        logDebug("missing: " + missing)
        if (missing.isEmpty) { // no unsubmitted parent Stages
          logInfo("Submitting " + stage + " (" + stage.rdd + "), which has no missing parents")
          // submit all of the current Stage's unsubmitted Tasks
		  submitMissingTasks(stage, jobId.get)
        } else { // there are unsubmitted parent Stages
          // submit every unsubmitted parent Stage
		  for (parent <- missing) {
            submitStage(parent)
          }
		  // and add the current Stage to waitingStages; it must wait for all its parents to finish
          waitingStages += stage
        }
      }
    } else { // no Job ID is defined; abort the current Stage
      abortStage(stage, "No active job for stage " + stage.id, None)
    }
  }
  
  // collect all of the Stage's unsubmitted parent Stages
  private def getMissingParentStages(stage: Stage): List[Stage] = {
    val missing = new HashSet[Stage]
    val visited = new HashSet[RDD[_]]
    // We are manually maintaining a stack here to prevent StackOverflowError
    // caused by recursively visiting
    val waitingForVisit = new ArrayStack[RDD[_]]
    // define the visit() method
	def visit(rdd: RDD[_]) {
      // check whether the RDD has already been visited
	  if (!visited(rdd)) { // not visited yet
	    // record it in the visited set
        visited += rdd
		/*
		 * Get the TaskLocation sequences of the RDD's partitions and check whether any is Nil.
		 * A partition without a TaskLocation sequence means that some partition task of an
		 * upstream ShuffleMapStage of the current Stage has not been executed yet.
		 */
        val rddHasUncachedPartitions = getCacheLocs(rdd).contains(Nil)
        if (rddHasUncachedPartitions) { // some TaskLocation sequence is Nil
          // iterate over all of the rdd's dependencies
		  for (dep <- rdd.dependencies) {
            dep match {
              case shufDep: ShuffleDependency[_, _, _] => // a ShuffleDependency
			    // get or create the ShuffleMapStage for this ShuffleDependency
                val mapStage = getOrCreateShuffleMapStage(shufDep, stage.firstJobId)
                if (!mapStage.isAvailable) {
                  // record it in the missing set
				  missing += mapStage
                }
              case narrowDep: NarrowDependency[_] => // a NarrowDependency
                // push the narrow dependency's rdd onto the waitingForVisit stack
				waitingForVisit.push(narrowDep.rdd)
            }
          }
        }
      }
    }
    waitingForVisit.push(stage.rdd)
    while (waitingForVisit.nonEmpty) {
      visit(waitingForVisit.pop())
    }
    missing.toList
  }
   6). Submitting Tasks
           Determine which partitions of the RDD need computing
   For a Shuffle-type stage, check whether the stage already has cached results for a partition; for the final
Result-type stage, check whether the partition has already been computed for this Job. The reason for not simply
submitting all tasks is that when one task of a stage fails while the others succeed, only the partition of the
failed task needs to be recomputed, not every partition.
		   Serialize the task binary
   Executors obtain it through a broadcast variable, and each task deserializes it first when it runs.
		   Generate one task for every partition that needs computing
   A ShuffleMapStage's tasks are all ShuffleMapTasks, and a ResultStage's are all ResultTasks. Task extends
Serializable, so every task must be serializable.
		   Submit the tasks
   The tasks are first wrapped in a TaskSet object, which is then submitted via TaskScheduler.submitTasks.
   /** Called when stage's parents are available and we can now do its task. */
  private def submitMissingTasks(stage: Stage, jobId: Int) {
    logDebug("submitMissingTasks(" + stage + ")")
    // Clear the Stage's pendingPartitions so that the partitions still to compute can be recorded.
    stage.pendingPartitions.clear()
	
    // First figure out the indexes of partition ids to compute.
    // find the indexes of the Stage's partitions whose computation has not finished yet
	val partitionsToCompute: Seq[Int] = stage.findMissingPartitions()

    // Use the scheduling pool, job group, description, etc. from an ActiveJob associated
    // with this Stage
    // get the ActiveJob's properties: the current Job's scheduling pool, group, description, etc.
	val properties = jobIdToActiveJob(jobId).properties

    // add the stage to runningStages to mark it as running
	runningStages += stage
    // SparkListenerStageSubmitted should be posted before testing whether tasks are
    // serializable. If tasks are not serializable, a SparkListenerStageCompleted event
    // will be posted, which should always come after a corresponding SparkListenerStageSubmitted
    // event.
	// start output-commit coordination (e.g. commits to HDFS) for the current Stage
    stage match {
      case s: ShuffleMapStage =>
        outputCommitCoordinator.stageStart(stage = s.id, maxPartitionId = s.numPartitions - 1)
      case s: ResultStage =>
        outputCommitCoordinator.stageStart(
          stage = s.id, maxPartitionId = s.rdd.partitions.length - 1)
    }
	// get the preferred locations of every partition that still needs computing
    val taskIdToLocations: Map[Int, Seq[TaskLocation]] = try {
      stage match {
        case s: ShuffleMapStage =>
          partitionsToCompute.map { id => (id, getPreferredLocs(stage.rdd, id))}.toMap
        case s: ResultStage =>
          partitionsToCompute.map { id =>
            val p = s.partitions(id)
            (id, getPreferredLocs(stage.rdd, p))
          }.toMap
      }
    } catch {
      // on any exception, record a new stage attempt via makeNewStageAttempt(), then abort the stage
	  case NonFatal(e) =>
        stage.makeNewStageAttempt(partitionsToCompute.size)
        listenerBus.post(SparkListenerStageSubmitted(stage.latestInfo, properties))
        abortStage(stage, s"Task creation failed: $e\n${Utils.exceptionString(e)}", Some(e))
        runningStages -= stage
        return
    }
    
	// start a new attempt of the Stage
    stage.makeNewStageAttempt(partitionsToCompute.size, taskIdToLocations.values.toSeq)

    // If there are tasks to execute, record the submission time of the stage. Otherwise,
    // post the event without the submission time, which indicates that this stage was
    // skipped.
    if (partitionsToCompute.nonEmpty) {
      stage.latestInfo.submissionTime = Some(clock.getTimeMillis())
    }
	// post a SparkListenerStageSubmitted event to the event bus
    listenerBus.post(SparkListenerStageSubmitted(stage.latestInfo, properties))

    // TODO: Maybe we can keep the taskBinary in Stage to avoid serializing it multiple times.
    // Broadcasted binary for the task, used to dispatch tasks to executors. Note that we broadcast
    // the serialized copy of the RDD and for each task we will deserialize it, which means each
    // task gets a different copy of the RDD. This provides stronger isolation between tasks that
    // might modify state of objects referenced in their closures. This is necessary in Hadoop
    // where the JobConf/Configuration object is not thread-safe.
    // serialize the task binary
	var taskBinary: Broadcast[Array[Byte]] = null
    var partitions: Array[Partition] = null
    try {
      // For ShuffleMapTask, serialize and broadcast (rdd, shuffleDep).
      // For ResultTask, serialize and broadcast (rdd, func).
      var taskBinaryBytes: Array[Byte] = null
      // taskBinaryBytes and partitions are both affected by the checkpoint status. We need
      // this synchronization in case another concurrent job is checkpointing this RDD, so we get a
      // consistent view of both variables.
      RDDCheckpointData.synchronized {
        taskBinaryBytes = stage match {
          // for a ShuffleMapStage, serialize the Stage's rdd together with its ShuffleDependency
		  case stage: ShuffleMapStage =>
            JavaUtils.bufferToArray(
              closureSerializer.serialize((stage.rdd, stage.shuffleDep): AnyRef))
          // for a ResultStage, serialize the Stage's rdd together with the partition function func
		  case stage: ResultStage =>
            JavaUtils.bufferToArray(closureSerializer.serialize((stage.rdd, stage.func): AnyRef))
        }

        partitions = stage.rdd.partitions
      }

      // broadcast the serialized task binary
	  taskBinary = sc.broadcast(taskBinaryBytes)
    } catch {
      // In the case of a failure during serialization, abort the stage.
      case e: NotSerializableException =>
        abortStage(stage, "Task not serializable: " + e.toString, Some(e))
        runningStages -= stage

        // Abort execution
        return
      case e: Throwable =>
        abortStage(stage, s"Task serialization failed: $e\n${Utils.exceptionString(e)}", Some(e))
        runningStages -= stage

        // Abort execution
        return
    }
    
	// build the sequence of Tasks
    val tasks: Seq[Task[_]] = try {
      val serializedTaskMetrics = closureSerializer.serialize(stage.latestInfo.taskMetrics).array()
      stage match {
        case stage: ShuffleMapStage => // create one ShuffleMapTask per partition to compute
          stage.pendingPartitions.clear()
          partitionsToCompute.map { id =>
            // the partition's preferred-location sequence
			val locs = taskIdToLocations(id)
            // the RDD partition
			val part = partitions(id)
            stage.pendingPartitions += id
            // create the ShuffleMapTask
			new ShuffleMapTask(stage.id, stage.latestInfo.attemptNumber,
              taskBinary, part, locs, properties, serializedTaskMetrics, Option(jobId),
              Option(sc.applicationId), sc.applicationAttemptId, stage.rdd.isBarrier())
          }

        case stage: ResultStage => // create one ResultTask per partition of the ResultStage
          partitionsToCompute.map { id =>
            val p: Int = stage.partitions(id)
            // the RDD partition
			val part = partitions(p)
            // the partition's preferred-location sequence
			val locs = taskIdToLocations(id)
            // create the ResultTask
			new ResultTask(stage.id, stage.latestInfo.attemptNumber,
              taskBinary, part, locs, id, properties, serializedTaskMetrics,
              Option(jobId), Option(sc.applicationId), sc.applicationAttemptId,
              stage.rdd.isBarrier())
          }
      }
    } catch {
      case NonFatal(e) =>
        // abort the submission on any error
		abortStage(stage, s"Task creation failed: $e\n${Utils.exceptionString(e)}", Some(e))
        runningStages -= stage
        return
    }

    if (tasks.size > 0) { // there are Tasks to run
      logInfo(s"Submitting ${tasks.size} missing tasks from $stage (${stage.rdd}) (first 15 " +
        s"tasks are for partitions ${tasks.take(15).map(_.partitionId)})")
      // record the submitted partitions in pendingPartitions, marking them as awaiting processing
      stage.pendingPartitions ++= tasks.map(_.partitionId)
      logDebug("New pending partitions: " + stage.pendingPartitions)
      // wrap this batch of Tasks in a TaskSet and submit it via TaskScheduler.submitTasks
	  taskScheduler.submitTasks(new TaskSet(
        tasks.toArray, stage.id, stage.latestInfo.attemptNumber, jobId, properties))
	  // record the latest submission time
      stage.latestInfo.submissionTime = Some(clock.getTimeMillis())
    } else { // no Tasks were created
      // Because we posted SparkListenerStageSubmitted earlier, we should mark
      // the stage as completed here in case there are no tasks to run
      // mark the current Stage as finished
	  markStageAsFinished(stage, None)

      stage match {
        case stage: ShuffleMapStage =>
          logDebug(s"Stage ${stage} is actually done; " +
              s"(available: ${stage.isAvailable}," +
              s"available outputs: ${stage.numAvailableOutputs}," +
              s"partitions: ${stage.numPartitions})")
          markMapStageJobsAsFinished(stage)
        case stage : ResultStage =>
          logDebug(s"Stage ${stage} is actually done; (partitions: ${stage.numPartitions})")
      }
	  // submit the current Stage's waiting child Stages
      submitWaitingChildStages(stage)
    }
  }
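The taskBinary comment above notes that each task deserializes its own copy of the broadcast bytes, giving tasks isolation from one another. That property can be demonstrated with plain JVM serialization; Spark actually uses its closure serializer, and Conf here is a made-up example class:

```scala
import java.io._

// Sketch of why per-task deserialization isolates tasks: deserializing the
// same bytes twice yields two independent objects, so one task mutating its
// copy cannot affect another task's copy.
case class Conf(var entries: Map[String, String])

def serialize(obj: AnyRef): Array[Byte] = {
  val bos = new ByteArrayOutputStream()
  val out = new ObjectOutputStream(bos)
  out.writeObject(obj)
  out.close()
  bos.toByteArray
}

def deserialize[T](bytes: Array[Byte]): T = {
  val in = new ObjectInputStream(new ByteArrayInputStream(bytes))
  in.readObject().asInstanceOf[T]
}

// the bytes that would be broadcast as the task binary
val taskBinaryBytes = serialize(Conf(Map("k" -> "v")))

// "task 1" and "task 2" each deserialize their own copy
val copy1 = deserialize[Conf](taskBinaryBytes)
val copy2 = deserialize[Conf](taskBinaryBytes)
copy1.entries = Map("k" -> "mutated") // task 1 mutates its copy

println(copy2.entries("k")) // prints v — task 2's copy is unaffected
```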