Today let's talk about what the dependency relationships between Spark RDDs are. Many people are not very familiar with the topic, so the following article summarizes the key points; hopefully you will get something out of it.
Dependency Relationships
Basic Concepts
An RDD's dependencies are the links between the RDDs produced by successive operators, somewhat like a chain of context. The relationship between two adjacent RDDs is called a dependency; the chain of relationships across multiple consecutive RDDs is called lineage.
Every RDD records its lineage, much like knowing who its parent is, and who its parent's parent is.
An RDD does not store its data. For fault tolerance, when an operator fails, Spark uses the dependencies between operators to trace back to the data source and re-execute the transformations in order, recomputing the lost data.
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object RDD_Dependence_01 {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setMaster("local").setAppName("WordCount")
    val sc = new SparkContext(sparkConf)
    val lines: RDD[String] = sc.makeRDD(List("hello world", "hello spark"))
    println(lines.toDebugString) // print the lineage recorded so far
    println("*************************")
    val words: RDD[String] = lines.flatMap(_.split(" "))
    println(words.toDebugString)
    println("*************************")
    val wordToOne = words.map(word => (word, 1))
    println(wordToOne.toDebugString)
    println("*************************")
    val wordToSum: RDD[(String, Int)] = wordToOne.reduceByKey(_ + _)
    println(wordToSum.toDebugString)
    println("*************************")
    val array: Array[(String, Int)] = wordToSum.collect()
    array.foreach(println)
    sc.stop()
  }
}
The lineage log printed by the program is as follows:
(1) ParallelCollectionRDD[0] at makeRDD at RDD_Dependence_01.scala:13 []
*************************
(1) MapPartitionsRDD[1] at flatMap at RDD_Dependence_01.scala:16 []
| ParallelCollectionRDD[0] at makeRDD at RDD_Dependence_01.scala:13 []
*************************
(1) MapPartitionsRDD[2] at map at RDD_Dependence_01.scala:19 []
| MapPartitionsRDD[1] at flatMap at RDD_Dependence_01.scala:16 []
| ParallelCollectionRDD[0] at makeRDD at RDD_Dependence_01.scala:13 []
*************************
(1) ShuffledRDD[3] at reduceByKey at RDD_Dependence_01.scala:22 []
+-(1) MapPartitionsRDD[2] at map at RDD_Dependence_01.scala:19 []
| MapPartitionsRDD[1] at flatMap at RDD_Dependence_01.scala:16 []
| ParallelCollectionRDD[0] at makeRDD at RDD_Dependence_01.scala:13 []
*************************

Wide and Narrow Dependencies
Narrow Dependencies
A narrow dependency means that each partition of the parent RDD is used by at most one partition of the child RDD.
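For example, map passes each parent partition straight through to a single child partition. Here is a minimal sketch (the variable names and data are illustrative) that inspects this through the dependencies field of the RDD API:

// map yields a narrow dependency, recorded as a OneToOneDependency.
val nums = sc.makeRDD(List(1, 2, 3, 4), numSlices = 2)
val doubled = nums.map(_ * 2)
println(doubled.dependencies)
// prints something like: List(org.apache.spark.OneToOneDependency@...)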

Wide Dependencies
A wide dependency means that a partition of the parent RDD feeds multiple partitions of the child RDD. Whenever a transformation requires a shuffle, the dependency between the parent RDD and the child RDD is necessarily wide, which is why a wide dependency is also called a shuffle dependency.
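A shuffle operator such as reduceByKey shows the opposite behavior; a minimal sketch (data is illustrative):

// reduceByKey regroups data by key across partitions, so Spark records
// a ShuffleDependency (a wide dependency).
val pairs = sc.makeRDD(List(("a", 1), ("b", 1), ("a", 1)), numSlices = 2)
val summed = pairs.reduceByKey(_ + _)
println(summed.dependencies)
// prints something like: List(org.apache.spark.ShuffleDependency@...)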

Stage Division
A DAG (Directed Acyclic Graph) is a topological graph made up of vertices and directed edges that never form a cycle. In Spark, the DAG records the sequence of RDD transformations and the stages a job is divided into.
The following DAGScheduler source code explains how stages are divided:
The handleJobSubmitted method receives the job's final RDD as a parameter and calls finalStage = createResultStage(finalRDD, func, partitions, jobId, callSite). This shows that no matter how many RDDs a job contains, a single ResultStage is always created from the final RDD.
createResultStage then calls getOrCreateParentStages(rdd: RDD[_], firstJobId: Int): List[Stage], which uses getShuffleDependencies(rdd: RDD[_]) to collect the RDD's immediate shuffle dependencies (a HashSet of ShuffleDependency objects) from a dependency chain such as A <-- B <-- C.
It then iterates over that set and calls getOrCreateShuffleMapStage(shuffleDep, firstJobId) to create (or reuse) a ShuffleMapStage for each dependency; firstJobId ties the stage to the job, and created stages are cached in shuffleIdToMapStage.
/**
 * Create a ResultStage associated with the provided jobId.
 */
private def createResultStage(
    rdd: RDD[_],
    func: (TaskContext, Iterator[_]) => _,
    partitions: Array[Int],
    jobId: Int,
    callSite: CallSite): ResultStage = {
  checkBarrierStageWithDynamicAllocation(rdd)
  checkBarrierStageWithNumSlots(rdd)
  checkBarrierStageWithRDDChainPattern(rdd, partitions.toSet.size)
  val parents = getOrCreateParentStages(rdd, jobId) // parent stages are resolved here
  val id = nextStageId.getAndIncrement()
  val stage = new ResultStage(id, rdd, func, partitions, parents, jobId, callSite)
  stageIdToStage(id) = stage
  updateJobIdStageIdMaps(jobId, stage)
  stage
}
/**
 * Get or create the list of parent stages for a given RDD. The new Stages will be created with
 * the provided firstJobId.
 */
private def getOrCreateParentStages(rdd: RDD[_], firstJobId: Int): List[Stage] = {
  getShuffleDependencies(rdd).map { shuffleDep =>
    getOrCreateShuffleMapStage(shuffleDep, firstJobId)
  }.toList
}
/**
 * Returns shuffle dependencies that are immediate parents of the given RDD.
 *
 * This function will not return more distant ancestors. For example, if C has a shuffle
 * dependency on B which has a shuffle dependency on A:
 *
 * A <-- B <-- C
 *
 * calling this function with rdd C will only return the B <-- C dependency.
 *
 * This function is scheduler-visible for the purpose of unit testing.
 */
private[scheduler] def getShuffleDependencies(
    rdd: RDD[_]): HashSet[ShuffleDependency[_, _, _]] = {
  val parents = new HashSet[ShuffleDependency[_, _, _]]
  val visited = new HashSet[RDD[_]]
  val waitingForVisit = new ListBuffer[RDD[_]]
  waitingForVisit += rdd
  while (waitingForVisit.nonEmpty) {
    val toVisit = waitingForVisit.remove(0)
    if (!visited(toVisit)) {
      visited += toVisit
      toVisit.dependencies.foreach {
        case shuffleDep: ShuffleDependency[_, _, _] =>
          parents += shuffleDep
        case dependency =>
          waitingForVisit.prepend(dependency.rdd)
      }
    }
  }
  parents
}
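Note that getShuffleDependencies returns only the immediate parent shuffle dependencies, but getOrCreateShuffleMapStage walks further up the lineage, so every shuffle in the chain still becomes a stage boundary. As a minimal, hypothetical sketch of the effect (names and data are illustrative):

// Two shuffle operators produce two ShuffleDependencies, so the job
// is divided into 2 + 1 = 3 stages.
val a = sc.makeRDD(List(("a", 1), ("b", 2), ("a", 3)))
val b = a.reduceByKey(_ + _)                                // shuffle: A <-- B
val c = b.map { case (k, v) => (v, k) }.reduceByKey(_ + _)  // shuffle: B <-- C
c.collect()  // one action: the job runs as three stages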
Task Division
An RDD job is split into the following levels: Application, Job, Stage, and Task.
Application: initializing a SparkContext creates one Application;
Job: each action operator triggers one Job;
Stage: the number of Stages equals the number of wide dependencies (ShuffleDependencies) plus 1;
Task: within a Stage, the number of partitions of the last RDD is the number of Tasks.
Note: each level of Application -> Job -> Stage -> Task is a 1-to-n relationship, as the sketch below shows.
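As a concrete, hypothetical example of this counting (the partition count and data are illustrative):

// One SparkContext => one Application.
val rdd = sc.makeRDD(List("a", "b", "a", "c"), numSlices = 2)
val counts = rdd.map(word => (word, 1)).reduceByKey(_ + _)
counts.collect()
// collect() is one action              => 1 Job
// reduceByKey adds one wide dependency => 1 + 1 = 2 Stages
// the final RDD of each stage has 2 partitions => 2 Tasks per stage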
Having read the above, do you now have a better understanding of what the dependency relationships between Spark RDDs are? Thanks for reading.