This blog series distills and shares cases drawn from real commercial environments, and offers tuning advice and cluster capacity-planning guidance for commercial Spark applications. Please stay tuned. Copyright notice: this Spark tuning series is copyrighted by the author (Qin Kaixin). Reproduction is prohibited; learning from it is welcome.
Spark in Commercial Environments: Practice and Advanced Tuning Series
- Spark in Commercial Environments: Spark's built-in RPC communication mechanism and the RpcEnv infrastructure
- Spark in Commercial Environments: an analysis of the Spark event-listener bus flow
- Spark in Commercial Environments: a dissection of the underlying architecture of Spark's storage subsystem
- Spark in Commercial Environments: an analysis of Spark's underlying MessageLoop thread execution flow
1. Relationships Among the Spark Storage Subsystem Components
BlockInfoManager mainly provides read/write lock control and sits just below BlockManager in the hierarchy. A typical Spark read or write first goes through BlockManager, which consults BlockInfoManager about lock contention; only then does the call reach DiskStore or MemoryStore, which in turn call DiskBlockManager to resolve the data-to-location mapping, or MemoryManager to manage the memory pools' soft boundaries and grant memory requests.
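To make that call order concrete, here is a minimal, self-contained toy model of the layering. It is an assumption-laden sketch, not Spark source: the classes `BlockInfoManager`, `BlockManager`, and `BlockId` below are simplified stand-ins, and the lock check is reduced to a set lookup.

```scala
import scala.collection.mutable

// Toy model (NOT Spark source) of the storage-layer call order:
// BlockManager -> BlockInfoManager (lock check) -> MemoryStore / DiskStore.
object StorageLayeringSketch {
  final case class BlockId(name: String)

  class BlockInfoManager {
    private val readLocked = mutable.Set[BlockId]()
    /** Grant a read lock unless a writer currently holds the block (contention check). */
    def lockForReading(id: BlockId, writers: Set[BlockId]): Boolean =
      if (writers.contains(id)) false else { readLocked += id; true }
    def unlock(id: BlockId): Unit = readLocked -= id
  }

  class BlockManager(infoMgr: BlockInfoManager,
                     memory: mutable.Map[BlockId, Array[Byte]],
                     disk: mutable.Map[BlockId, Array[Byte]]) {
    private val writersHoldingLock = Set.empty[BlockId]

    def getLocal(id: BlockId): Option[Array[Byte]] = {
      // 1. Ask BlockInfoManager about lock contention before touching any store.
      if (!infoMgr.lockForReading(id, writersHoldingLock)) return None
      try {
        // 2. Prefer the in-memory store, then fall back to disk.
        memory.get(id).orElse(disk.get(id))
      } finally infoMgr.unlock(id)
    }
  }

  def main(args: Array[String]): Unit = {
    val mgr = new BlockManager(new BlockInfoManager,
      mutable.Map(BlockId("rdd_0_0") -> Array[Byte](1, 2, 3)), mutable.Map.empty)
    println(mgr.getLocal(BlockId("rdd_0_0")).map(_.length)) // Some(3)
  }
}
```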
1.1 Component relationship among Driver, Executor, SparkEnv, and BlockManager:
Both the Driver and each Executor own a SparkEnv for task execution, and every SparkEnv contains a BlockManager responsible for storage services. As a high-level abstraction, BlockManagers communicate with one another through the RpcEnv, the ShuffleClient, and the BlockTransferService.
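You can surface this wiring directly. The snippet below is a minimal driver-side demo, assuming a local-mode SparkContext; note that SparkEnv is a developer API, so its shape may vary across Spark versions.

```scala
import org.apache.spark.{SparkConf, SparkContext, SparkEnv}

// Every JVM in a Spark app (driver or executor) carries its own SparkEnv,
// and each SparkEnv exposes that JVM's BlockManager.
object SparkEnvBlockManagerDemo {
  def main(args: Array[String]): Unit = {
    val sc  = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("env-demo"))
    val env = SparkEnv.get                 // the driver-side SparkEnv
    val bm  = env.blockManager             // this JVM's BlockManager
    println(s"BlockManagerId: ${bm.blockManagerId}") // identifies host/port/executor
    sc.stop()
  }
}
```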
1.2 BlockInfoManager and BlockInfo: shared (read) and exclusive (write) lock control:
BlockInfo carries write-lock marker values; from these markers one can tell whether the block is currently under write control:
```scala
val NO_WRITER: Long = -1
val NON_TASK_WRITER: Long = -1024

/**
 * The task attempt id of the task which currently holds the write lock for this block, or
 * [[BlockInfo.NON_TASK_WRITER]] if the write lock is held by non-task code, or
 * [[BlockInfo.NO_WRITER]] if this block is not locked for writing.
 */
def writerTask: Long = _writerTask
def writerTask_=(t: Long): Unit = {
  _writerTask = t
  checkInvariants()
}
```
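For context on the `checkInvariants()` call in the setter above: the condition it guards, condensed here from the surrounding class (a paraphrase, not guaranteed verbatim; `_readerCount` is BlockInfo's companion reader-count field), is that a block may be read-locked or write-locked, but never both at once.

```scala
// Condensed sketch of the invariant checked after every writerTask update:
// the reader count can never go negative, and a block cannot be locked for
// reading and writing at the same time.
private def checkInvariants(): Unit = {
  assert(_readerCount >= 0)
  assert(_readerCount == 0 || _writerTask == BlockInfo.NO_WRITER)
}
```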
BlockInfoManager maintains the mapping from BlockId to BlockInfo, as well as per-task lock mappings from task attempt id to BlockId:
```scala
private[this] val infos = new mutable.HashMap[BlockId, BlockInfo]

/**
 * Tracks the set of blocks that each task has locked for writing.
 */
private[this] val writeLocksByTask =
  new mutable.HashMap[TaskAttemptId, mutable.Set[BlockId]]
    with mutable.MultiMap[TaskAttemptId, BlockId]

/**
 * Tracks the set of blocks that each task has locked for reading, along with the number of times
 * that a block has been locked (since our read locks are re-entrant).
 */
private[this] val readLocksByTask =
  new mutable.HashMap[TaskAttemptId, ConcurrentHashMultiset[BlockId]]
```
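Note why `readLocksByTask` uses Guava's ConcurrentHashMultiset rather than a plain set: read locks are re-entrant, so the same task may lock the same block several times, and each unlock must only decrement a count. A small standalone demo of that bookkeeping (toy data, not Spark code):

```scala
import com.google.common.collect.ConcurrentHashMultiset

object ReentrantReadLockDemo {
  def main(args: Array[String]): Unit = {
    val locksForTask42 = ConcurrentHashMultiset.create[String]()
    locksForTask42.add("rdd_0_0")            // task 42 locks the block for reading
    locksForTask42.add("rdd_0_0")            // ...and re-enters the same read lock
    println(locksForTask42.count("rdd_0_0")) // 2 outstanding holds
    locksForTask42.remove("rdd_0_0")         // one unlock releases exactly one hold
    println(locksForTask42.count("rdd_0_0")) // 1 still held
  }
}
```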
1.3 DiskBlockManager and DiskStore component relationship:
As shown below, DiskStore internally calls DiskBlockManager to determine where a Block is read from and written to:
- The following is DiskStore's abstract write operation; it takes a higher-order function of type FileOutputStream => Unit:
```scala
def put(blockId: BlockId)(writeFunc: FileOutputStream => Unit): Unit = {
  if (contains(blockId)) {
    throw new IllegalStateException(s"Block $blockId is already present in the disk store")
  }
  logDebug(s"Attempting to put block $blockId")
  val startTime = System.currentTimeMillis
  val file = diskManager.getFile(blockId)
  val fileOutputStream = new FileOutputStream(file)
  var threwException: Boolean = true
  try {
    writeFunc(fileOutputStream)
    threwException = false
  } finally {
    try {
      Closeables.close(fileOutputStream, threwException)
    } finally {
      if (threwException) {
        remove(blockId)
      }
    }
  }
  val finishTime = System.currentTimeMillis
  logDebug("Block %s stored as %s file on disk in %d ms".format(
    file.getName, Utils.bytesToString(file.length()), finishTime - startTime))
}
```
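A hedged usage sketch: since `put` hands the caller a raw FileOutputStream, serialization is entirely the caller's responsibility; `put` itself only opens and closes the stream and cleans up the file on failure. The `diskStore` and `blockId` values below are hypothetical, assumed to be in scope:

```scala
// Hypothetical caller: diskStore and blockId are assumed to already exist.
// What bytes land in the file is decided by the writeFunc closure we pass in.
val payload: Array[Byte] = "hello block".getBytes("UTF-8")
diskStore.put(blockId) { fileOutputStream =>
  fileOutputStream.write(payload)
}
```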
- The following is DiskStore's read operation, which calls DiskBlockManager to locate the data:
```scala
def getBytes(blockId: BlockId): ChunkedByteBuffer = {
  val file = diskManager.getFile(blockId.name)
  val channel = new RandomAccessFile(file, "r").getChannel
  Utils.tryWithSafeFinally {
    // For small files, directly read rather than memory map
    if (file.length < minMemoryMapBytes) {
      val buf = ByteBuffer.allocate(file.length.toInt)
      channel.position(0)
      while (buf.remaining() != 0) {
        if (channel.read(buf) == -1) {
          throw new IOException("Reached EOF before filling buffer\n" +
            s"offset=0\nfile=${file.getAbsolutePath}\nbuf.remaining=${buf.remaining}")
        }
      }
      buf.flip()
      new ChunkedByteBuffer(buf)
    } else {
      new ChunkedByteBuffer(channel.map(MapMode.READ_ONLY, 0, file.length))
    }
  } {
    channel.close()
  }
}
```
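The small-file branch above avoids memory mapping because mmap has a fixed setup cost that dominates for tiny files. To my understanding, the `minMemoryMapBytes` cutoff is driven by the Spark config `spark.storage.memoryMapThreshold` (default 2m), which can be tuned like so:

```scala
// Assumption: minMemoryMapBytes is read from spark.storage.memoryMapThreshold.
// Files below the threshold are read into a heap ByteBuffer; larger files are mmapped.
val conf = new org.apache.spark.SparkConf()
  .set("spark.storage.memoryMapThreshold", "2m") // default; raise it to mmap less often
```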
1.4 MemoryManager, MemoryStore, and MemoryPool component relationship:
One point worth emphasizing: Hadoop, the first-generation big data framework, treats memory only as a compute resource, whereas Spark not only uses memory for computation but also brings part of it into the storage subsystem:
- Memory pool model: logically split into on-heap and off-heap memory, and each of them is further divided into a StorageMemoryPool and an ExecutionMemoryPool (see the sketch after this list).
- MemoryManager is abstract and defines the interface contract for memory managers so that it can be extended later, e.g., the legacy StaticMemoryManager and the newer UnifiedMemoryManager.
- MemoryStore relies on UnifiedMemoryManager for memory acquisition, soft-boundary adjustment, and memory release.
- MemoryStore is also responsible for storing the actual objects; its internal member `entries` maintains the in-memory mapping from a BlockId to its MemoryEntry (the in-memory form of a Block).
- MemoryStore also performs "seat reservation" (unroll) behavior, e.g., via the internal variables offHeapUnrollMemoryMap and onHeapUnrollMemoryMap.
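To illustrate the soft boundary mentioned in the first bullet, here is a self-contained toy model. It is an assumption-heavy sketch of the UnifiedMemoryManager idea, not Spark source: storage and execution share one budget, and either pool can borrow the other's unused space instead of hitting a hard wall.

```scala
// Toy model (NOT Spark source) of the unified-memory soft boundary.
object SoftBoundarySketch {
  final class Pool(var size: Long) {
    var used: Long = 0
    def free: Long = size - used
  }

  final class UnifiedSketch(total: Long) {
    val storage   = new Pool(total / 2)
    val execution = new Pool(total / 2)

    /** Grow storage by shrinking execution's unused space: the "soft boundary" move. */
    def acquireStorage(bytes: Long): Boolean = {
      if (storage.free < bytes) {
        val borrow = math.min(bytes - storage.free, execution.free)
        execution.size -= borrow   // shift the boundary toward storage
        storage.size   += borrow
      }
      if (storage.free >= bytes) { storage.used += bytes; true } else false
    }
  }

  def main(args: Array[String]): Unit = {
    val m = new UnifiedSketch(total = 100)
    println(m.acquireStorage(70)) // true: storage borrowed 20 from execution
    println(m.storage.size)       // 70 after the boundary shifted
  }
}
```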
1.5 BlockManagerMaster and BlockManager component relationship:
- BlockManagerMaster uniformly manages the BlockManagers living on the Driver and the Executors. This is essentially proxy behavior: it holds a BlockManagerMasterEndpointRef and communicates with the BlockManagerMasterEndpoint through it.
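A condensed sketch of that proxy behavior: BlockManagerMaster holds no block state itself and forwards each query through the endpoint ref as a synchronous RPC ask. The shape below mirrors methods like `getLocations` (a paraphrase, not verbatim source; `GetLocations` is the message type sent to the driver-side endpoint):

```scala
// Paraphrased sketch: every BlockManagerMaster call is just a synchronous RPC
// to the BlockManagerMasterEndpoint living on the driver.
def getLocations(blockId: BlockId): Seq[BlockManagerId] = {
  driverEndpoint.askSync[Seq[BlockManagerId]](GetLocations(blockId))
}
```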
2. BlockTransferService, the Storage Subsystem's Transfer Component
To be continued.
3. Summary
The storage subsystem is Spark's bedrock. I will try to dissect every fine-grained detail, and unlike most blogs I will use the plainest language I can; after all, technology is just a sheet of window paper waiting to be poked through.
Qin Kaixin, in the early hours of 2018-10-31