- Kafka has quite a few classes involved in log operations, but the layering between them is clear: in practice each layer simply delegates its work to the layer below.
- Time to bring out this diagram:

(figure: Log層級(jí).png, the hierarchy of the log-related classes)

- The related classes have all been introduced in earlier chapters.
- That leaves only the `Log` class in the diagram uncovered, so this chapter is essentially a walkthrough of `Log`.
Log
- Source file: core/src/main/scala/kafka/log/Log.scala
- Purpose: Kafka persists its data in per-partition directories named `Topic-Partition`; the `Log` class encapsulates all operations on one such directory.
- Main members and methods:
- `private val segments: ConcurrentNavigableMap[java.lang.Long, LogSegment] = new ConcurrentSkipListMap[java.lang.Long, LogSegment]`: each directory holds multiple `LogSegment`s, and each segment consists of a log file and an index file, both named after the smallest offset they store. This map tracks all of the directory's `LogSegment`s, keyed by that base offset (see the lookup sketch just below).
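To see why a skip-list map keyed by base offset is a good fit, here is a minimal, self-contained sketch (toy file names, not Kafka code): `floorEntry` finds the segment whose range contains a given offset in O(log n).

```scala
import java.util.concurrent.ConcurrentSkipListMap

object SegmentLookupDemo extends App {
  // toy segments starting at base offsets 0, 100 and 250
  val segments = new ConcurrentSkipListMap[java.lang.Long, String]()
  segments.put(0L, "00000000000000000000.log")
  segments.put(100L, "00000000000000000100.log")
  segments.put(250L, "00000000000000000250.log")

  // the segment holding offset 180 is the one with the greatest
  // base offset <= 180, i.e. the segment starting at 100
  println(segments.floorEntry(180L).getValue)
  // prints: 00000000000000000100.log
}
```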
- `private def loadSegments()`: loads and initializes every `LogSegment` from the files on disk; it is invoked when a `Log` object is constructed. This function is important enough that annotations have been added inline in the code below:
```scala
dir.mkdirs()
var swapFiles = Set[File]()

// first do a pass through the files in the log directory and remove any temporary files
// and find any interrupted swap operations
for(file <- dir.listFiles if file.isFile) {
  if(!file.canRead)
    throw new IOException("Could not read file " + file)
  val filename = file.getName
  if(filename.endsWith(DeletedFileSuffix) || filename.endsWith(CleanedFileSuffix)) {
    // files ending in .deleted or .cleaned are leftovers: delete them outright
    file.delete()
  } else if(filename.endsWith(SwapFileSuffix)) {
    // we crashed in the middle of a swap operation, to recover:
    // if a log, delete the .index file, complete the swap operation later
    // if an index just delete it, it will be rebuilt
    // .swap files need real recovery; their index files are deleted and rebuilt
    val baseName = new File(CoreUtils.replaceSuffix(file.getPath, SwapFileSuffix, ""))
    if(baseName.getPath.endsWith(IndexFileSuffix)) {
      file.delete()
    } else if(baseName.getPath.endsWith(LogFileSuffix)) {
      // delete the index
      val index = new File(CoreUtils.replaceSuffix(baseName.getPath, LogFileSuffix, IndexFileSuffix))
      index.delete()
      swapFiles += file
    }
  }
}

// now do a second pass and load all the .log and .index files
for(file <- dir.listFiles if file.isFile) {
  val filename = file.getName
  if(filename.endsWith(IndexFileSuffix)) {
    // an index file without a corresponding .log file is orphaned: delete it
    val logFile = new File(file.getAbsolutePath.replace(IndexFileSuffix, LogFileSuffix))
    if(!logFile.exists) {
      warn("Found an orphaned index file, %s, with no corresponding log file.".format(file.getAbsolutePath))
      file.delete()
    }
  } else if(filename.endsWith(LogFileSuffix)) {
    // if it's a log file, load the corresponding log segment
    val start = filename.substring(0, filename.length - LogFileSuffix.length).toLong
    val indexFile = Log.indexFilename(dir, start)
    val segment = new LogSegment(dir = dir,
                                 startOffset = start,
                                 indexIntervalBytes = config.indexInterval,
                                 maxIndexSize = config.maxIndexSize,
                                 rollJitterMs = config.randomSegmentJitter,
                                 time = time,
                                 fileAlreadyExists = true)
    if(indexFile.exists()) {
      try {
        segment.index.sanityCheck()
      } catch {
        case e: java.lang.IllegalArgumentException =>
          warn("Found a corrupted index file, %s, deleting and rebuilding index...".format(indexFile.getAbsolutePath))
          indexFile.delete()
          segment.recover(config.maxMessageSize)
      }
    } else {
      error("Could not find index file corresponding to log file %s, rebuilding index...".format(segment.log.file.getAbsolutePath))
      segment.recover(config.maxMessageSize)
    }
    segments.put(start, segment)
  }
}

// recover the .swap files: in effect, the log segments whose offsets overlap
// with a swap segment get replaced by it.
// Finally, complete any interrupted swap operations. To be crash-safe,
// log files that are replaced by the swap segment should be renamed to .deleted
// before the swap file is restored as the new segment file.
for (swapFile <- swapFiles) {
  val logFile = new File(CoreUtils.replaceSuffix(swapFile.getPath, SwapFileSuffix, ""))
  val fileName = logFile.getName
  val startOffset = fileName.substring(0, fileName.length - LogFileSuffix.length).toLong
  val indexFile = new File(CoreUtils.replaceSuffix(logFile.getPath, LogFileSuffix, IndexFileSuffix) + SwapFileSuffix)
  val index = new OffsetIndex(file = indexFile, baseOffset = startOffset, maxIndexSize = config.maxIndexSize)
  val swapSegment = new LogSegment(new FileMessageSet(file = swapFile),
                                   index = index,
                                   baseOffset = startOffset,
                                   indexIntervalBytes = config.indexInterval,
                                   rollJitterMs = config.randomSegmentJitter,
                                   time = time)
  info("Found log file %s from interrupted swap operation, repairing.".format(swapFile.getPath))
  swapSegment.recover(config.maxMessageSize)
  val oldSegments = logSegments(swapSegment.baseOffset, swapSegment.nextOffset)
  replaceSegments(swapSegment, oldSegments.toSeq, isRecoveredSwapFile = true)
}

if(logSegments.size == 0) {
  // no existing segments, create a new mutable segment beginning at offset 0
  segments.put(0L, new LogSegment(dir = dir,
                                  startOffset = 0,
                                  indexIntervalBytes = config.indexInterval,
                                  maxIndexSize = config.maxIndexSize,
                                  rollJitterMs = config.randomSegmentJitter,
                                  time = time,
                                  fileAlreadyExists = false,
                                  initFileSize = this.initFileSize(),
                                  preallocate = config.preallocate))
} else {
  recoverLog()
  // reset the index size of the currently active log segment to allow more entries
  activeSegment.index.resize(config.maxIndexSize)
}
```
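A detail worth calling out: the base offset that `loadSegments` parses back out of each file name. Segment files are named after their base offset, zero-padded to 20 digits, plus the suffix. A minimal sketch of the scheme (the helper name below is ours, not Kafka's):

```scala
import java.io.File

// build the on-disk name for a segment file: the 20-digit zero-padded
// base offset plus a suffix such as ".log" or ".index" (helper name is ours)
def segmentFile(dir: File, baseOffset: Long, suffix: String): File =
  new File(dir, f"$baseOffset%020d$suffix")

// segmentFile(new File("/tmp/my-topic-0"), 100L, ".log")
//   -> /tmp/my-topic-0/00000000000000000100.log
```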
- `def append(messages: ByteBufferMessageSet, assignOffsets: Boolean = true)`: appends new messages to the log. Step by step:
3.1 Reassign an offset to every `Record` in `messages` (a tiny illustration of this counter pattern follows the snippet):

```scala
val offset = new AtomicLong(nextOffsetMetadata.messageOffset)
try {
  validMessages = validMessages.validateMessagesAndAssignOffsets(offset, appendInfo.sourceCodec, appendInfo.targetCodec, config.compact)
} catch {
  case e: IOException => throw new KafkaException("Error in validating messages while appending to log '%s'".format(name), e)
}
```
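This works because `nextOffsetMetadata.messageOffset` is the log end offset: seeding an `AtomicLong` with it and incrementing once per record yields consecutive, gap-free offsets. A toy illustration (demo values, not Kafka code):

```scala
import java.util.concurrent.atomic.AtomicLong

// seed the counter with the current log end offset (42 here for the demo)
val offset = new AtomicLong(42L)
val assigned = Seq("m1", "m2", "m3").map(m => (offset.getAndIncrement(), m))
// assigned == Seq((42,"m1"), (43,"m2"), (44,"m3")); offset.get == 45,
// which becomes the new log end offset after the append
```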
3.2 Verify that no `Record` in the set exceeds the maximum message size in the broker config (see the size note after the snippet):

```scala
for(messageAndOffset <- validMessages.shallowIterator) {
  if(MessageSet.entrySize(messageAndOffset.message) > config.maxMessageSize) {
    // we record the original message set size instead of the trimmed size
    // to be consistent with pre-compression bytesRejectedRate recording
    BrokerTopicStats.getBrokerTopicStats(topicAndPartition.topic).bytesRejectedRate.mark(messages.sizeInBytes)
    BrokerTopicStats.getBrokerAllTopicsStats.bytesRejectedRate.mark(messages.sizeInBytes)
    throw new MessageSizeTooLargeException("Message size is %d bytes which exceeds the maximum configured message size of %d."
      .format(MessageSet.entrySize(messageAndOffset.message), config.maxMessageSize))
  }
}
```
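Note that the limit is checked against `MessageSet.entrySize`, not the raw payload: in this (pre-0.11) on-disk format each entry carries an 8-byte offset and a 4-byte size field in front of the message bytes, so roughly:

```scala
// rough model of the per-entry size compared against max.message.size
// (8-byte offset + 4-byte length prefix + the message itself)
val LogOverhead = 8 + 4
def entrySize(messageBytes: Int): Int = LogOverhead + messageBytes
```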
3.3 Check that the record set as a whole fits within the configured size limit of a single LogSegment:

```scala
if(validMessages.sizeInBytes > config.segmentSize) {
  throw new MessageSetSizeTooLargeException("Message set size is %d bytes which exceeds the maximum configured segment size of %d."
    .format(validMessages.sizeInBytes, config.segmentSize))
}
```
3.4 If necessary, close the current LogSegment and roll a new one to take the incoming messages (a sketch of the roll decision follows this walkthrough):

```scala
val segment = maybeRoll(validMessages.sizeInBytes)
```

3.5 Append the new messages to the active LogSegment:

```scala
segment.append(appendInfo.firstOffset, validMessages)
```

3.6 Update the LogEndOffset:

```scala
updateLogEndOffset(appendInfo.lastOffset + 1)
```
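The `maybeRoll` in step 3.4 deserves a comment: a roll happens when the active segment cannot absorb the incoming batch, and also when the segment has outlived its time limit or its index is full. A condensed sketch of that decision, with parameter names of our own choosing:

```scala
// condensed sketch of the roll decision (names are ours): roll when the
// active segment cannot take the batch, is past its age limit, or has a
// full offset index
def shouldRoll(segmentBytes: Int, incomingBytes: Int, segmentSizeLimit: Int,
               segmentAgeMs: Long, segmentMsLimit: Long, indexFull: Boolean): Boolean =
  segmentBytes + incomingBytes > segmentSizeLimit ||
  segmentAgeMs > segmentMsLimit ||
  indexFull
```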
- `def read(startOffset: Long, maxLength: Int, maxOffset: Option[Long] = None)`: reads messages from the log:

```scala
// validate startOffset: reading exactly at the next offset yields an empty set
val currentNextOffsetMetadata = nextOffsetMetadata
val next = currentNextOffsetMetadata.messageOffset
if(startOffset == next)
  return FetchDataInfo(currentNextOffsetMetadata, MessageSet.Empty)

// locate the LogSegment to start reading from
var entry = segments.floorEntry(startOffset)

// attempting to read beyond the log end offset is an error
if(startOffset > next || entry == null)
  throw new OffsetOutOfRangeException("Request for offset %d but we only have log segments in the range %d to %d.".format(startOffset, segments.firstKey, next))

// determine maxPosition, the byte position readers are allowed to see
val maxPosition = {
  if (entry == segments.lastEntry) {
    val exposedPos = nextOffsetMetadata.relativePositionInSegment.toLong
    // Check the segment again in case a new segment has just rolled out.
    if (entry != segments.lastEntry)
      // New log segment has rolled out, we can read up to the file end.
      entry.getValue.size
    else
      exposedPos
  } else {
    entry.getValue.size
  }
}

// perform the actual read from the segment
val fetchInfo = entry.getValue.read(startOffset, maxOffset, maxLength, maxPosition)
```
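Putting the pieces together, the resolution logic boils down to: an empty result at exactly the log end offset, an out-of-range error past it or below the first segment, otherwise a `floorEntry` lookup. A toy model of just that part (types and names are ours, not Kafka's):

```scala
import java.util.concurrent.ConcurrentSkipListMap

final case class ToySegment(baseOffset: Long, sizeBytes: Long)

// toy version of read()'s segment resolution: None means "caller gets an
// empty message set", an exception stands in for OffsetOutOfRangeException
def resolve(segments: ConcurrentSkipListMap[java.lang.Long, ToySegment],
            startOffset: Long, logEndOffset: Long): Option[ToySegment] = {
  if (startOffset == logEndOffset) return None
  val entry = segments.floorEntry(startOffset)
  if (startOffset > logEndOffset || entry == null)
    throw new IllegalArgumentException(s"offset $startOffset out of range")
  Some(entry.getValue)
}
```

The double-check on `segments.lastEntry` in the real code guards against a concurrent roll: once a segment stops being the last one it is immutable, so its full file size becomes a safe upper bound; only the active segment must be capped at the exposed position.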