Hardware-Decoding MJPEG Video with VideoToolbox

Motion JPEG (MJPEG, Motion Joint Photographic Experts Group, FourCC: MJPG) is a video compression format in which every frame is encoded independently as a JPEG image. The VideoToolbox examples we usually see handle H.264-compressed video, so here I describe a way to use VideoToolbox to decode MJPEG-encoded video. As before, we use ffmpeg to read an MJPEG-encoded network video stream.
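
For context, the packets fed to VideoToolbox below come from an MJPEG stream opened with ffmpeg. A minimal sketch of that setup follows (the stream URL, the pFormatCtx/videoStreamIndex names, and the use of the older AVStream->codec field are assumptions chosen to match the pCodecCtx used below, not part of the original code):
<pre>
#include <libavformat/avformat.h>

// Sketch: open the MJPEG network stream and locate the video stream (URL is a placeholder)
AVFormatContext *pFormatCtx = NULL;
avformat_network_init();
if (avformat_open_input(&pFormatCtx, "http://example.com/stream.mjpg", NULL, NULL) < 0) {
    NSLog(@"Failed to open input stream");
}
avformat_find_stream_info(pFormatCtx, NULL);

// Find the first video stream; its codec context supplies the width/height used in step 1
int videoStreamIndex = -1;
for (int i = 0; i < (int)pFormatCtx->nb_streams; i++) {
    if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
        videoStreamIndex = i;
        break;
    }
}
AVCodecContext *pCodecCtx = pFormatCtx->streams[videoStreamIndex]->codec;
</pre>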

1. As before, we first need to create a VTDecompressionSessionRef
<pre>
// Create the decompression session from the video's parameters
if (videoFormatDescr == NULL){

    // Get the video dimensions from the ffmpeg codec context
    int videoWidth = pCodecCtx->width;
    int videoHeight = pCodecCtx->height;
    
    // Create the video format description from the width, height and codec type.
    // For MJPEG the matching codec type is kCMVideoCodecType_JPEG.
    CMVideoFormatDescriptionCreate(kCFAllocatorDefault, kCMVideoCodecType_JPEG, videoWidth, videoHeight, NULL, &videoFormatDescr);
    
    // Set up the decode output callback
    VTDecompressionOutputCallbackRecord callback;
    callback.decompressionOutputCallback = didDecompress;
    callback.decompressionOutputRefCon = (__bridge void *)self;
    
    // Pixel format for the decoded frames
    NSDictionary *destinationImageBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:NO], (id)kCVPixelBufferOpenGLESCompatibilityKey,
        [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];
    
    // Create the session
    status = VTDecompressionSessionCreate(kCFAllocatorDefault, videoFormatDescr, NULL, (__bridge CFDictionaryRef)destinationImageBufferAttributes, &callback, &session);
    if (status != noErr){
        NSLog(@"Init decoder session failed status= %d", (int)status);
    }
}

</pre>
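With the session created, the packets consumed by step 2 typically come from a read loop on the same ffmpeg context. A rough sketch, assuming the pFormatCtx/videoStreamIndex variables from the setup above and a hypothetical decodePacket: method that wraps the step 2 code:
<pre>
// Read-loop sketch: pull each MJPEG packet from the stream and hand it to the decode code in step 2
AVPacket packet;
while (av_read_frame(pFormatCtx, &packet) >= 0) {
    if (packet.stream_index == videoStreamIndex) {
        [self decodePacket:packet];   // hypothetical helper containing the step 2 code
    }
    av_packet_unref(&packet);
}
</pre>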

2. Start decoding

<pre>
// Create a CMBlockBuffer backed directly by the data of the ffmpeg AVPacket
CMBlockBufferRef videoBlock = NULL;
status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, packet.data, packet.size, kCFAllocatorNull, NULL, 0, packet.size, 0, &videoBlock);
if (status != noErr){
    NSLog(@"CMBlockBufferRef failed status=%d", (int)status);
}

// Wrap the block buffer in a CMSampleBuffer
CMSampleBufferRef sampleBuffer = NULL;

const size_t sampleSizeArray[] = {packet.size};
status = CMSampleBufferCreate(kCFAllocatorDefault, videoBlock, true, NULL, NULL, videoFormatDescr, 1, 0, NULL, 1, sampleSizeArray, &sampleBuffer);
if (status != noErr){
    NSLog(@"CMSampleBufferRef failed status=%d", (int)status);
}

// Decode the frame asynchronously; the result is delivered to didDecompress
VTDecodeFrameFlags flags = kVTDecodeFrame_EnableAsynchronousDecompression;
VTDecodeInfoFlags flagOut;
status = VTDecompressionSessionDecodeFrame(session, sampleBuffer, flags, NULL, &flagOut);

if (status != noErr) {
    NSLog(@"Decode failed status = %d", (int)status);
}

// Release the buffers
CFRelease(videoBlock);
CFRelease(sampleBuffer);

</pre>
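When decoding is finished (or the stream ends), the session should be flushed and released. A minimal tear-down sketch:
<pre>
// Wait for any in-flight asynchronous frames, then invalidate and release the session and format description
VTDecompressionSessionWaitForAsynchronousFrames(session);
VTDecompressionSessionInvalidate(session);
CFRelease(session);
session = NULL;

CFRelease(videoFormatDescr);
videoFormatDescr = NULL;
</pre>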

3. Receiving the decoded frames in the callback

<pre>
void didDecompress(void *decompressionOutputRefCon, void *sourceFrameRefCon, OSStatus status, VTDecodeInfoFlags infoFlags, CVImageBufferRef imageBuffer, CMTime presentationTimeStamp, CMTime presentationDuration)
{
    if (status != noErr || !imageBuffer) {
        NSLog(@"Error decompressing frame at time: %.3f error: %d infoFlags: %u", (float)presentationTimeStamp.value/presentationTimeStamp.timescale, (int)status, (unsigned int)infoFlags);
        return;
    }
    // imageBuffer holds the decoded frame as a CVPixelBuffer in the format requested in step 1 (32BGRA)
}
</pre>
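Because the destination attributes in step 1 request kCVPixelFormatType_32BGRA, the imageBuffer delivered here can be turned into a UIImage for display. A sketch of that conversion (the ViewController cast and the imageView property are assumptions; in real code the CIContext should be created once and reused rather than per frame):
<pre>
// Inside didDecompress, after the error check: convert the BGRA pixel buffer to a UIImage (sketch)
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:ciImage fromRect:ciImage.extent];
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);

// The refCon is the object set as decompressionOutputRefCon in step 1; hop to the main thread for UIKit
ViewController *vc = (__bridge ViewController *)decompressionOutputRefCon;
dispatch_async(dispatch_get_main_queue(), ^{
    vc.imageView.image = image;   // imageView is a hypothetical UIImageView property
});
</pre>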
