Exploring Image Decoding on iOS

Foreword

Our project wraps local image loading in a thin layer: essentially it initializes images with imageWithContentsOfFile, and every place that needs a local image goes through this wrapper. Since there is no decoding step in that path, this post first studies how third-party libraries handle decoding, to work out whether loading large numbers of local images this way, without decoding, is actually good or bad for performance...

UIImage provides

+ (nullable UIImage *)imageWithContentsOfFile:(NSString *)path;

which loads an image from a path; the returned image has not been decoded.

Why decode? Hmm? You're asking me why decoding is needed at all.

In fact, both JPEG and PNG are compressed bitmap formats. PNG is lossless compression and supports an alpha channel, while JPEG is lossy and lets you pick a compression ratio from 0-100%. Either way, before an image on disk can be rendered to the screen, its raw pixel data must be produced first; only then can the subsequent drawing take place. That is why images need to be decompressed. See 談談 iOS 中圖片的解壓縮 for details.

How AFNetworking Handles Images

AFNetworking handles both image downloading and caching. For the download part, simply import

#import "UIImageView+AFNetworking.h"

and you can download remote images with a one-liner, for example:

- (void)setImageWithURL:(NSURL *)url
       placeholderImage:(nullable UIImage *)placeholderImage;

downloads the remote image, with placeholderImage shown in the meantime. This is similar to SDWebImage; the difference is that AFNetworking caches images with NSCache.

As for caching: in our project, probably only user avatars are stable. Other images, such as the live-stream index thumbnails, change every minute. So do we really need to write them to disk? With SDWebImage, the home-page images saved during this app launch are almost all stale a few minutes later, yet they remain on disk until they are cleared manually or the cache exceeds its limit.

The author of AFNetworking put it this way:

NSURLCache reminds us how important it is to be familiar with the system we are building on. When developing for iOS or OS X, none of those systems matters more than the URL Loading System.
Countless developers try to roll their own crude, fragile network-caching layer, not realizing that NSURLCache does it in two lines of code and does it a hundred times better. Even more developers don't know the benefits of network caching at all and have never tried it, so their apps make countless unnecessary requests to the server.

http://nshipster.cn/nsurlcache/

We won't second-guess what the author meant, but for a live-streaming app, or for images that are only valid for a few minutes, is storing them on disk really necessary?

In this post we focus only on the decode logic AFNetworking applies after an image has been downloaded.

The download-and-decode logic lives in the AFImageResponseSerializer class.

AFImageResponseSerializer inherits from AFHTTPResponseSerializer; when the downloaded content is an image, the relevant AFImageResponseSerializer methods are called to decode it.

It adds two properties on top of AFHTTPResponseSerializer:

@property (nonatomic, assign) CGFloat imageScale;
@property (nonatomic, assign) BOOL automaticallyInflatesResponseImage;

One is the scale factor; the other controls whether the image is decoded right after download. According to the header comment, setting automaticallyInflatesResponseImage to YES can significantly improve drawing performance.

- (id)responseObjectForResponse:(NSURLResponse *)response
                           data:(NSData *)data
                          error:(NSError *__autoreleasing *)error
{
    if (![self validateResponse:(NSHTTPURLResponse *)response data:data error:error]) {
        if (!error || AFErrorOrUnderlyingErrorHasCodeInDomain(*error, NSURLErrorCannotDecodeContentData, AFURLResponseSerializationErrorDomain)) {
            return nil;
        }
    }

#if TARGET_OS_IOS || TARGET_OS_TV || TARGET_OS_WATCH
    if (self.automaticallyInflatesResponseImage) {
        return AFInflatedImageFromResponseWithDataAtScale((NSHTTPURLResponse *)response, data, self.imageScale);
    } else {
        return AFImageWithDataAtScale(data, self.imageScale);
    }
....

    return nil;
}

As you can see, this method is called when the HTTP response comes back with an image. If automatic decoding is enabled, AFInflatedImageFromResponseWithDataAtScale is called; otherwise AFImageWithDataAtScale returns the image as-is.

static UIImage * AFImageWithDataAtScale(NSData *data, CGFloat scale) {
    UIImage *image = [UIImage af_safeImageWithData:data];
    if (image.images) {
        return image;
    }
    
    return [[UIImage alloc] initWithCGImage:[image CGImage] scale:scale orientation:image.imageOrientation];
}

Without decoding, the image is created directly with the system API:

+ (UIImage *)af_safeImageWithData:(NSData *)data {
    UIImage* image = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        imageLock = [[NSLock alloc] init];
    });
    
    [imageLock lock];
    image = [UIImage imageWithData:data];
    [imageLock unlock];
    return image;
}

A lock is taken here because [UIImage imageWithData:] is not thread-safe (worth writing a test to verify some day!). As we know, imageWithData does not decode the image, so when the image is returned like this, decoding happens lazily on the main thread at display time, which is hard on performance.

Now let's look at the decoding path:

static UIImage * AFInflatedImageFromResponseWithDataAtScale(NSHTTPURLResponse *response, NSData *data, CGFloat scale) {
    if (!data || [data length] == 0) {
        return nil;
    }

    CGImageRef imageRef = NULL;
    CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    if ([response.MIMEType isEqualToString:@"image/png"]) {
        imageRef = CGImageCreateWithPNGDataProvider(dataProvider,  NULL, true, kCGRenderingIntentDefault);
    } else if ([response.MIMEType isEqualToString:@"image/jpeg"]) {
        imageRef = CGImageCreateWithJPEGDataProvider(dataProvider, NULL, true, kCGRenderingIntentDefault);

        if (imageRef) {
            CGColorSpaceRef imageColorSpace = CGImageGetColorSpace(imageRef);
            CGColorSpaceModel imageColorSpaceModel = CGColorSpaceGetModel(imageColorSpace);

            // CGImageCreateWithJPEGDataProvider does not properly handle CMKY, so fall back to AFImageWithDataAtScale
            if (imageColorSpaceModel == kCGColorSpaceModelCMYK) {
                CGImageRelease(imageRef);
                imageRef = NULL;
            }
        }
    }

    CGDataProviderRelease(dataProvider);

    UIImage *image = AFImageWithDataAtScale(data, scale);
    if (!imageRef) {
        if (image.images || !image) {
            return image;
        }

        imageRef = CGImageCreateCopy([image CGImage]);
        if (!imageRef) {
            return nil;
        }
    }

    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);

    if (width * height > 1024 * 1024 || bitsPerComponent > 8) {
        CGImageRelease(imageRef);

        return image;
    }

    // CGImageGetBytesPerRow() calculates incorrectly in iOS 5.0, so defer to CGBitmapContextCreate
    size_t bytesPerRow = 0;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGColorSpaceModel colorSpaceModel = CGColorSpaceGetModel(colorSpace);
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);

    if (colorSpaceModel == kCGColorSpaceModelRGB) {
        uint32_t alpha = (bitmapInfo & kCGBitmapAlphaInfoMask);
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wassign-enum"
        if (alpha == kCGImageAlphaNone) {
            bitmapInfo &= ~kCGBitmapAlphaInfoMask;
            bitmapInfo |= kCGImageAlphaNoneSkipFirst;
        } else if (!(alpha == kCGImageAlphaNoneSkipFirst || alpha == kCGImageAlphaNoneSkipLast)) {
            bitmapInfo &= ~kCGBitmapAlphaInfoMask;
            bitmapInfo |= kCGImageAlphaPremultipliedFirst;
        }
#pragma clang diagnostic pop
    }

    CGContextRef context = CGBitmapContextCreate(NULL, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);

    CGColorSpaceRelease(colorSpace);

    if (!context) {
        CGImageRelease(imageRef);

        return image;
    }

    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), imageRef);
    CGImageRef inflatedImageRef = CGBitmapContextCreateImage(context);

    CGContextRelease(context);

    UIImage *inflatedImage = [[UIImage alloc] initWithCGImage:inflatedImageRef scale:scale orientation:image.imageOrientation];

    CGImageRelease(inflatedImageRef);
    CGImageRelease(imageRef);

    return inflatedImage;
}

Because all of this decoding runs in the network-response callback, off the main thread, the image can be drawn directly when it is displayed, sparing the main thread the decompression cost.

SDWebImage's Decoding Process

SDWebImage's decoding lives in

#import "SDWebImageDecoder.h"

Let's look at the implementation:

@interface UIImage (ForceDecode)

+ (nullable UIImage *)decodedImageWithImage:(nullable UIImage *)image;

+ (nullable UIImage *)decodedAndScaledDownImageWithImage:(nullable UIImage *)image;

@end

It exposes two methods: one decodes an image; the other decodes large images, scaling them down in the process.

If SDWebImageDownloaderScaleDownLargeImages was set when the image was downloaded, decoding goes through decodedAndScaledDownImageWithImage. For example:

[self.imageView sd_setImageWithURL:[NSURL URLWithString:item.picture] placeholderImage:[UIImage imageNamed:@"PTV_Normal_Default_Icon"] options:SDWebImageScaleDownLargeImages | SDWebImageRetryFailed];

Here the download options are

SDWebImageScaleDownLargeImages | SDWebImageRetryFailed

With that option set, the large-image decode path is taken whenever the image qualifies as large. Concretely, inside

- (void)URLSession:(NSURLSession *)session task:(NSURLSessionTask *)task didCompleteWithError:(NSError *)error

the following runs:

if (self.shouldDecompressImages) {
    if (self.options & SDWebImageDownloaderScaleDownLargeImages) {
#if SD_UIKIT || SD_WATCH
        image = [UIImage decodedAndScaledDownImageWithImage:image];
        [self.imageData setData:UIImagePNGRepresentation(image)];
#endif
    } else {
        image = [UIImage decodedImageWithImage:image];
    }
}

Tip: the default download option is SDWebImageRetryFailed, which means a failed download will be retried next time. If the options do not include SDWebImageRetryFailed, the failed URL is added to a blacklist and will never be downloaded again!

Now let's look at SDWebImage's large-image handling (AFNetworking has no equivalent):


+ (nullable UIImage *)decodedAndScaledDownImageWithImage:(nullable UIImage *)image {
    if (![UIImage shouldDecodeImage:image]) {
        return image;
    }
    
    if (![UIImage shouldScaleDownImage:image]) {
        return [UIImage decodedImageWithImage:image];
    }
    ……

The method first checks whether the image can be decoded at all (shouldDecodeImage), then whether it needs scaling down (shouldScaleDownImage); if no scaling is needed, it falls through to the ordinary decode.

+ (BOOL)shouldScaleDownImage:(nonnull UIImage *)image {
    BOOL shouldScaleDown = YES;
        
    CGImageRef sourceImageRef = image.CGImage;
    CGSize sourceResolution = CGSizeZero;
    sourceResolution.width = CGImageGetWidth(sourceImageRef);
    sourceResolution.height = CGImageGetHeight(sourceImageRef);
    float sourceTotalPixels = sourceResolution.width * sourceResolution.height;
    float imageScale = kDestTotalPixels / sourceTotalPixels;
    if (imageScale < 1) {
        shouldScaleDown = YES;
    } else {
        shouldScaleDown = NO;
    }
    
    return shouldScaleDown;
}

shouldScaleDownImage does just two things: it reads the source image's width and height and computes the total pixel count. If that exceeds the maximum, the image needs the special path; otherwise it doesn't.

The maximum is defined as:

static const CGFloat kDestImageSizeMB = 60.0f;

static const CGFloat kDestTotalPixels = kDestImageSizeMB * kPixelsPerMB;

Large images are then handled as follows:

+ (nullable UIImage *)decodedAndScaledDownImageWithImage:(nullable UIImage *)image {
    if (![UIImage shouldDecodeImage:image]) {
        return image;
    }
    
    if (![UIImage shouldScaleDownImage:image]) {
        return [UIImage decodedImageWithImage:image];
    }
    
    ///######## large-image handling starts here
    
    CGContextRef destContext;
    
    // autorelease the bitmap context and all vars to help system to free memory when there are memory warning.
    // on iOS7, do not forget to call [[SDImageCache sharedImageCache] clearMemory];
    @autoreleasepool {
        CGImageRef sourceImageRef = image.CGImage;
        
        CGSize sourceResolution = CGSizeZero;
        sourceResolution.width = CGImageGetWidth(sourceImageRef);
        sourceResolution.height = CGImageGetHeight(sourceImageRef);
        float sourceTotalPixels = sourceResolution.width * sourceResolution.height;
        // Determine the scale ratio to apply to the input image
        // that results in an output image of the defined size.
        // see kDestImageSizeMB, and how it relates to destTotalPixels.
        float imageScale = kDestTotalPixels / sourceTotalPixels;
        CGSize destResolution = CGSizeZero;
        destResolution.width = (int)(sourceResolution.width*imageScale);
        destResolution.height = (int)(sourceResolution.height*imageScale);
        
        // current color space
        CGColorSpaceRef colorspaceRef = [UIImage colorSpaceForImageRef:sourceImageRef];
        
        size_t bytesPerRow = kBytesPerPixel * destResolution.width;
        
        // Allocate enough pixel data to hold the output image.
        void* destBitmapData = malloc( bytesPerRow * destResolution.height );
        if (destBitmapData == NULL) {
            return image;
        }
        
        // kCGImageAlphaNone is not supported in CGBitmapContextCreate.
        // Since the original image here has no alpha info, use kCGImageAlphaNoneSkipLast
        // to create bitmap graphics contexts without alpha info.
        destContext = CGBitmapContextCreate(destBitmapData,
                                            destResolution.width,
                                            destResolution.height,
                                            kBitsPerComponent,
                                            bytesPerRow,
                                            colorspaceRef,
                                            kCGBitmapByteOrderDefault|kCGImageAlphaNoneSkipLast);
        
        if (destContext == NULL) {
            free(destBitmapData);
            return image;
        }
        CGContextSetInterpolationQuality(destContext, kCGInterpolationHigh);
        
        // Now define the size of the rectangle to be used for the
        // incremental blits from the input image to the output image.
        // we use a source tile width equal to the width of the source
        // image due to the way that iOS retrieves image data from disk.
        // iOS must decode an image from disk in full width 'bands', even
        // if current graphics context is clipped to a subrect within that
        // band. Therefore we fully utilize all of the pixel data that results
        // from a decoding opertion by achnoring our tile size to the full
        // width of the input image.
        CGRect sourceTile = CGRectZero;
        sourceTile.size.width = sourceResolution.width;
        // The source tile height is dynamic. Since we specified the size
        // of the source tile in MB, see how many rows of pixels high it
        // can be given the input image width.
        sourceTile.size.height = (int)(kTileTotalPixels / sourceTile.size.width );
        sourceTile.origin.x = 0.0f;
        // The output tile is the same proportions as the input tile, but
        // scaled to image scale.
        CGRect destTile;
        destTile.size.width = destResolution.width;
        destTile.size.height = sourceTile.size.height * imageScale;
        destTile.origin.x = 0.0f;
        // The source seem overlap is proportionate to the destination seem overlap.
        // this is the amount of pixels to overlap each tile as we assemble the ouput image.
        float sourceSeemOverlap = (int)((kDestSeemOverlap/destResolution.height)*sourceResolution.height);
        CGImageRef sourceTileImageRef;
        // calculate the number of read/write operations required to assemble the
        // output image.
        int iterations = (int)( sourceResolution.height / sourceTile.size.height );
        // If tile height doesn't divide the image height evenly, add another iteration
        // to account for the remaining pixels.
        int remainder = (int)sourceResolution.height % (int)sourceTile.size.height;
        if(remainder) {
            iterations++;
        }
        // Add seem overlaps to the tiles, but save the original tile height for y coordinate calculations.
        float sourceTileHeightMinusOverlap = sourceTile.size.height;
        sourceTile.size.height += sourceSeemOverlap;
        destTile.size.height += kDestSeemOverlap;
        for( int y = 0; y < iterations; ++y ) {
            @autoreleasepool {
                sourceTile.origin.y = y * sourceTileHeightMinusOverlap + sourceSeemOverlap;
                destTile.origin.y = destResolution.height - (( y + 1 ) * sourceTileHeightMinusOverlap * imageScale + kDestSeemOverlap);
                sourceTileImageRef = CGImageCreateWithImageInRect( sourceImageRef, sourceTile );
                if( y == iterations - 1 && remainder ) {
                    float dify = destTile.size.height;
                    destTile.size.height = CGImageGetHeight( sourceTileImageRef ) * imageScale;
                    dify -= destTile.size.height;
                    destTile.origin.y += dify;
                }
                CGContextDrawImage( destContext, destTile, sourceTileImageRef );
                CGImageRelease( sourceTileImageRef );
            }
        }
        
        CGImageRef destImageRef = CGBitmapContextCreateImage(destContext);
        CGContextRelease(destContext);
        if (destImageRef == NULL) {
            return image;
        }
        UIImage *destImage = [UIImage imageWithCGImage:destImageRef scale:image.scale orientation:image.imageOrientation];
        CGImageRelease(destImageRef);
        if (destImage == nil) {
            return image;
        }
        return destImage;
    }
}

The decodedAndScaledDownImageWithImage large-image pipeline:

It uses @autoreleasepool to release memory promptly, for example:

for (int y = 0; y < iterations; ++y) {
    @autoreleasepool {
        // ... draw one tile of the source image per pass ...
    }
}

1: First compute the final image size. The output is capped by the maximum pixel budget, so the destination resolution is derived from that budget:

CGImageRef sourceImageRef = image.CGImage;

CGSize sourceResolution = CGSizeZero;
sourceResolution.width = CGImageGetWidth(sourceImageRef);
sourceResolution.height = CGImageGetHeight(sourceImageRef);
float sourceTotalPixels = sourceResolution.width * sourceResolution.height;

float imageScale = kDestTotalPixels / sourceTotalPixels;
CGSize destResolution = CGSizeZero;
destResolution.width = (int)(sourceResolution.width*imageScale);
destResolution.height = (int)(sourceResolution.height*imageScale);
       

It reads the source image's size and then derives the scale factor:

float imageScale = kDestTotalPixels / sourceTotalPixels;
        

By the time execution reaches this point, the scale factor is guaranteed to be less than 1.

2: Create an appropriately sized CGContextRef with CGBitmapContextCreate; the tiled drawing below renders into it.
3: Compute the iteration count (iterations), i.e. the number of loop passes.
4: Draw in tiles: grab one band of the source with CGImageCreateWithImageInRect and draw it onto the canvas from step 2 with CGContextDrawImage.
5: Obtain the decoded image with CGBitmapContextCreateImage.
6: Release the related resources.

To be continued...

External links on image decoding

如何避免圖片解壓縮開銷

圖片處理的tips

談談iOS圖片的解壓縮

改變圖片尺寸的方法和性能對比

最后編輯于
?著作權歸作者所有,轉載或內容合作請聯系作者
平臺聲明:文章內容(如有圖片或視頻亦包括在內)由作者上傳并發布,文章內容僅代表作者本人觀點,簡書系信息發布平臺,僅提供信息存儲服務。

推薦閱讀更多精彩內容