Overview
GPUImage is a well-known open-source image-processing library that lets you apply GPU-accelerated filters and other effects to images, video, and live camera input. Unlike the Core Image framework, GPUImage exposes interfaces that let you plug in fully custom filters. Project page: https://github.com/BradLarson/GPUImage
This article reads through the source of four GPUImage classes: GPUImageRawDataInput, GPUImageRawDataOutput, GPUImageTextureInput, and GPUImageTextureOutput. Earlier posts in this series covered input from the camera, still images, audio/video files, and UI rendering; this one adds two more kinds of input and output: raw pixel data (RGB, RGBA, BGRA, or LUMINANCE) and texture objects. The contents:
GPUImageRawDataInput
GPUImageRawDataOutput
GPUImageTextureInput
GPUImageTextureOutput
Usage examples
GPUImageRawDataInput
GPUImageRawDataInput inherits from GPUImageOutput. It accepts raw pixel data (RGB, RGBA, BGRA, or LUMINANCE) and produces a framebuffer object from it, so it can act as the source of a filter chain.
- Initializers. An instance is built from a pointer to the raw data, the image size, and optionally the pixel format and pixel type.
- (id)initWithBytes:(GLubyte *)bytesToUpload size:(CGSize)imageSize;
- (id)initWithBytes:(GLubyte *)bytesToUpload size:(CGSize)imageSize pixelFormat:(GPUPixelFormat)pixelFormat;
- (id)initWithBytes:(GLubyte *)bytesToUpload size:(CGSize)imageSize pixelFormat:(GPUPixelFormat)pixelFormat type:(GPUPixelType)pixelType;
If you do not specify a pixel format and type, GPUPixelFormatBGRA and GPUPixelTypeUByte are used by default. Construction happens in three steps: 1) initialize the instance variables; 2) fetch a texture-only GPUImageFramebuffer object; 3) upload the raw bytes into that texture.
- (id)initWithBytes:(GLubyte *)bytesToUpload size:(CGSize)imageSize;
{
    if (!(self = [self initWithBytes:bytesToUpload size:imageSize pixelFormat:GPUPixelFormatBGRA type:GPUPixelTypeUByte]))
    {
        return nil;
    }
    return self;
}
- (id)initWithBytes:(GLubyte *)bytesToUpload size:(CGSize)imageSize pixelFormat:(GPUPixelFormat)pixelFormat;
{
    if (!(self = [self initWithBytes:bytesToUpload size:imageSize pixelFormat:pixelFormat type:GPUPixelTypeUByte]))
    {
        return nil;
    }
    return self;
}
- (id)initWithBytes:(GLubyte *)bytesToUpload size:(CGSize)imageSize pixelFormat:(GPUPixelFormat)pixelFormat type:(GPUPixelType)pixelType;
{
    if (!(self = [super init]))
    {
        return nil;
    }
    dataUpdateSemaphore = dispatch_semaphore_create(1);
    uploadedImageSize = imageSize;
    self.pixelFormat = pixelFormat;
    self.pixelType = pixelType;
    [self uploadBytes:bytesToUpload];
    return self;
}
- (void)uploadBytes:(GLubyte *)bytesToUpload;
{
    // Make GPUImage's processing context current
    [GPUImageContext useImageProcessingContext];
    // Fetch a texture-only GPUImageFramebuffer from the shared cache
    outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:uploadedImageSize textureOptions:self.outputTextureOptions onlyTexture:YES];
    // Upload the raw bytes into the framebuffer's texture
    glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
    glTexImage2D(GL_TEXTURE_2D, 0, _pixelFormat, (int)uploadedImageSize.width, (int)uploadedImageSize.height, 0, (GLint)_pixelFormat, (GLenum)_pixelType, bytesToUpload);
}
- Other methods:
// Replace the uploaded raw data
- (void)updateDataFromBytes:(GLubyte *)bytesToUpload size:(CGSize)imageSize;
// Drive processing of the data
- (void)processData;
- (void)processDataForTimestamp:(CMTime)frameTime;
// Size of the output texture
- (CGSize)outputImageSize;
The important ones here are the processing methods. They drive the filter chain: the uploaded raw data, wrapped in a framebuffer, is handed to each downstream target for processing.
// Replace the uploaded raw data
- (void)updateDataFromBytes:(GLubyte *)bytesToUpload size:(CGSize)imageSize;
{
    uploadedImageSize = imageSize;
    // Re-run the upload with the new bytes
    [self uploadBytes:bytesToUpload];
}
// Drive processing
- (void)processData;
{
    // Skip this frame entirely if the previous one is still being processed
    if (dispatch_semaphore_wait(dataUpdateSemaphore, DISPATCH_TIME_NOW) != 0)
    {
        return;
    }
    runAsynchronouslyOnVideoProcessingQueue(^{
        CGSize pixelSizeOfImage = [self outputImageSize];
        // Hand outputFramebuffer to every target for processing
        for (id<GPUImageInput> currentTarget in targets)
        {
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger textureIndexOfTarget = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
            [currentTarget setInputSize:pixelSizeOfImage atIndex:textureIndexOfTarget];
            [currentTarget setInputFramebuffer:outputFramebuffer atIndex:textureIndexOfTarget];
            [currentTarget newFrameReadyAtTime:kCMTimeInvalid atIndex:textureIndexOfTarget];
        }
        dispatch_semaphore_signal(dataUpdateSemaphore);
    });
}
// Drive processing, tagging the frame with a timestamp
- (void)processDataForTimestamp:(CMTime)frameTime;
{
    if (dispatch_semaphore_wait(dataUpdateSemaphore, DISPATCH_TIME_NOW) != 0)
    {
        return;
    }
    runAsynchronouslyOnVideoProcessingQueue(^{
        CGSize pixelSizeOfImage = [self outputImageSize];
        // Hand the frame to every target; note that, unlike -processData,
        // this method does not call -setInputFramebuffer:atIndex: on its targets
        for (id<GPUImageInput> currentTarget in targets)
        {
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger textureIndexOfTarget = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
            [currentTarget setInputSize:pixelSizeOfImage atIndex:textureIndexOfTarget];
            [currentTarget newFrameReadyAtTime:frameTime atIndex:textureIndexOfTarget];
        }
        dispatch_semaphore_signal(dataUpdateSemaphore);
    });
}
// Size of the output texture
- (CGSize)outputImageSize;
{
    return uploadedImageSize;
}
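To see how these pieces fit together for streaming data, here is a minimal, hypothetical sketch (not from the sample project) that pushes successive raw frames through an existing GPUImageRawDataInput; rawDataInput, each frame's byte buffer, and the downstream filter chain are assumed to be set up elsewhere:
- (void)pushFrameBytes:(GLubyte *)frameBytes size:(CGSize)frameSize atTime:(CMTime)frameTime
{
    // Re-upload the new bytes into the input's texture...
    [rawDataInput updateDataFromBytes:frameBytes size:frameSize];
    // ...then drive the chain, timestamping the frame for any downstream movie writer
    [rawDataInput processDataForTimestamp:frameTime];
}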
GPUImageRawDataOutput
GPUImageRawDataOutput implements the GPUImageInput protocol. It converts an incoming framebuffer into raw pixel data.
- Initializer. Its main task is to build the GL program.
- (id)initWithImageSize:(CGSize)newImageSize resultsInBGRAFormat:(BOOL)resultsInBGRAFormat;
You pass the image size and whether the output bytes should be in BGRA order. When the requested byte order differs from what the read-back path naturally delivers, the kGPUImageColorSwizzlingFragmentShaderString shader is selected to swap the red and blue channels during rendering.
- (id)initWithImageSize:(CGSize)newImageSize resultsInBGRAFormat:(BOOL)resultsInBGRAFormat;
{
    if (!(self = [super init]))
    {
        return nil;
    }
    self.enabled = YES;
    lockNextFramebuffer = NO;
    outputBGRA = resultsInBGRAFormat;
    imageSize = newImageSize;
    hasReadFromTheCurrentFrame = NO;
    _rawBytesForImage = NULL;
    inputRotation = kGPUImageNoRotation;
    [GPUImageContext useImageProcessingContext];
    // Use the swizzling shader when the requested order differs from what the
    // pipeline delivers: BGRA requested but glReadPixels returns RGBA (no fast
    // texture upload), or RGBA requested but the texture cache returns BGRA
    if ( (outputBGRA && ![GPUImageContext supportsFastTextureUpload]) || (!outputBGRA && [GPUImageContext supportsFastTextureUpload]) )
    {
        dataProgram = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:kGPUImageVertexShaderString fragmentShaderString:kGPUImageColorSwizzlingFragmentShaderString];
    }
    // Otherwise a straight passthrough shader is enough
    else
    {
        dataProgram = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:kGPUImageVertexShaderString fragmentShaderString:kGPUImagePassthroughFragmentShaderString];
    }
    if (!dataProgram.initialized)
    {
        [dataProgram addAttribute:@"position"];
        [dataProgram addAttribute:@"inputTextureCoordinate"];
        if (![dataProgram link])
        {
            NSString *progLog = [dataProgram programLog];
            NSLog(@"Program link log: %@", progLog);
            NSString *fragLog = [dataProgram fragmentShaderLog];
            NSLog(@"Fragment shader compile log: %@", fragLog);
            NSString *vertLog = [dataProgram vertexShaderLog];
            NSLog(@"Vertex shader compile log: %@", vertLog);
            dataProgram = nil;
            NSAssert(NO, @"Filter shader link failed");
        }
    }
    // Look up the attribute and uniform locations
    dataPositionAttribute = [dataProgram attributeIndex:@"position"];
    dataTextureCoordinateAttribute = [dataProgram attributeIndex:@"inputTextureCoordinate"];
    dataInputTextureUniform = [dataProgram uniformIndex:@"inputImageTexture"];
    return self;
}
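For reference, the swizzling shader is trivial: it samples the input texture and returns the components reordered with a .bgra swizzle. As I recall it from the upstream source (lightly reformatted here):
NSString *const kGPUImageColorSwizzlingFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;

 void main()
 {
     gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra;
 }
);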
- Other methods:
// Read the color of the pixel at a given location
- (GPUByteColorVector)colorAtLocation:(CGPoint)locationInImage;
// Number of bytes per row of output
- (NSUInteger)bytesPerRowInOutput;
// Change the output image size
- (void)setImageSize:(CGSize)newImageSize;
// Lock / unlock the framebuffer for reading
- (void)lockFramebufferForReading;
- (void)unlockFramebufferAfterReading;
The implementations:
// Read the color of the pixel at a given location
- (GPUByteColorVector)colorAtLocation:(CGPoint)locationInImage;
{
    // Reinterpret the raw bytes as an array of GPUByteColorVector
    GPUByteColorVector *imageColorBytes = (GPUByteColorVector *)self.rawBytesForImage;
// NSLog(@"Row start");
// for (unsigned int currentXPosition = 0; currentXPosition < (imageSize.width * 2.0); currentXPosition++)
// {
// GPUByteColorVector byteAtPosition = imageColorBytes[currentXPosition];
// NSLog(@"%d - %d, %d, %d", currentXPosition, byteAtPosition.red, byteAtPosition.green, byteAtPosition.blue);
// }
// NSLog(@"Row end");
// GPUByteColorVector byteAtOne = imageColorBytes[1];
// GPUByteColorVector byteAtWidth = imageColorBytes[(int)imageSize.width - 3];
// GPUByteColorVector byteAtHeight = imageColorBytes[(int)(imageSize.height - 1) * (int)imageSize.width];
// NSLog(@"Byte 1: %d, %d, %d, byte 2: %d, %d, %d, byte 3: %d, %d, %d", byteAtOne.red, byteAtOne.green, byteAtOne.blue, byteAtWidth.red, byteAtWidth.green, byteAtWidth.blue, byteAtHeight.red, byteAtHeight.green, byteAtHeight.blue);
    // Clamp to the valid range (0 ≤ x ≤ width-1, 0 ≤ y ≤ height-1); the y
    // coordinate is also flipped, since GL's origin is the bottom-left corner
    CGPoint locationToPickFrom = CGPointZero;
    locationToPickFrom.x = MIN(MAX(locationInImage.x, 0.0), (imageSize.width - 1.0));
    locationToPickFrom.y = MIN(MAX((imageSize.height - locationInImage.y), 0.0), (imageSize.height - 1.0));
    // If BGRA output was requested, swap the red and blue channels before returning
    if (outputBGRA)
    {
        GPUByteColorVector flippedColor = imageColorBytes[(int)(round((locationToPickFrom.y * imageSize.width) + locationToPickFrom.x))];
        GLubyte temporaryRed = flippedColor.red;
        flippedColor.red = flippedColor.blue;
        flippedColor.blue = temporaryRed;
        return flippedColor;
    }
    else
    {
        // Return the pixel's color vector as-is
        return imageColorBytes[(int)(round((locationToPickFrom.y * imageSize.width) + locationToPickFrom.x))];
    }
}
// Number of bytes per row of output
- (NSUInteger)bytesPerRowInOutput;
{
    return [retainedFramebuffer bytesPerRow];
}
// Change the output image size
- (void)setImageSize:(CGSize)newImageSize {
    imageSize = newImageSize;
    // Throw away any CPU-side buffer sized for the old dimensions
    if (_rawBytesForImage != NULL && (![GPUImageContext supportsFastTextureUpload]))
    {
        free(_rawBytesForImage);
        _rawBytesForImage = NULL;
    }
}
// Ask for the next rendered framebuffer to be locked for reading
- (void)lockFramebufferForReading;
{
    lockNextFramebuffer = YES;
}
// Unlock the framebuffer once reading is finished
- (void)unlockFramebufferAfterReading;
{
    [retainedFramebuffer unlockAfterReading];
    [retainedFramebuffer unlock];
    retainedFramebuffer = nil;
}
// Fetch the RGBA bytes for the current frame
- (GLubyte *)rawBytesForImage;
{
    if ( (_rawBytesForImage == NULL) && (![GPUImageContext supportsFastTextureUpload]) )
    {
        // Allocate a CPU-side buffer to hold the read-back data
        _rawBytesForImage = (GLubyte *) calloc(imageSize.width * imageSize.height * 4, sizeof(GLubyte));
        hasReadFromTheCurrentFrame = NO;
    }
    if (hasReadFromTheCurrentFrame)
    {
        return _rawBytesForImage;
    }
    else
    {
        runSynchronouslyOnVideoProcessingQueue(^{
            // Note: the fast texture caches speed up 640x480 frame reads from 9.6 ms to 3.1 ms on iPhone 4S
            // Make the GL context current
            [GPUImageContext useImageProcessingContext];
            // Render into the framebuffer
            [self renderAtInternalSize];
            if ([GPUImageContext supportsFastTextureUpload])
            {
                // Wait for rendering to finish, then read straight from the texture cache
                glFinish();
                _rawBytesForImage = [outputFramebuffer byteBuffer];
            }
            else
            {
                // Read the pixels back as RGBA
                glReadPixels(0, 0, imageSize.width, imageSize.height, GL_RGBA, GL_UNSIGNED_BYTE, _rawBytesForImage);
                // GL_EXT_read_format_bgra
                // glReadPixels(0, 0, imageSize.width, imageSize.height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, _rawBytesForImage);
            }
            hasReadFromTheCurrentFrame = YES;
        });
        return _rawBytesForImage;
    }
}
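As a quick illustration of the lock/read/unlock pattern (a hypothetical snippet, assuming rawDataOutput is already attached as a target), sampling a single pixel looks like this:
// Retain the next rendered framebuffer so its bytes stay valid while we read
[rawDataOutput lockFramebufferForReading];
// ... drive the chain so a frame reaches rawDataOutput ...
GPUByteColorVector color = [rawDataOutput colorAtLocation:CGPointMake(10, 20)];
NSLog(@"RGBA at (10, 20): %d %d %d %d", color.red, color.green, color.blue, color.alpha);
[rawDataOutput unlockFramebufferAfterReading];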
GPUImageTextureInput
GPUImageTextureInput inherits from GPUImageOutput. It wraps an existing texture in a framebuffer object, so it can act as the source of a filter chain.
- Initializer. There is only one, taking a texture object and its size.
- (id)initWithTexture:(GLuint)newInputTexture size:(CGSize)newTextureSize;
Construction mainly consists of building a framebuffer object around the supplied texture.
- (id)initWithTexture:(GLuint)newInputTexture size:(CGSize)newTextureSize;
{
    if (!(self = [super init]))
    {
        return nil;
    }
    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];
    });
    textureSize = newTextureSize;
    runSynchronouslyOnVideoProcessingQueue(^{
        // Build a framebuffer object around the supplied texture
        outputFramebuffer = [[GPUImageFramebuffer alloc] initWithSize:newTextureSize overriddenTexture:newInputTexture];
    });
    return self;
}
- Other methods. Apart from what it inherits from its superclass, it adds only one processing method:
- (void)processTextureWithFrameTime:(CMTime)frameTime;
Processing simply hands the framebuffer object to every target, driving each of them in turn.
- (void)processTextureWithFrameTime:(CMTime)frameTime;
{
    runAsynchronouslyOnVideoProcessingQueue(^{
        // Hand the framebuffer to every target and drive it to process the frame
        for (id<GPUImageInput> currentTarget in targets)
        {
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger targetTextureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
            [currentTarget setInputSize:textureSize atIndex:targetTextureIndex];
            [currentTarget setInputFramebuffer:outputFramebuffer atIndex:targetTextureIndex];
            [currentTarget newFrameReadyAtTime:frameTime atIndex:targetTextureIndex];
        }
    });
}
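One caveat the class itself does not spell out: the texture you hand in must belong to GPUImage's processing context, or to an EAGLContext in the same share group, or downstream filters will not be able to sample it. A minimal, hypothetical way to guarantee that (width, height, and pixels are assumed to exist) is to create the texture on the processing queue with GPUImage's own context current:
__block GLuint texture = 0;
runSynchronouslyOnVideoProcessingQueue(^{
    // Create the texture with GPUImage's processing context current,
    // so a GPUImageTextureInput built from it can be sampled later
    [GPUImageContext useImageProcessingContext];
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    // Clamp + linear filtering so non-power-of-two sizes work on ES 2.0
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glBindTexture(GL_TEXTURE_2D, 0);
});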
GPUImageTextureOutput
GPUImageTextureOutput inherits from NSObject and implements the GPUImageInput protocol. It exposes the texture object backing the input framebuffer.
- Properties. The important one is texture, which you can grab and use directly.
// Delegate notified when a new frame's texture is ready
@property(readwrite, unsafe_unretained, nonatomic) id<GPUImageTextureOutputDelegate> delegate;
// The texture object
@property(readonly) GLuint texture;
// Whether this output is enabled
@property(nonatomic) BOOL enabled;
- Initializer. It takes no parameters.
- (id)init;
The implementation is trivial: it just sets enabled to YES.
- (id)init;
{
    if (!(self = [super init]))
    {
        return nil;
    }
    self.enabled = YES;
    return self;
}
- Other methods. There is only one:
// Unlock the texture's framebuffer once you are done with it
- (void)doneWithTexture;
It is simple; call it when you have finished using the texture (see the delegate sketch after the code below):
// Unlock the texture's framebuffer once you are done with it
- (void)doneWithTexture;
{
    [firstInputFramebuffer unlock];
}
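In practice you usually consume the texture from the GPUImageTextureOutputDelegate callback. A minimal, hypothetical conforming class might look like this (the NSLog stands in for your own GL drawing code):
@interface MyTextureConsumer : NSObject <GPUImageTextureOutputDelegate>
@end

@implementation MyTextureConsumer
- (void)newFrameReadyFromTextureOutput:(GPUImageTextureOutput *)callbackTextureOutput
{
    // The texture is valid at this point; hand it to your own GL code here
    NSLog(@"New texture ready: %d", callbackTextureOutput.texture);
    // Release the framebuffer backing the texture once finished with it
    [callbackTextureOutput doneWithTexture];
}
@end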
Usage examples
- Loading an image through GPUImageRawDataInput. We convert the image to RGBA bytes, then load and display it via GPUImageRawDataInput (a sketch of the RGBA conversion helper follows the code).
- (void)showPicture
{
    // Load the image and convert it to raw RGBA bytes
    UIImage *image = [UIImage imageNamed:@"1.jpg"];
    size_t width = CGImageGetWidth(image.CGImage);
    size_t height = CGImageGetHeight(image.CGImage);
    unsigned char *imageData = [QMImageHelper convertUIImageToBitmapRGBA8:image];
    // Initialize the GPUImageRawDataInput
    _rawDataInput = [[GPUImageRawDataInput alloc] initWithBytes:imageData size:CGSizeMake(width, height) pixelFormat:GPUPixelFormatRGBA];
    // Build the filter chain
    GPUImageSolarizeFilter *filter = [[GPUImageSolarizeFilter alloc] init];
    [_rawDataInput addTarget:filter];
    [filter addTarget:_imageView];
    // Drive processing
    [_rawDataInput processData];
    // Clean up
    if (imageData) {
        free(imageData);
        image = nil;
    }
}
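convertUIImageToBitmapRGBA8: belongs to QMImageHelper in the sample project, not to GPUImage. A typical implementation, sketched here as an assumption about what that helper does, renders the CGImage into an RGBA bitmap context (the caller is responsible for freeing the returned buffer):
+ (unsigned char *)convertUIImageToBitmapRGBA8:(UIImage *)image
{
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    // 4 bytes per pixel: RGBA, 8 bits per channel
    unsigned char *buffer = (unsigned char *)calloc(width * height * 4, sizeof(unsigned char));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(buffer, width, height, 8, width * 4,
                                                 colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    if (!context) {
        free(buffer);
        return NULL;
    }
    // Draw the image into the buffer
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);
    return buffer;
}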
- Writing out RGBA data through GPUImageRawDataOutput. We have GPUImageRawDataOutput produce the raw RGBA bytes, then encode them to a PNG with libpng (write_png_file and pic_data are libpng-based helpers from the sample project).
- (void)writeRGBADataToFile
{
    // Load the image and convert it to raw RGBA bytes
    UIImage *image = [UIImage imageNamed:@"2.jpg"];
    size_t width = CGImageGetWidth(image.CGImage);
    size_t height = CGImageGetHeight(image.CGImage);
    unsigned char *imageData = [QMImageHelper convertUIImageToBitmapRGBA8:image];
    // Initialize the GPUImageRawDataInput
    _rawDataInput = [[GPUImageRawDataInput alloc] initWithBytes:imageData size:CGSizeMake(width, height) pixelFormat:GPUPixelFormatRGBA];
    // Build the filter
    GPUImageSaturationFilter *filter = [[GPUImageSaturationFilter alloc] init];
    filter.saturation = 0.3;
    // Set up the GPUImageRawDataOutput and ask it to retain the next rendered framebuffer
    GPUImageRawDataOutput *rawDataOutput = [[GPUImageRawDataOutput alloc] initWithImageSize:CGSizeMake(width, height) resultsInBGRAFormat:NO];
    [rawDataOutput lockFramebufferForReading];
    [_rawDataInput addTarget:filter];
    [filter addTarget:_imageView];
    [filter addTarget:rawDataOutput];
    // Drive processing
    [_rawDataInput processData];
    // Encode the raw bytes as a PNG
    unsigned char *rawBytes = [rawDataOutput rawBytesForImage];
    pic_data pngData = {(int)width, (int)height, 8, PNG_HAVE_ALPHA, rawBytes};
    write_png_file([DOCUMENT(@"raw_data_output.png") UTF8String], &pngData);
    // Clean up
    [rawDataOutput unlockFramebufferAfterReading];
    if (imageData) {
        free(imageData);
        image = nil;
    }
    NSLog(@"%@", DOCUMENT(@"raw_data_output.png"));
}
- Feeding an image in through GPUImageTextureInput. We read the image's pixel data, upload it to a texture of our own, and construct a GPUImageTextureInput from that texture.
- (IBAction)textureInputButtonTapped:(UIButton *)sender
{
    // Load the image and convert it to raw RGBA bytes
    UIImage *image = [UIImage imageNamed:@"3.jpg"];
    size_t width = CGImageGetWidth(image.CGImage);
    size_t height = CGImageGetHeight(image.CGImage);
    unsigned char *imageData = [QMImageHelper convertUIImageToBitmapRGBA8:image];
    // Upload the bytes into a texture of our own (a GL context compatible
    // with GPUImage's must be current here, as discussed above)
    glGenTextures(1, &_texture);
    glBindTexture(GL_TEXTURE_2D, _texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
    glBindTexture(GL_TEXTURE_2D, 0);
    // Wrap the texture in a GPUImageTextureInput and drive the chain
    GPUImageTextureInput *textureInput = [[GPUImageTextureInput alloc] initWithTexture:_texture size:CGSizeMake(width, height)];
    [textureInput addTarget:_imageView];
    [textureInput processTextureWithFrameTime:kCMTimeIndefinite];
    // Clean up
    if (imageData) {
        free(imageData);
        image = nil;
    }
}
- Getting a texture out through GPUImageTextureOutput. GPUImageTextureOutput hands us a texture object; we draw it into a framebuffer object with plain OpenGL ES, and finally turn that framebuffer's contents into an image.
NSString *const kVertexShaderString = SHADER_STRING
(
 attribute vec4 position;
 attribute vec4 inputTextureCoordinate;
 varying vec2 textureCoordinate;

 void main()
 {
     gl_Position = position;
     textureCoordinate = inputTextureCoordinate.xy;
 }
);

NSString *const kFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;

 void main()
 {
     gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
 }
);
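A quick aside: SHADER_STRING is GPUImage's stringification macro, which turns the shader source into an NSString literal. As I recall from the upstream headers, it is defined essentially as:
#define STRINGIZE(x) #x
#define STRINGIZE2(x) STRINGIZE(x)
#define SHADER_STRING(text) @ STRINGIZE2(text)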
- (IBAction)textureOutputButtonTapped:(UIButton *)sender
{
    UIImage *image = [UIImage imageNamed:@"3.jpg"];
    size_t width = CGImageGetWidth(image.CGImage);
    size_t height = CGImageGetHeight(image.CGImage);
    // Use GPUImagePicture as the source and attach a GPUImageTextureOutput
    GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:image];
    GPUImageTextureOutput *output = [[GPUImageTextureOutput alloc] init];
    [picture addTarget:output];
    [picture addTarget:_imageView];
    [picture processImage];
    // Draw the output texture into a framebuffer and turn it into an image
    runSynchronouslyOnContextQueue([GPUImageContext sharedImageProcessingContext], ^{
        // Build the program
        GLProgram *program = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:kVertexShaderString fragmentShaderString:kFragmentShaderString];
        [program addAttribute:@"position"];
        [program addAttribute:@"inputTextureCoordinate"];
        // Activate the program and the context
        [GPUImageContext setActiveShaderProgram:program];
        [GPUImageContext useImageProcessingContext];
        // Fetch a full GPUImageFramebuffer (texture plus FBO) to render into
        GPUImageFramebuffer *frameBuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:CGSizeMake(width, height) onlyTexture:NO];
        [frameBuffer lock];
        static const GLfloat imageVertices[] = {
            -1.0f, -1.0f,
             1.0f, -1.0f,
            -1.0f,  1.0f,
             1.0f,  1.0f,
        };
        static const GLfloat textureCoordinates[] = {
            0.0f, 0.0f,
            1.0f, 0.0f,
            0.0f, 1.0f,
            1.0f, 1.0f,
        };
        glClearColor(1.0, 1.0, 1.0, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);
        glViewport(0, 0, (GLsizei)width, (GLsizei)height);
        // Bind the texture produced by GPUImageTextureOutput to unit 2
        glActiveTexture(GL_TEXTURE2);
        glBindTexture(GL_TEXTURE_2D, output.texture);
        glUniform1i([program uniformIndex:@"inputImageTexture"], 2);
        // Make sure the attribute arrays are enabled before pointing them at our data
        // (GPUImage's own filters normally enable these once elsewhere)
        glEnableVertexAttribArray([program attributeIndex:@"position"]);
        glEnableVertexAttribArray([program attributeIndex:@"inputTextureCoordinate"]);
        glVertexAttribPointer([program attributeIndex:@"position"], 2, GL_FLOAT, 0, 0, imageVertices);
        glVertexAttribPointer([program attributeIndex:@"inputTextureCoordinate"], 2, GL_FLOAT, 0, 0, textureCoordinates);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        // Read the framebuffer contents back and write them out as a PNG
        CGImageRef outImage = [frameBuffer newCGImageFromFramebufferContents];
        NSData *pngData = UIImagePNGRepresentation([UIImage imageWithCGImage:outImage]);
        CGImageRelease(outImage); // the "new" prefix means the caller owns this image
        [pngData writeToFile:DOCUMENT(@"texture_output.png") atomically:YES];
        NSLog(@"%@", DOCUMENT(@"texture_output.png"));
        // Unlock the framebuffer and release the output texture
        [frameBuffer unlock];
        [output doneWithTexture];
    });
}
Summary
GPUImageRawDataInput, GPUImageRawDataOutput, GPUImageTextureInput, and GPUImageTextureOutput are not especially common in day-to-day use, and they are simple to work with. Reach for them whenever you need to feed raw pixel data or texture objects into a filter chain, or pull them back out.
Source: GPUImage source-reading series https://github.com/QinminiOS/GPUImage
Series index: GPUImage source reading http://www.lxweimin.com/nb/11749791