Overview
GPUImage is a well-known open-source image processing library that lets you apply GPU-accelerated filters and other effects to images, video, and the camera. Compared with the Core Image framework, GPUImage exposes interfaces that let you plug in fully custom filters. Project page: https://github.com/BradLarson/GPUImage
This article reads through the source of the GPUImageFilter class in the GPUImage framework. GPUImageFilter is one of the most important and most fundamental classes in GPUImage: it handles framebuffer input and output, but applies no effect to the texture, simply passing it through. It mostly serves as the base class for other filters; concrete effects are implemented by its subclasses. It also accepts only a single input framebuffer; handling multiple input framebuffers is likewise left to subclasses. The source is discussed below.
GPUImageFilter
Goals
- Implement a custom filter effect by subclassing GPUImageFilter.
- Implement a simple lighting effect.
GPUImageFilter
GPUImageFilter itself implements no filter effect; it simply outputs the input texture unchanged. It mostly serves as a base class for other filters: it provides the most basic interfaces and drives the overall flow of the filter chain. GPUImageFilter inherits from GPUImageOutput and implements the GPUImageInput protocol, so it can take an input texture, process it, and output the result, applying an effect along the way. A single chain can contain several GPUImageFilter instances, which is how filter effects are stacked.
- Matrices
GPUImage mainly uses 3- and 4-component vectors and 3x3 and 4x4 matrices, corresponding to vec3, vec4, mat3, and mat4 in OpenGL. These types exist to make passing values to shaders convenient: GPUImageFilter defines a family of setter interfaces that take them, so uploading a uniform is straightforward. The vector and matrix types are defined as follows:
struct GPUVector4 {
GLfloat one;
GLfloat two;
GLfloat three;
GLfloat four;
};
typedef struct GPUVector4 GPUVector4;
struct GPUVector3 {
GLfloat one;
GLfloat two;
GLfloat three;
};
typedef struct GPUVector3 GPUVector3;
struct GPUMatrix4x4 {
GPUVector4 one;
GPUVector4 two;
GPUVector4 three;
GPUVector4 four;
};
typedef struct GPUMatrix4x4 GPUMatrix4x4;
struct GPUMatrix3x3 {
GPUVector3 one;
GPUVector3 two;
GPUVector3 three;
};
typedef struct GPUMatrix3x3 GPUMatrix3x3;
- Shaders
The shader program is the heart of a filter; it determines what the filter looks like. GPUImageFilter's shaders are trivial: they just sample the texture without touching the pixel data. When writing a custom filter you usually only need to change the fragment shader; if multiple input textures are involved, you can use the multi-input filters introduced earlier (also subclasses of GPUImageFilter, but extended to accept more framebuffers). Here are GPUImageFilter's shaders:
NSString *const kGPUImageVertexShaderString = SHADER_STRING
(
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
gl_Position = position;
textureCoordinate = inputTextureCoordinate.xy;
}
);
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImagePassthroughFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
}
);
#else
NSString *const kGPUImagePassthroughFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
}
);
#endif
- Instance variables
GPUImageFilter has two particularly important instance variables: firstInputFramebuffer, the input framebuffer object, and filterProgram, the GL program.
@interface GPUImageFilter : GPUImageOutput <GPUImageInput>
{
// Input framebuffer object
GPUImageFramebuffer *firstInputFramebuffer;
// GL program
GLProgram *filterProgram;
// Attribute locations
GLint filterPositionAttribute, filterTextureCoordinateAttribute;
// Input texture uniform location
GLint filterInputTextureUniform;
// GL clear color
GLfloat backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha;
// Whether processing has finished
BOOL isEndProcessing;
CGSize currentFilterSize;
// Input rotation mode
GPUImageRotationMode inputRotation;
BOOL currentlyReceivingMonochromeInput;
// Dictionary holding the uniform-state restoration blocks
NSMutableDictionary *uniformStateRestorationBlocks;
// Semaphore guarding image capture
dispatch_semaphore_t imageCaptureSemaphore;
}
- Initializers
The designated initializer of GPUImageFilter takes both a vertex shader and a fragment shader, though usually you only need to supply a fragment shader. Initialization can be summarized in these steps: 1. initialize the instance variables; 2. set up the GL context; 3. fetch or create the GL program; 4. link the GL program; 5. look up the GL attribute and uniform locations.
- (id)initWithVertexShaderFromString:(NSString *)vertexShaderString fragmentShaderFromString:(NSString *)fragmentShaderString;
- (id)initWithFragmentShaderFromString:(NSString *)fragmentShaderString;
- (id)initWithFragmentShaderFromFile:(NSString *)fragmentShaderFilename;
/******************* Implementation ************************************/
- (id)initWithVertexShaderFromString:(NSString *)vertexShaderString fragmentShaderFromString:(NSString *)fragmentShaderString;
{
if (!(self = [super init]))
{
return nil;
}
// Initialize instance variables
uniformStateRestorationBlocks = [NSMutableDictionary dictionaryWithCapacity:10];
_preventRendering = NO;
currentlyReceivingMonochromeInput = NO;
inputRotation = kGPUImageNoRotation;
backgroundColorRed = 0.0;
backgroundColorGreen = 0.0;
backgroundColorBlue = 0.0;
backgroundColorAlpha = 0.0;
imageCaptureSemaphore = dispatch_semaphore_create(0);
dispatch_semaphore_signal(imageCaptureSemaphore);
runSynchronouslyOnVideoProcessingQueue(^{
// Set up the GL context
[GPUImageContext useImageProcessingContext];
// Fetch or create the GL program
filterProgram = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:vertexShaderString fragmentShaderString:fragmentShaderString];
if (!filterProgram.initialized)
{
// Bind the attribute locations
[self initializeAttributes];
// Link the shader program
if (![filterProgram link])
{
// Log the link and compile errors
NSString *progLog = [filterProgram programLog];
NSLog(@"Program link log: %@", progLog);
NSString *fragLog = [filterProgram fragmentShaderLog];
NSLog(@"Fragment shader compile log: %@", fragLog);
NSString *vertLog = [filterProgram vertexShaderLog];
NSLog(@"Vertex shader compile log: %@", vertLog);
filterProgram = nil;
NSAssert(NO, @"Filter shader link failed");
}
}
// Look up the position attribute
filterPositionAttribute = [filterProgram attributeIndex:@"position"];
// Look up the texture coordinate attribute
filterTextureCoordinateAttribute = [filterProgram attributeIndex:@"inputTextureCoordinate"];
// Look up the input texture uniform
filterInputTextureUniform = [filterProgram uniformIndex:@"inputImageTexture"]; // This does assume a name of "inputImageTexture" for the fragment shader
// Make this the active GL program
[GPUImageContext setActiveShaderProgram:filterProgram];
// Enable the vertex attribute arrays
glEnableVertexAttribArray(filterPositionAttribute);
glEnableVertexAttribArray(filterTextureCoordinateAttribute);
});
return self;
}
- (id)initWithFragmentShaderFromString:(NSString *)fragmentShaderString;
{
if (!(self = [self initWithVertexShaderFromString:kGPUImageVertexShaderString fragmentShaderFromString:fragmentShaderString]))
{
return nil;
}
return self;
}
- (id)initWithFragmentShaderFromFile:(NSString *)fragmentShaderFilename;
{
NSString *fragmentShaderPathname = [[NSBundle mainBundle] pathForResource:fragmentShaderFilename ofType:@"fsh"];
NSString *fragmentShaderString = [NSString stringWithContentsOfFile:fragmentShaderPathname encoding:NSUTF8StringEncoding error:nil];
if (!(self = [self initWithFragmentShaderFromString:fragmentShaderString]))
{
return nil;
}
return self;
}
- Other methods
Many of GPUImageFilter's remaining methods pass values to the shaders; there are so many of them because uniforms come in different types (GLint, GLfloat, vec2, vec3, mat3, and so on). Three methods deserve special attention:
- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
- (void)informTargetsAboutNewFrameAtTime:(CMTime)frameTime;
These three drive the filter chain: GPUImageFilter draws the incoming framebuffer through its fragment shader into its own output framebuffer, then hands that framebuffer to all of its targets and notifies them to process it. They are called in this order:
1. Produce a new frame: - (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
2. Perform the GL draw: - (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
3. Notify every target once drawing finishes: - (void)informTargetsAboutNewFrameAtTime:(CMTime)frameTime;
Let's look at these methods next.
// Transform methods
- (void)setupFilterForSize:(CGSize)filterFrameSize;
- (CGSize)rotatedSize:(CGSize)sizeToRotate forIndex:(NSInteger)textureIndex;
- (CGPoint)rotatedPoint:(CGPoint)pointToRotate forRotation:(GPUImageRotationMode)rotation;
// Query methods
- (CGSize)sizeOfFBO;
+ (const GLfloat *)textureCoordinatesForRotation:(GPUImageRotationMode)rotationMode;
- (CGSize)outputFrameSize;
// Rendering methods
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
- (void)informTargetsAboutNewFrameAtTime:(CMTime)frameTime;
// Set the clear color
- (void)setBackgroundColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent alpha:(GLfloat)alphaComponent;
// Uniform setter methods
- (void)setInteger:(GLint)newInteger forUniformName:(NSString *)uniformName;
- (void)setFloat:(GLfloat)newFloat forUniformName:(NSString *)uniformName;
- (void)setSize:(CGSize)newSize forUniformName:(NSString *)uniformName;
- (void)setPoint:(CGPoint)newPoint forUniformName:(NSString *)uniformName;
- (void)setFloatVec3:(GPUVector3)newVec3 forUniformName:(NSString *)uniformName;
- (void)setFloatVec4:(GPUVector4)newVec4 forUniform:(NSString *)uniformName;
- (void)setFloatArray:(GLfloat *)array length:(GLsizei)count forUniform:(NSString*)uniformName;
- (void)setMatrix3f:(GPUMatrix3x3)matrix forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setMatrix4f:(GPUMatrix4x4)matrix forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setFloat:(GLfloat)floatValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setPoint:(CGPoint)pointValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setSize:(CGSize)sizeValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setVec3:(GPUVector3)vectorValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setVec4:(GPUVector4)vectorValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setFloatArray:(GLfloat *)arrayValue length:(GLsizei)arrayLength forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setInteger:(GLint)intValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setAndExecuteUniformStateCallbackAtIndex:(GLint)uniform forProgram:(GLProgram *)shaderProgram toBlock:(dispatch_block_t)uniformStateBlock;
- (void)setUniformsForProgramAtIndex:(NSUInteger)programIndex;
/******************* Implementation ************************************/
// Texture coordinates for each rotation mode
+ (const GLfloat *)textureCoordinatesForRotation:(GPUImageRotationMode)rotationMode;
{
static const GLfloat noRotationTextureCoordinates[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
};
static const GLfloat rotateLeftTextureCoordinates[] = {
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 0.0f,
0.0f, 1.0f,
};
static const GLfloat rotateRightTextureCoordinates[] = {
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 1.0f,
1.0f, 0.0f,
};
static const GLfloat verticalFlipTextureCoordinates[] = {
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};
static const GLfloat horizontalFlipTextureCoordinates[] = {
1.0f, 0.0f,
0.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
};
static const GLfloat rotateRightVerticalFlipTextureCoordinates[] = {
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
1.0f, 1.0f,
};
static const GLfloat rotateRightHorizontalFlipTextureCoordinates[] = {
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
0.0f, 0.0f,
};
static const GLfloat rotate180TextureCoordinates[] = {
1.0f, 1.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f,
};
switch(rotationMode)
{
case kGPUImageNoRotation: return noRotationTextureCoordinates;
case kGPUImageRotateLeft: return rotateLeftTextureCoordinates;
case kGPUImageRotateRight: return rotateRightTextureCoordinates;
case kGPUImageFlipVertical: return verticalFlipTextureCoordinates;
case kGPUImageFlipHorizonal: return horizontalFlipTextureCoordinates;
case kGPUImageRotateRightFlipVertical: return rotateRightVerticalFlipTextureCoordinates;
case kGPUImageRotateRightFlipHorizontal: return rotateRightHorizontalFlipTextureCoordinates;
case kGPUImageRotate180: return rotate180TextureCoordinates;
}
}
// Produce a new frame
- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
static const GLfloat imageVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
// First render into the output framebuffer
[self renderToTextureWithVertices:imageVertices textureCoordinates:[[self class] textureCoordinatesForRotation:inputRotation]];
// Then notify all targets
[self informTargetsAboutNewFrameAtTime:frameTime];
}
// Render into the output framebuffer
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
{
if (self.preventRendering)
{
[firstInputFramebuffer unlock];
return;
}
[GPUImageContext setActiveShaderProgram:filterProgram];
outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:[self sizeOfFBO] textureOptions:self.outputTextureOptions onlyTexture:NO];
[outputFramebuffer activateFramebuffer];
if (usingNextFrameForImageCapture)
{
[outputFramebuffer lock];
}
[self setUniformsForProgramAtIndex:0];
// GL drawing
glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
glClear(GL_COLOR_BUFFER_BIT);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, [firstInputFramebuffer texture]);
glUniform1i(filterInputTextureUniform, 2);
glVertexAttribPointer(filterPositionAttribute, 2, GL_FLOAT, 0, 0, vertices);
glVertexAttribPointer(filterTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, textureCoordinates);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Unlock the input framebuffer
[firstInputFramebuffer unlock];
// Image capture must wait until drawing has finished
if (usingNextFrameForImageCapture)
{
// Signal that rendering is complete
dispatch_semaphore_signal(imageCaptureSemaphore);
}
}
// Notify all targets
- (void)informTargetsAboutNewFrameAtTime:(CMTime)frameTime;
{
if (self.frameProcessingCompletionBlock != NULL)
{
self.frameProcessingCompletionBlock(self, frameTime);
}
// Hand the output framebuffer to every target
for (id<GPUImageInput> currentTarget in targets)
{
if (currentTarget != self.targetToIgnoreForUpdates)
{
NSInteger indexOfObject = [targets indexOfObject:currentTarget];
NSInteger textureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
[self setInputFramebufferForTarget:currentTarget atIndex:textureIndex];
[currentTarget setInputSize:[self outputFrameSize] atIndex:textureIndex];
}
}
// Release our hold so it can return to the cache immediately upon processing
[[self framebufferForOutput] unlock];
if (usingNextFrameForImageCapture)
{
// usingNextFrameForImageCapture = NO;
}
else
{
[self removeOutputFramebuffer];
}
// Tell every target to process the new frame
for (id<GPUImageInput> currentTarget in targets)
{
if (currentTarget != self.targetToIgnoreForUpdates)
{
NSInteger indexOfObject = [targets indexOfObject:currentTarget];
NSInteger textureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
// Have each target produce its own new frame
[currentTarget newFrameReadyAtTime:frameTime atIndex:textureIndex];
}
}
}
// Before capturing an image, consume the semaphore so the capture waits until the GL draw has finished
- (void)useNextFrameForImageCapture;
{
usingNextFrameForImageCapture = YES;
// Consume the semaphore
if (dispatch_semaphore_wait(imageCaptureSemaphore, DISPATCH_TIME_NOW) != 0)
{
return;
}
}
// Wait for the render-complete signal, then produce the image
- (CGImageRef)newCGImageFromCurrentlyProcessedOutput
{
// Give it three seconds to process, then abort if they forgot to set up the image capture properly
double timeoutForImageCapture = 3.0;
dispatch_time_t convertedTimeout = dispatch_time(DISPATCH_TIME_NOW, timeoutForImageCapture * NSEC_PER_SEC);
// Wait for the GL draw to finish, up to the timeout
if (dispatch_semaphore_wait(imageCaptureSemaphore, convertedTimeout) != 0)
{
return NULL;
}
// Rendering finished in time; create the CGImage
GPUImageFramebuffer* framebuffer = [self framebufferForOutput];
usingNextFrameForImageCapture = NO;
dispatch_semaphore_signal(imageCaptureSemaphore);
CGImageRef image = [framebuffer newCGImageFromFramebufferContents];
return image;
}
Implementation walkthrough
First, the custom filter effect.
1. Create QMFishEyeFilter as a subclass of GPUImageFilter.
//
//  QMFishEyeFilter.h
//  GPUImageFilter
//
//  Created by qinmin on 2017/6/8.
//  Copyright © 2017 Qinmin. All rights reserved.
//
#import <GPUImage.h>
@interface QMFishEyeFilter : GPUImageFilter
@property (nonatomic, assign) GLfloat radius;
- (instancetype)init;
@end
2. Override the - (instancetype)init; method.
- (instancetype)init
{
if (self = [super initWithFragmentShaderFromString:kQMFishEyeFilterFragmentShaderString]) {
radiusUniform = [filterProgram uniformIndex:@"radius"];
self.radius = 0.5;
[self setBackgroundColorRed:0.0 green:1.0 blue:0.0 alpha:1.0];
}
return self;
}
3. Write the custom shader code.
NSString *const kQMFishEyeFilterFragmentShaderString = SHADER_STRING
(
precision highp float;
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform float radius;
const float PI = 3.1415926535;
void main()
{
float aperture = 175.0;
float apertureHalf = radius * aperture * (PI / 180.0);
float maxFactor = sin(apertureHalf);
vec2 uv;
vec2 xy = 2.0 * textureCoordinate - 1.0;
float d = length(xy);
if (d < (2.0 - maxFactor)) {
d = length(xy * maxFactor);
float z = sqrt(1.0 - d * d);
float r = atan(d, z) / PI;
float phi = atan(xy.y, xy.x);
uv.x = r * cos(phi) + radius;
uv.y = r * sin(phi) + radius;
}else {
uv = textureCoordinate;
}
vec4 color = texture2D(inputImageTexture, uv);
gl_FragColor = color;
}
);
4. Use the custom filter.
#pragma mark - Events
- (IBAction)startButtonTapped:(UIButton *)sender
{
// Load the image
GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"3.jpg"]];
QMFishEyeFilter *filter = [[QMFishEyeFilter alloc] init];
[picture addTarget:filter];
[filter addTarget:_imageView];
[picture processImage];
}
Next, the simple lighting effect.
1. Create QM3DLightFilter as a subclass of GPUImageFilter.
//
//  QM3DLightFilter.h
//  GPUImageFilter
//
//  Created by qinmin on 2017/6/8.
//  Copyright © 2017 Qinmin. All rights reserved.
//
#import <GPUImage.h>
@interface QM3DLightFilter : GPUImageFilter
- (instancetype)init;
@end
2. Override the - (instancetype)init; method.
- (instancetype)init
{
if (self = [super initWithVertexShaderFromString:kQM3DLightFilterVertexShaderString fragmentShaderFromString:kQM3DLightFilterFragmentShaderString]) {
[filterProgram addAttribute:@"normal"];
filterNormalAttribute = [filterProgram attributeIndex:@"normal"];
glEnableVertexAttribArray(filterNormalAttribute);
pUniform = [filterProgram uniformIndex:@"P"];
mvUniform = [filterProgram uniformIndex:@"MV"];
normalMatUniform = [filterProgram uniformIndex:@"normalMat"];
[self setMVPMatrix];
[self setupSurface];
[self setBackgroundColorRed:1.0 green:1.0 blue:1.0 alpha:1.0];
}
return self;
}
3. Initialize the matrices and the model. Model loading uses the open-source tiny_obj_loader.
- (void)setMVPMatrix
{
mat4_t P = mat4_perspective(M_PI/3, 1.0, 1.0, 10.0);
[self setMatrix4f:*((GPUMatrix4x4 *)&P) forUniform:pUniform program:filterProgram];
mat4_t MV = mat4_create_translation(0, 0, -2.2);
[self setMatrix4f:*((GPUMatrix4x4 *)&MV) forUniform:mvUniform program:filterProgram];
mat4_t normalMat = mat4_transpose(mat4_inverse(MV, NULL));
[self setMatrix4f:*((GPUMatrix4x4 *)&normalMat) forUniform:normalMatUniform program:filterProgram];
}
- (void)setupSurface
{
NSString *path = [[NSBundle mainBundle] pathForResource:@"Sphere" ofType:@"obj"];
_tinyOBJModel = std::make_shared<TinyOBJModel>();
_tinyOBJModel->LoadObj(path.UTF8String);
}
4. Write the shader programs.
NSString *const kQM3DLightFilterVertexShaderString = SHADER_STRING
(
attribute vec4 position;
attribute vec2 inputTextureCoordinate;
attribute vec3 normal;
uniform mat4 MV;
uniform mat4 P;
uniform mat4 normalMat;
varying vec2 textureCoordinate;
varying vec3 vNormal;
varying vec3 vPosition;
void main()
{
gl_Position = P * MV * position;
textureCoordinate = inputTextureCoordinate;
vPosition = mat3(MV) * vec3(position);
vNormal = mat3(normalMat) * normal;
}
);
NSString *const kQM3DLightFilterFragmentShaderString = SHADER_STRING
(
precision highp float;
varying vec2 textureCoordinate;
varying vec3 vNormal;
varying vec3 vPosition;
uniform sampler2D inputImageTexture;
void main()
{
vec3 lightPos = vec3(5.0, -5.0, 0.0);
vec3 L = normalize(lightPos);
vec3 N = normalize(vNormal);
//ambient
vec4 AmbientLightColor = vec4(0.5, 0.5, 0.5, 1.0);
vec4 AmbientMaterial = vec4(0.2, 0.2, 0.2, 1.0);
vec4 ambientColor = AmbientLightColor * AmbientMaterial;
//diffuse
vec4 DiffuseLightColor = vec4(1.0, 1.0, 1.0, 1.0);
vec4 DiffuseMaterial = vec4(0.8, 0.8, 0.8, 1.0);
vec4 diffuseColor = DiffuseLightColor * DiffuseMaterial * max(0.0, dot(N, L));
// Specular
vec4 SpecularLightColor = vec4(1.0, 1.0, 0.0, 1.0);
vec4 SpecularMaterial = vec4(0.7, 0.7, 0.7, 1.0);
vec3 eye = vec3(1.0, -2.0, 5.0) - vPosition;
vec3 H = normalize(eye + L);
vec4 specularColor = SpecularLightColor * SpecularMaterial * pow(max(0.0, dot(N, H)), 2.5);
// All light
vec4 light = ambientColor + diffuseColor + specularColor;
vec4 color = texture2D(inputImageTexture, textureCoordinate);
gl_FragColor = color * light;
}
);
5. Override the render method - (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;. During rendering, depth testing is enabled, and the vertex data is stored in VBOs.
#pragma mark - Overwrite
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
{
if (self.preventRendering)
{
[firstInputFramebuffer unlock];
return;
}
[GPUImageContext setActiveShaderProgram:filterProgram];
outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:[self sizeOfFBO] textureOptions:self.outputTextureOptions onlyTexture:NO];
[outputFramebuffer activateFramebuffer];
if (usingNextFrameForImageCapture)
{
[outputFramebuffer lock];
}
[self setUniformsForProgramAtIndex:0];
// Setup depth render buffer
int width, height;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
// Create a depth buffer that has the same size as the color buffer.
GLuint depthRenderBuffer;
glGenRenderbuffers(1, &depthRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
// Attach color render buffer and depth render buffer to frameBuffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
GL_RENDERBUFFER, depthRenderBuffer);
glEnable(GL_DEPTH_TEST);
glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, [firstInputFramebuffer texture]);
glUniform1i(filterInputTextureUniform, 2);
int stride = 8 * sizeof(GLfloat);
const GLvoid* normalOffset = (const GLvoid*)(5 * sizeof(GLfloat));
const GLvoid* texCoordOffset = (const GLvoid*)(3 * sizeof(GLfloat));
glBindBuffer(GL_ARRAY_BUFFER, _tinyOBJModel->getVertexBuffer());
glVertexAttribPointer(filterPositionAttribute, 3, GL_FLOAT, GL_FALSE, stride, 0);
glVertexAttribPointer(filterNormalAttribute, 3, GL_FLOAT, GL_FALSE, stride, normalOffset);
glVertexAttribPointer(filterTextureCoordinateAttribute, 2, GL_FLOAT, GL_FALSE, stride, texCoordOffset);
// Draw the triangles.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _tinyOBJModel->getIndexBuffer());
glDrawElements(GL_TRIANGLES, _tinyOBJModel->getIndexCount(), GL_UNSIGNED_INT, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisable(GL_DEPTH_TEST);
[firstInputFramebuffer unlock];
if (usingNextFrameForImageCapture)
{
dispatch_semaphore_signal(imageCaptureSemaphore);
}
}
6. Use the filter.
- (IBAction)filterButtonTapped:(UIButton *)sender
{
// Load the image
GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"3.jpg"]];
QM3DLightFilter *filter = [[QM3DLightFilter alloc] init];
[picture addTarget:filter];
[filter addTarget:_imageView];
[picture processImage];
}
Summary
GPUImageFilter is the foundation of every filter: most other filters inherit from it, directly or indirectly, and those filters can be combined to build complex multi-filter effects. Tracing its render flow is therefore a big help in understanding the GPUImage framework as a whole.
Source code: GPUImage source reading series https://github.com/QinminiOS/GPUImage
Series articles: GPUImage source reading http://www.lxweimin.com/nb/11749791