Introduction
The previous two articles showed how to implement personalized image processing (style transfer) with TensorFlow and Metal, describing the network architecture, the training procedure, and the underlying principles in detail.
Building on that work, this article combines it with Core ML, introduced by Apple at WWDC 2017, to present a new implementation approach that greatly reduces the amount of iOS-side code and lowers the integration effort.
Converting the Model to Core ML
Apple provides several converters for turning third-party models into the Core ML model format. Since our trained model is based on TensorFlow, I use the TensorFlow converter (tfcoreml) here; see Apple's official documentation for details on the individual converters.
Based on that tool, I wrote a small conversion script:
from matplotlib import pyplot
from matplotlib.pyplot import imshow
from PIL import Image
import tfcoreml
import numpy as np
import image_utils
import os
import tensorflow as tf
from coremltools.proto import FeatureTypes_pb2 as _FeatureTypes_pb2
import coremltools

tf_model_path = '../protobuf/frozen.pb'

# Convert the frozen TensorFlow graph to a Core ML model.
mlmodel = tfcoreml.convert(
    tf_model_path=tf_model_path,
    mlmodel_path='../mlmodel/stylize.mlmodel',
    output_feature_names=['transformer/expand/conv3/conv/Sigmoid:0'],
    input_name_shape_dict={'input:0': [1, 512, 512, 3], 'style:0': [32]})

# Test the converted model with the first style.
newstyle = np.zeros([32], dtype=np.float32)
newstyle[0] = 1

# Load a 512x512 test image and drop the batch dimension again.
newImage = np.expand_dims(image_utils.load_np_image(os.path.expanduser("../sample.jpg")), 0)
newImage = newImage.reshape((512, 512, 3))
imshow(newImage)
pyplot.show()

# Core ML expects channel-first (C, H, W) input, so transpose from (H, W, C).
coreml_image_input = np.transpose(newImage, (2, 0, 1))
# coreml_image_input = Image.open("../sample.jpg")
# imshow(coreml_image_input)
# pyplot.show()

# The 32-dim style vector has to be rank 5: (32, 1, 1, 1, 1).
coreml_style_index = newstyle[:, np.newaxis, np.newaxis, np.newaxis, np.newaxis]

coreml_input = {'input__0': coreml_image_input, 'style__0': coreml_style_index}
coreml_out = mlmodel.predict(coreml_input, useCPUOnly=True)['transformer__expand__conv3__conv__Sigmoid__0']

# The output is also (C, H, W); transpose back to (H, W, C) for display.
coreml_out = np.transpose(coreml_out, (1, 2, 0))
imshow(coreml_out)
pyplot.show()
The network structure was covered in the previous article, "iOS實現類Prisma軟件(二)" (Implementing a Prisma-like App on iOS, Part 2), which also fixed the input node names (input & style) and the output node name (transformer/expand/conv3/conv/Sigmoid) when the graph was saved. That is why the parameters can be filled in directly when calling tfcoreml.convert, which then generates and saves stylize.mlmodel.
Using the Core ML Model on iOS
Integrating the mlmodel is simple: just add it to the project, and Xcode automatically generates interface methods for the model that are convenient to call.
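For reference, the interface Xcode generates from stylize.mlmodel looks roughly like the sketch below. This is a reconstruction based on how the classes are used later in this article (stylizeInput, stylizeOutput, and the initWithStyle__0:input__0: initializer); the exact header Xcode produces may differ in details such as availability macros and nullability annotations.

// Rough sketch of the auto-generated "stylize.h" (assumption: reconstructed
// from the calls used in this article, not copied from the real generated file).
#import <CoreML/CoreML.h>

API_AVAILABLE(ios(11.0))
@interface stylizeInput : NSObject <MLFeatureProvider>
@property (readwrite, nonatomic) MLMultiArray *input__0;   // 3 x 512 x 512 image tensor
@property (readwrite, nonatomic) MLMultiArray *style__0;   // 32 x 1 x 1 x 1 x 1 style vector
- (instancetype)initWithStyle__0:(MLMultiArray *)style__0 input__0:(MLMultiArray *)input__0;
@end

API_AVAILABLE(ios(11.0))
@interface stylizeOutput : NSObject <MLFeatureProvider>
@property (readwrite, nonatomic) MLMultiArray *transformer__expand__conv3__conv__Sigmoid__0;
@end

API_AVAILABLE(ios(11.0))
@interface stylize : NSObject
@property (readonly, nonatomic, nullable) MLModel *model;
- (nullable stylizeOutput *)predictionFromFeatures:(stylizeInput *)input error:(NSError * _Nullable * _Nullable)error;
@end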
Initializing the model only requires:
#import "stylize.h"
@interface HomeViewController ()///<MTKViewDelegate>
{
stylize *styleModel;
}
@implementation HomeViewController
- (void)viewDidLoad {
[super viewDidLoad];
if (@available(iOS 12.0, *)) {
styleModel = [[stylize alloc] init];
} else {
NSLog(@"Need Run iOS 12.0+");
}
}
To call the prediction function, the input and output have to be constructed in the form the model expects:
- (void)createStyleImage:(UIImage *)source
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // The style input is a rank-5 multi-array (32 x 1 x 1 x 1 x 1):
        // set the selected style index to 1 and everything else to 0.
        MLMultiArray *styleArray = [[MLMultiArray alloc] initWithShape:@[@32, @1, @1, @1, @1]
                                                              dataType:MLMultiArrayDataTypeDouble
                                                                 error:nil];
        for (int i = 0; i < styleArray.count; i++) {
            [styleArray setObject:@0 atIndexedSubscript:i];
        }
        [styleArray setObject:@1 atIndexedSubscript:self->currentStyle];

        stylizeInput *input = [[stylizeInput alloc] initWithStyle__0:styleArray
                                                            input__0:[self getImagePixel:source]];
        stylizeOutput *output = [styleModel predictionFromFeatures:input error:nil];

        dispatch_async(dispatch_get_main_queue(), ^{
            self->_styleImageView.image = [self createImage:output.transformer__expand__conv3__conv__Sigmoid__0];
            self->isDone = true;
        });
    });
}
- (MLMultiArray *)getImagePixel:(UIImage *)image
{
    // Assumes the source image is already wanted_input_width x wanted_input_height (512 x 512).
    int width = image.size.width;
    int height = image.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
    CGContextRotateCTM(context, M_PI_2);
    UIImage *ogImg = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
    dispatch_async(dispatch_get_main_queue(), ^{
        self->_ogImageView.image = ogImg;
    });
    CGContextRelease(context);

    // Repack the RGBA bytes into a channel-first (3 x H x W) MLMultiArray,
    // matching the (C, H, W) layout the converted model expects.
    MLMultiArray *tmpArray = [[MLMultiArray alloc] initWithShape:@[@(wanted_input_channels),
                                                                   @(wanted_input_height),
                                                                   @(wanted_input_width)]
                                                        dataType:MLMultiArrayDataTypeDouble
                                                           error:nil];
    for (int y = 0; y < wanted_input_height; ++y) {
        for (int x = 0; x < wanted_input_width; ++x) {
            unsigned char *in_pixel = rawData + (y * wanted_input_width * bytesPerPixel) + (x * bytesPerPixel);
            for (int c = 0; c < wanted_input_channels; ++c) {
                [tmpArray setObject:[NSNumber numberWithUnsignedChar:in_pixel[c]]
                 atIndexedSubscript:c * wanted_input_height * wanted_input_width + y * wanted_input_width + x];
            }
        }
    }
    free(rawData);
    return tmpArray;
}
- (UIImage *)createImage:(MLMultiArray *)pixels API_AVAILABLE(ios(11.0))
{
    // Convert the (3 x H x W) output tensor (sigmoid values in [0, 1]) back into RGBA bytes.
    unsigned char *rawData = (unsigned char *)calloc(wanted_input_height * wanted_input_width * 4, sizeof(unsigned char));
    for (int y = 0; y < wanted_input_height; ++y) {
        unsigned char *out_row = rawData + (y * wanted_input_width * 4);
        for (int x = 0; x < wanted_input_width; ++x) {
            int index = x * wanted_input_width + y;
            unsigned char *out_pixel = out_row + (x * 4);
            for (int c = 0; c < wanted_input_channels; ++c) {
                out_pixel[c] = [[pixels objectAtIndexedSubscript:c * wanted_input_height * wanted_input_width + index] floatValue] * 255;
            }
            out_pixel[3] = UINT8_MAX; // opaque alpha
        }
    }
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * wanted_input_width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, wanted_input_width, wanted_input_height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    UIImage *retImg = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
    CGContextRelease(context);
    free(rawData);
    // Re-orient the final image.
    return [UIImage imageWithCGImage:retImg.CGImage scale:1 orientation:UIImageOrientationLeftMirrored];
}
Because we did not declare an image input when converting the Core ML model, all data has to be handled in MLMultiArray form. If that feels cumbersome, the model can instead be converted as follows:
mlmodel = tfcoreml.convert(
    tf_model_path=tf_model_path,
    mlmodel_path='../mlmodel/stylize.mlmodel',
    output_feature_names=['transformer/expand/conv3/conv/Sigmoid:0'],
    input_name_shape_dict={'input:0': [1, 512, 512, 3], 'style:0': [32]},
    image_input_names=['input:0'])

#spec = mlmodel.get_spec()
#output = spec.description.output[0]
#output.type.imageType.colorSpace = _FeatureTypes_pb2.ImageFeatureType.ColorSpace.Value('RGB')
#output.type.imageType.width = 512
#output.type.imageType.height = 512
#coremltools.models.utils.save_spec(spec, '../mlmodel/stylize.mlmodel')
Here, the image input node is declared directly in the tfcoreml.convert call via image_input_names=['input:0']. The output node can also be exposed as an image (see the commented-out spec edits above), but that does not work with the model we saved; with the official pb file it does. With an image input, the model input can be wrapped in a CVPixelBufferRef, so there is no need to convert the image to bytes and transpose the matrix yourself.
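As an illustration, here is a minimal sketch (not taken from the project) of how a UIImage could be rendered into a CVPixelBufferRef for such an image input. The 512x512 size and the kCVPixelFormatType_32BGRA format are assumptions that should be checked against what the regenerated stylize interface actually declares:

// Minimal sketch (assumption): with image_input_names set, the generated
// stylizeInput takes a CVPixelBufferRef instead of an MLMultiArray, so the
// image only needs to be rendered into a pixel buffer of the right size.
- (CVPixelBufferRef)pixelBufferFromImage:(UIImage *)image
{
    NSDictionary *options = @{(id)kCVPixelBufferCGImageCompatibilityKey : @YES,
                              (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES};
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, 512, 512, kCVPixelFormatType_32BGRA,
                        (__bridge CFDictionaryRef)options, &pixelBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                                 512, 512, 8,
                                                 CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                 colorSpace,
                                                 kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
    // Draw the source image into the buffer; Core ML reads the pixels from it directly.
    CGContextDrawImage(context, CGRectMake(0, 0, 512, 512), image.CGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    // Caller is responsible for CVPixelBufferRelease.
    return pixelBuffer;
}

The buffer would then replace the MLMultiArray for input__0 in the generated input initializer; since the interface was not regenerated for this article, treat the exact signature as an assumption.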
Results
Straight to the screenshots: