QR Code Generation

Starting with iOS 7, the system has built-in support for generating and reading QR codes. The previously popular ZBar SDK does not support 64-bit processors.

Steps to generate a QR code:
1. Import the CoreImage framework
2. Generate the QR code through a CIFilter

What a QR code can contain (traditional barcodes can only hold digits):
- Plain text
- A contact card (vCard), which is essentially formatted text
- A URL, which is essentially a string, i.e. plain text
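As an illustration of the contact-card bullet: a vCard is just formatted text handed to the generator like any other string. A minimal sketch, with made-up contact details:

```objc
// A minimal vCard payload — the contact details are hypothetical.
// To the QR code generator this is just another string to encode.
NSString *vCard = @"BEGIN:VCARD\n"
                   "VERSION:3.0\n"
                   "FN:Zhang San\n"
                   "TEL:13800000000\n"
                   "END:VCARD";
```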
#import <CoreImage/CoreImage.h>
@implementation ViewController
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // 1. Create the filter ("CIQRCodeGenerator" is Core Image's QR code generator)
    CIFilter *filter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
    // 2. Restore the filter's default settings
    [filter setDefaults];
    // 3. Hand the content to encode to the filter, as UTF-8 data
    NSString *dataString = @"http://www.520it.com";
    NSData *data = [dataString dataUsingEncoding:NSUTF8StringEncoding];
    [filter setValue:data forKey:@"inputMessage"];
    // 4. Get the generated QR code image
    CIImage *outputImage = [filter outputImage];
    // 5. Display the QR code
    self.imageView.image = [self createNonInterpolatedUIImageFromCIImage:outputImage withSize:200];
}

/**
 *  Generates a UIImage of the given size from a CIImage without interpolation,
 *  so the QR code stays sharp instead of blurring when scaled up
 *  (this method could also be written as a category on UIImage)
 *
 *  @param image the CIImage produced by the filter
 *  @param size  the desired width/height in points
 */
- (UIImage *)createNonInterpolatedUIImageFromCIImage:(CIImage *)image withSize:(CGFloat)size
{
    CGRect extent = CGRectIntegral(image.extent);
    CGFloat scale = MIN(size / CGRectGetWidth(extent), size / CGRectGetHeight(extent));
    // 1. Create a grayscale bitmap context and draw the CIImage into it,
    //    scaled up with interpolation disabled
    size_t width = CGRectGetWidth(extent) * scale;
    size_t height = CGRectGetHeight(extent) * scale;
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceGray();
    CGContextRef bitmapRef = CGBitmapContextCreate(nil, width, height, 8, 0, cs, (CGBitmapInfo)kCGImageAlphaNone);
    CGColorSpaceRelease(cs);
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef bitmapImage = [context createCGImage:image fromRect:extent];
    CGContextSetInterpolationQuality(bitmapRef, kCGInterpolationNone);
    CGContextScaleCTM(bitmapRef, scale, scale);
    CGContextDrawImage(bitmapRef, extent, bitmapImage);
    // 2. Extract the scaled bitmap as a UIImage and release the Core Graphics objects
    CGImageRef scaledImage = CGBitmapContextCreateImage(bitmapRef);
    CGContextRelease(bitmapRef);
    CGImageRelease(bitmapImage);
    UIImage *result = [UIImage imageWithCGImage:scaledImage];
    CGImageRelease(scaledImage);
    return result;
}
@end
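CIQRCodeGenerator also accepts an optional inputCorrectionLevel key that controls how much error-correction data is embedded in the code. A small sketch, where QRCodeImage is a made-up helper name:

```objc
#import <CoreImage/CoreImage.h>

// Hypothetical helper: generate a QR CIImage with a chosen error-correction level.
// Valid levels are @"L" (~7%), @"M" (~15%, the default), @"Q" (~25%), @"H" (~30%) —
// higher levels survive more damage but produce a denser code.
CIImage *QRCodeImage(NSString *content, NSString *correctionLevel)
{
    CIFilter *filter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
    [filter setDefaults];
    [filter setValue:[content dataUsingEncoding:NSUTF8StringEncoding] forKey:@"inputMessage"];
    [filter setValue:correctionLevel forKey:@"inputCorrectionLevel"];
    return filter.outputImage;
}
```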
QR Code Reading

Reading a QR code requires the AVFoundation framework.

The camera is used to recognize the content of a QR code (this does not work in the simulator):
1. Input (the camera)
2. A capture session converts the QR code image captured by the camera into string data
3. Output (the decoded metadata)
4. A preview layer displays the live camera scene
#import <AVFoundation/AVFoundation.h>
@interface ViewController () <AVCaptureMetadataOutputObjectsDelegate>
// Strong references: nothing else is guaranteed to retain the session or the layer
@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *layer;
@end
@implementation ViewController
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // 1. Create the capture session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    self.session = session;
    // 2. Add the input device (data comes from the camera)
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (input == nil) { // nil on the simulator or when camera access is unavailable
        NSLog(@"Failed to open the camera: %@", error);
        return;
    }
    [session addInput:input];
    // 3. Add the metadata output
    AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
    [output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    [session addOutput:output];
    // 3.1 Restrict the metadata types to QR codes
    //     (must be set after the output has been added to the session)
    [output setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];
    // 4. Add the preview layer that shows the camera scene
    AVCaptureVideoPreviewLayer *layer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    layer.frame = self.view.bounds;
    [self.view.layer addSublayer:layer];
    self.layer = layer;
    // 5. Start scanning
    [session startRunning];
}
#pragma mark - AVCaptureMetadataOutputObjectsDelegate
// Called whenever the output recognizes metadata (here, a QR code) in the video stream
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
    if (metadataObjects.count > 0) {
        AVMetadataMachineReadableCodeObject *object = [metadataObjects lastObject];
        NSLog(@"%@", object.stringValue);
        // Stop scanning
        [self.session stopRunning];
        // Remove the preview layer
        [self.layer removeFromSuperlayer];
    } else {
        NSLog(@"No QR code detected");
    }
}
@end
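By default the output scans the entire camera frame. If recognition should be limited to a smaller viewfinder region, AVCaptureMetadataOutput has a rectOfInterest property. A sketch, assuming the output and layer variables from the setup code above (scanRect is an arbitrary example region):

```objc
// Restrict recognition to a sub-region of the preview (scanRect is arbitrary).
// rectOfInterest uses a normalized, rotated coordinate space, so convert a
// view-coordinate rect with the preview layer's helper instead of guessing.
CGRect scanRect = CGRectMake(50, 150, 220, 220);
output.rectOfInterest = [layer metadataOutputRectOfInterestForRect:scanRect];
```

Note that metadataOutputRectOfInterestForRect: only returns meaningful values once the session's connections are established, so it is typically called after startRunning.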