Implementing QR Code Scanning Natively on iOS

I needed QR code scanning recently. There are third-party frameworks for this, but I wanted to understand how it works under the hood, so I used the system's native APIs. A few spots were a little tricky, so I'm writing them down here as a reminder for next time.

QR code scanning

Without further ado, here's the code:

#import <AVFoundation/AVFoundation.h>  // import the AVFoundation framework
@interface ViewController () <AVCaptureMetadataOutputObjectsDelegate> // conform to the AVCaptureMetadataOutputObjectsDelegate protocol
@property (strong, nonatomic) AVCaptureDevice *device;       // capture device, defaults to the back camera
@property (strong, nonatomic) AVCaptureDeviceInput *input;   // capture input
@property (strong, nonatomic) AVCaptureMetadataOutput *output; // metadata output; its output types and scan area need to be configured
@property (strong, nonatomic) AVCaptureSession *session;     // the hub of AVFoundation's capture classes; coordinates inputs and outputs to produce data
@property (strong, nonatomic) AVCaptureVideoPreviewLayer *previewLayer; // layer that displays the captured video, a CALayer subclass
@property (strong, nonatomic) UIView *scanView;              // marks where the scan frame sits on screen

Initialize the objects with lazy getters:

- (AVCaptureDevice *)device
{
    if (_device == nil) {
    // set the AVCaptureDevice media type to Video
        _device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    }
    return _device;
}

- (AVCaptureDeviceInput *)input
{
    if (_input == nil) {
    // initialize the capture input
        _input = [AVCaptureDeviceInput deviceInputWithDevice:self.device error:nil];
    }
    return _input;
}

When configuring the output, pay attention to the rectOfInterest property. Its default is CGRect(x: 0, y: 0, width: 1, height: 1), which lets the whole frame be scanned but is relatively slow. Note that rectOfInterest is expressed as proportions, not points: each component is the scan container's position or size divided by the screen's (if there is a navigation bar, remember to subtract its height first).
The rectangle is referenced to the top-left corner of the landscape frame, not portrait, so the components have to be swapped when setting it, as shown below.
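To make the mapping concrete, here is a worked example with purely illustrative numbers (a 375 × 667 pt screen and the 200 × 200 scan view created later in viewDidLoad, which sits at roughly (87.5, 233.5)):

// Illustrative numbers only: 375 × 667 pt screen, 200 × 200 scan view at (87.5, 233.5)
// rectOfInterest = (y/H, x/W, h/H, w/W)
//                = (233.5/667, 87.5/375, 200/667, 200/375)
//                ≈ (0.35, 0.23, 0.30, 0.53)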


- (AVCaptureMetadataOutput *)output
{
    if (_output == nil) {
    // initialize the metadata output
        _output = [[AVCaptureMetadataOutput alloc] init];

        // 1. get the screen's frame
        CGRect viewRect = self.view.frame;
        // 2. get the scan container's frame
        CGRect containerRect = self.scanView.frame;
        
        CGFloat x = containerRect.origin.y / viewRect.size.height;
        CGFloat y = containerRect.origin.x / viewRect.size.width;
        CGFloat width = containerRect.size.height / viewRect.size.height;
        CGFloat height = containerRect.size.width / viewRect.size.width;
        // rectOfInterest restricts the area the device will scan
        _output.rectOfInterest = CGRectMake(x, y, width, height);
    }
    return _output;
}

There's also an approach circulating online that sets the scan area in response to the AVCaptureInputPortFormatDescriptionDidChangeNotification notification. It works just as well; pick whichever you prefer.

 __weak typeof(self) weakSelf = self;
[[NSNotificationCenter defaultCenter]addObserverForName:AVCaptureInputPortFormatDescriptionDidChangeNotification
                                                         object:nil
                                                          queue:[NSOperationQueue mainQueue]
                                                     usingBlock:^(NSNotification * _Nonnull note) {
                                                         if (weakSelf){
                                                             // adjust the scan area
                                                             AVCaptureMetadataOutput *output = weakSelf.session.outputs.firstObject;
                                                             output.rectOfInterest = [weakSelf.previewLayer metadataOutputRectOfInterestForRect:weakSelf.scanView.frame];
                                                         }
                                                     }];
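One caveat with this block-based API (my own note, not in the original post): addObserverForName:object:queue:usingBlock: returns an opaque observer object, and that token is what must be passed back to removeObserver: later, or the block keeps firing. A minimal sketch, assuming a hypothetical formatObserver property introduced only to hold the token:

// `formatObserver` is a hypothetical id<NSObject> property added just for this sketch.
self.formatObserver = [[NSNotificationCenter defaultCenter]
    addObserverForName:AVCaptureInputPortFormatDescriptionDidChangeNotification
                object:nil
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification * _Nonnull note) {
                // adjust rectOfInterest exactly as in the block above
            }];

// Later, typically in -dealloc:
[[NSNotificationCenter defaultCenter] removeObserver:self.formatObserver];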

Next, initialize AVCaptureSession and AVCaptureVideoPreviewLayer:

- (AVCaptureSession *)session
{
    if (_session == nil) {
        _session = [[AVCaptureSession alloc] init];
    }
    return _session;
}

- (AVCaptureVideoPreviewLayer *)previewLayer
{
    if (_previewLayer == nil) {
        // the layer responsible for rendering the captured video
        _previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
        _previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    }
    return _previewLayer;
}

Next, in viewDidLoad we initialize everything and start scanning:

- (void)viewDidLoad {
    [super viewDidLoad];
    CGFloat kScreen_Width = [UIScreen mainScreen].bounds.size.width; 
    
    // center a 200 × 200 scan frame on the screen
    self.scanView = [[UIView alloc]initWithFrame:CGRectMake((kScreen_Width-200)/2, (self.view.frame.size.height-200)/2, 200, 200)];
    [self.view addSubview:self.scanView];
    
    // set up the scan overlay (dims everything outside the scan area, draws the frame corners, etc.); implemented further below
    TNWCameraScanView *clearView = [[TNWCameraScanView alloc]initWithFrame:self.view.frame];
    [self.view addSubview:clearView];

    [self startScan];
}

- (void)startScan
{
    // 1. check whether the input can be added to the session
    if (![self.session canAddInput:self.input]) return;
    [self.session addInput:self.input];
    
    
    // 2. check whether the output can be added to the session
    if (![self.session canAddOutput:self.output]) return;
    [self.session addOutput:self.output];
    
    // 3. set the metadata types the output should decode
    // Note: the types can only be set after the output has been added to the session
    // availableMetadataObjectTypes covers QR codes, barcodes, etc.; to scan only QR codes use:
    // [self.output setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];
    
    self.output.metadataObjectTypes = self.output.availableMetadataObjectTypes;
    
    // 4. set the delegate that listens for the decoded metadata
    [self.output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    
    // 5. add the preview layer
    [self.view.layer insertSublayer:self.previewLayer atIndex:0];
    self.previewLayer.frame = self.view.bounds;
    
    // 6. start scanning
    [self.session startRunning];
}
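The original doesn't show the matching teardown, so treat the following as an assumed counterpart rather than part of the original flow: stopping the session when the controller goes off screen releases the camera.

// Assumed counterpart to startScan (not in the original post): stop the session when leaving the screen.
- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    [self.session stopRunning];
}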

Below is the AVCaptureMetadataOutputObjectsDelegate callback that receives the scan results:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
    [self.session stopRunning];   // stop scanning
    // the captured object may not be an AVMetadataMachineReadableCodeObject, so check first or the app will crash
    if (![[metadataObjects lastObject] isKindOfClass:[AVMetadataMachineReadableCodeObject class]]) {
        [self.session startRunning];
        return;
    }
    // dot syntax doesn't work on id, so pull the object out of the array with an explicit type first
    AVMetadataMachineReadableCodeObject *object = [metadataObjects lastObject];
    if (object.stringValue == nil) {
        [self.session startRunning];
        return;
    }
    // object.stringValue now holds the decoded content
}
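The delegate above validates the result but doesn't show what to do with the decoded string. As a purely illustrative sketch (the alert is my choice, not the original author's), something like this could go at the end of the delegate method in place of the final comment:

    // Illustrative only: present the decoded string; replace with whatever your app actually needs.
    UIAlertController *alert = [UIAlertController alertControllerWithTitle:@"Scan result"
                                                                   message:object.stringValue
                                                            preferredStyle:UIAlertControllerStyleAlert];
    [alert addAction:[UIAlertAction actionWithTitle:@"OK" style:UIAlertActionStyleDefault handler:nil]];
    [self presentViewController:alert animated:YES completion:nil];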

That covers the usual QR-scanning code. Now let's take a quick look at the view around the scan frame (dimming everything outside the scan area, drawing the frame corners, and so on). You can adjust the styling to suit your own design.

The scan frame


@interface TNWCameraScanView : UIView

- (instancetype)initWithFrame:(CGRect)frame;

@end

#import "TNWCameraScanView.h"
@interface TNWCameraScanView()
{
    CGFloat sceenHeight;
    NSTimer *timer;
    CGRect  scanRect;
    CGFloat kScreen_Width;
    CGFloat kScreen_Height;
}

@property (nonatomic,assign)CGFloat lineWidth;
@property (nonatomic,assign)CGFloat height;
@property (nonatomic,strong)UIColor  *lineColor;
@property (nonatomic, assign)CGFloat scanTime;


@end
@implementation TNWCameraScanView

- (instancetype)initWithFrame:(CGRect)frame{
    if (self = [super initWithFrame:frame]) {
    
        self.backgroundColor = [UIColor clearColor]; // clear the background color, otherwise it renders black
        sceenHeight = self.frame.size.height;
        _height = 200;    // 200 × 200 square scan area
        _lineWidth = 2;   // stroke width of the four corner marks
        _lineColor = [UIColor greenColor]; // color of the four corner marks
        _scanTime = 3;    // duration of one sweep of the scan line
        
        kScreen_Width = [UIScreen mainScreen].bounds.size.width;
        kScreen_Height = [UIScreen mainScreen].bounds.size.height;
        [self scanLineMove];
        
        // timer: restart the scan line every _scanTime seconds
        timer = [NSTimer scheduledTimerWithTimeInterval:_scanTime target:self selector:@selector(scanLineMove) userInfo:nil repeats:YES];
    }
    return self;
}

- (void)scanLineMove{
    // create a 1 pt scan line at the top of the scan area and animate it down to the bottom
    UIView *line = [[UIView alloc]initWithFrame:CGRectMake((kScreen_Width-_height)/2, (sceenHeight-_height)/2, _height, 1)];
    line.backgroundColor = [UIColor greenColor];
    [self addSubview:line];
    [UIView animateWithDuration:_scanTime animations:^{
        line.frame = CGRectMake((kScreen_Width-_height)/2,  (sceenHeight+_height)/2, _height, 0.5);
    } completion:^(BOOL finished) {
        [line removeFromSuperview];
    }];
}
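// Note (an assumed fix, not in the original post): a scheduled NSTimer retains its target,
// so the -dealloc below may never run while the timer is alive. One option is to invalidate
// the timer when the view is removed from its superview instead:
- (void)willMoveToSuperview:(UIView *)newSuperview {
    [super willMoveToSuperview:newSuperview];
    if (newSuperview == nil) {
        [timer invalidate];
        timer = nil;
    }
}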

-(void)drawRect:(CGRect)rect{
    CGFloat   bottomHeight =  (sceenHeight-_height)/2;
    CGFloat   leftWidth = (kScreen_Width-_height)/2;
    
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    
    // dim the four regions around the scan area with black at 0.5 alpha; adjust as needed
    CGContextSetRGBFillColor(ctx, 0, 0, 0, 0.5);
    CGContextFillRect(ctx, CGRectMake(0, 0, kScreen_Width, bottomHeight));
    CGContextStrokePath(ctx);
    CGContextFillRect(ctx, CGRectMake(0,bottomHeight, leftWidth, _height));
    CGContextStrokePath(ctx);
    CGContextFillRect(ctx, CGRectMake((kScreen_Width+_height)/2, bottomHeight, leftWidth, _height));
    CGContextStrokePath(ctx);
    CGContextFillRect(ctx, CGRectMake(0,(sceenHeight+_height)/2, kScreen_Width, bottomHeight));
    CGContextStrokePath(ctx);
    
    // draw the four corner marks of the scan frame
    CGContextSetLineWidth(ctx, _lineWidth);
    CGContextSetStrokeColorWithColor(ctx, _lineColor.CGColor);
    // top-left corner
    CGContextMoveToPoint(ctx, leftWidth, bottomHeight+30);
    CGContextAddLineToPoint(ctx, leftWidth, bottomHeight);
    CGContextAddLineToPoint(ctx, leftWidth+30, bottomHeight);
    CGContextStrokePath(ctx);
    // top-right corner
    CGContextMoveToPoint(ctx, (kScreen_Width+_height)/2-30, bottomHeight);
    CGContextAddLineToPoint(ctx, (kScreen_Width+_height)/2, bottomHeight);
    CGContextAddLineToPoint(ctx, (kScreen_Width+_height)/2, bottomHeight+30);
    CGContextStrokePath(ctx);
    // bottom-left corner
    CGContextMoveToPoint(ctx, leftWidth, (sceenHeight+_height)/2-30);
    CGContextAddLineToPoint(ctx, leftWidth,  (sceenHeight+_height)/2);
    CGContextAddLineToPoint(ctx, leftWidth+30, (sceenHeight+_height)/2);
    CGContextStrokePath(ctx);
    // bottom-right corner
    CGContextMoveToPoint(ctx, (kScreen_Width+_height)/2-30, (sceenHeight+_height)/2);
    CGContextAddLineToPoint(ctx,  (kScreen_Width+_height)/2,  (sceenHeight+_height)/2);
    CGContextAddLineToPoint(ctx,  (kScreen_Width+_height)/2, (sceenHeight+_height)/2-30);
    CGContextStrokePath(ctx);
    
    // optionally stroke the four edges of the scan frame:
//    CGContextSetStrokeColorWithColor(ctx, [UIColor whiteColor].CGColor);
//    CGContextSetLineWidth(ctx, 1);
//    CGContextAddRect(ctx, CGRectMake(leftWidth, bottomHeight, _height, _height));
//    CGContextStrokePath(ctx);
    scanRect = CGRectMake(leftWidth, bottomHeight, _height, _height);
}

- (void)dealloc{
    // invalidate the timer
    [timer invalidate];
    timer = nil;
}

@end


Scanning QR codes from the photo library

Finally, a few words on scanning a QR code from an image in the photo library. This uses CIDetector to parse the image and is fairly straightforward.

- (void)choicePhoto{
  // open the photo library
  UIImagePickerController *imagePicker = [[UIImagePickerController alloc]init];
  // UIImagePickerControllerSourceTypePhotoLibrary means the photo library
  imagePicker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
  
  // the delegate requires conforming to both UIImagePickerControllerDelegate and UINavigationControllerDelegate
  imagePicker.delegate = self;
  
  [self presentViewController:imagePicker animated:YES completion:nil];
}
// callback when an image has been picked
-(void)imagePickerController:(UIImagePickerController*)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
  // pull out the selected image
  UIImage *pickImage = info[UIImagePickerControllerOriginalImage];
  NSData *imageData = UIImagePNGRepresentation(pickImage);
  CIImage *ciImage = [CIImage imageWithData:imageData];

  // create the detector
  // CIDetectorTypeQRCode detects QR codes; CIDetectorAccuracyLow favors recognition speed
  CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeQRCode context:nil options:@{CIDetectorAccuracy: CIDetectorAccuracyLow}];
  NSArray *feature = [detector featuresInImage:ciImage];

  // read out the detected features
  for (CIQRCodeFeature *result in feature) {
    NSString *content = result.messageString; // this is the decoded string we want
  }
  
  [self dismissViewControllerAnimated:YES completion:nil];
}
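One gap in the snippet above (my addition, not from the original): if the picked image contains no QR code, feature comes back empty and the user gets no feedback. A minimal, assumed check, placed just before the dismiss call:

  // Assumed addition: warn when nothing was decoded in the selected image.
  if (feature.count == 0) {
    NSLog(@"No QR code was found in the selected image.");
  }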

Those are the common QR code and barcode scenarios; I'll add more as they come up.
