QR code scanning has swept the mobile Internet over the past two years, and it has grown especially fast in China. This article covers two things about the barcode-scanning feature: first, how to quickly integrate 1D and 2D barcode scanning into your app; second, a rough understanding of how barcode scanning is implemented and what each module's code does. It does not cover barcode encoding and decoding algorithms; if you are interested in those, please look them up yourself.
Quick integration of barcode scanning
Assuming you develop with Android Studio, you can simply use the open-source project SimpleZXing, which is a streamlined version of Barcode Scanner, the Android scanning app written by the authors of the ZXing library. Two simple steps give you barcode scanning:
1. Add the project dependency
compile 'com.acker:simplezxing:1.5'
(On newer Gradle versions, use the implementation configuration instead of the deprecated compile.)
2. Wherever you want to launch the scanning screen (say, in YourActivity), start the capture activity CaptureActivity:
startActivityForResult(new Intent(YourActivity.this, CaptureActivity.class), CaptureActivity.REQ_CODE);
The following scanning screen then opens:
Place the barcode inside the frame. After a successful scan, the decoded string is returned to the caller, so you only need to fetch it in your Activity's onActivityResult() method and act on it.
SimpleZXing also supports a number of settings: whether the camera enables exposure, whether to vibrate and/or beep after a successful scan, the flashlight mode (auto, always on, always off), and the screen orientation mode (auto-rotate, landscape, or portrait).
Note that although the project already declares the required camera permission in its manifest, on Android 6.0 and above you still have to handle runtime permissions yourself. A typical usage therefore looks like the following code:
package com.acker.simplezxing.demo;

import android.Manifest;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.os.Bundle;
import android.support.annotation.NonNull;
import android.support.v4.app.ActivityCompat;
import android.support.v4.content.ContextCompat;
import android.support.v7.app.AppCompatActivity;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;
import android.widget.Toast;

import com.acker.simplezxing.activity.CaptureActivity;

public class MainActivity extends AppCompatActivity {

    private static final int REQ_CODE_PERMISSION = 0x1111;
    private TextView tvResult;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        tvResult = (TextView) findViewById(R.id.tv_result);
        Button btn = (Button) findViewById(R.id.btn_sm);
        btn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                // Open the scan activity, requesting the camera permission first if necessary.
                if (ContextCompat.checkSelfPermission(MainActivity.this, Manifest.permission.CAMERA)
                        != PackageManager.PERMISSION_GRANTED) {
                    // The camera permission has not been granted yet; request it.
                    ActivityCompat.requestPermissions(MainActivity.this,
                            new String[]{Manifest.permission.CAMERA}, REQ_CODE_PERMISSION);
                } else {
                    // The permission has already been granted.
                    startCaptureActivityForResult();
                }
            }
        });
    }

    private void startCaptureActivityForResult() {
        Intent intent = new Intent(MainActivity.this, CaptureActivity.class);
        Bundle bundle = new Bundle();
        bundle.putBoolean(CaptureActivity.KEY_NEED_BEEP, CaptureActivity.VALUE_BEEP);
        bundle.putBoolean(CaptureActivity.KEY_NEED_VIBRATION, CaptureActivity.VALUE_VIBRATION);
        bundle.putBoolean(CaptureActivity.KEY_NEED_EXPOSURE, CaptureActivity.VALUE_NO_EXPOSURE);
        bundle.putByte(CaptureActivity.KEY_FLASHLIGHT_MODE, CaptureActivity.VALUE_FLASHLIGHT_OFF);
        bundle.putByte(CaptureActivity.KEY_ORIENTATION_MODE, CaptureActivity.VALUE_ORIENTATION_AUTO);
        bundle.putBoolean(CaptureActivity.KEY_SCAN_AREA_FULL_SCREEN, CaptureActivity.VALUE_SCAN_AREA_FULL_SCREEN);
        bundle.putBoolean(CaptureActivity.KEY_NEED_SCAN_HINT_TEXT, CaptureActivity.VALUE_SCAN_HINT_TEXT);
        intent.putExtra(CaptureActivity.EXTRA_SETTING_BUNDLE, bundle);
        startActivityForResult(intent, CaptureActivity.REQ_CODE);
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        switch (requestCode) {
            case REQ_CODE_PERMISSION: {
                if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                    // The user granted the permission.
                    startCaptureActivityForResult();
                } else {
                    // The user denied the permission.
                    Toast.makeText(this, "The camera permission is required to use the scan function",
                            Toast.LENGTH_LONG).show();
                }
            }
            break;
        }
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        switch (requestCode) {
            case CaptureActivity.REQ_CODE:
                switch (resultCode) {
                    case RESULT_OK:
                        // Show the decoded string, or do something else with it.
                        tvResult.setText(data.getStringExtra(CaptureActivity.EXTRA_SCAN_RESULT));
                        break;
                    case RESULT_CANCELED:
                        if (data != null) {
                            // For some reason the camera is not working correctly.
                            tvResult.setText(data.getStringExtra(CaptureActivity.EXTRA_SCAN_RESULT));
                        }
                        break;
                }
                break;
        }
    }
}
The above shows how to quickly add barcode scanning with the open-source SimpleZXing project. Of course, specific requirements may force you to modify parts of the code, such as the UI. Below I walk through the key code of the project so you can better understand how barcode scanning works and modify the library with confidence.
Key code of SimpleZXing
The scanning process itself is easy to understand: take the preview-frame byte arrays captured by the camera, look for a 1D or 2D barcode in them, and decode it. However, a few points in the capture pipeline deserve attention. Let's analyze the process backwards.
1. The decode() method in DecodeHandler
/**
 * Decode the data within the viewfinder rectangle, and time how long it took. For efficiency,
 * reuse the same reader objects from one decode to the next.
 *
 * @param data   The YUV preview frame.
 * @param width  The width of the preview frame.
 * @param height The height of the preview frame.
 */
private void decode(byte[] data, int width, int height) {
    long start = System.currentTimeMillis();
    if (width < height) {
        // portrait
        byte[] rotatedData = new byte[data.length];
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++)
                rotatedData[y * width + width - x - 1] = data[y + x * height];
        }
        data = rotatedData;
    }
    Result rawResult = null;
    PlanarYUVLuminanceSource source = activity.getCameraManager().buildLuminanceSource(data, width, height);
    if (source != null) {
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
        try {
            rawResult = multiFormatReader.decodeWithState(bitmap);
        } catch (ReaderException re) {
            // continue
        } finally {
            multiFormatReader.reset();
        }
    }
    Handler handler = activity.getHandler();
    if (rawResult != null) {
        // Don't log the barcode contents for security.
        long end = System.currentTimeMillis();
        Log.d(TAG, "Found barcode in " + (end - start) + " ms");
        if (handler != null) {
            Message message = Message.obtain(handler, R.id.decode_succeeded, rawResult);
            message.sendToTarget();
        }
    } else {
        if (handler != null) {
            Message message = Message.obtain(handler, R.id.decode_failed);
            message.sendToTarget();
        }
    }
}
Clearly, by this point the Handler of the decoding thread has received the preview frame's byte array along with the frame's width and height. In this method, we first preprocess the preview-frame array according to whether the screen is currently in landscape or portrait orientation.
This matters because, on an Android device, three orientation concepts are in play:
Screen orientation: on Android, the top-left corner of the screen is the origin (0,0) of the coordinate system; X grows to the right and Y grows downward.
Camera sensor orientation: image data comes from the camera's image sensor, which has a fixed natural viewfinder orientation once mounted in the phone. Its origin sits at the top-left corner when the phone is held in landscape, i.e., it matches the screen X direction of a landscape app; in other words, it is rotated 90 degrees relative to the screen X direction of a portrait app.
Camera preview orientation: since the phone can be rotated freely, the Android system rotates the sensor data according to the current screen orientation before handing it to the display system, so that the preview shown in the UI always matches what the eye sees. The preview orientation can be set through setDisplayOrientation() in the camera API; by default it is 0, the same as the sensor. For a landscape app, the screen and preview orientations agree, so the preview is not rotated by 90 degrees; for a portrait app, they are perpendicular, so a 90-degree rotation appears. To display correctly, the preview orientation must be rotated by 90 degrees via the API to match the screen orientation (see Figure 3).
In other words, the image data delivered by the camera is always in a landscape pose. Even when the phone is held in portrait and the on-screen preview looks correct (not rotated by 90 degrees) thanks to the setting above, the image data we obtain from the API is still landscape. Therefore, when width < height, i.e., the screen is in portrait, we first rotate the byte array by 90 degrees manually, then wrap the result in a PlanarYUVLuminanceSource object and hand it off for the actual decoding, which we will not dig into here.
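To make the index arithmetic in that rotation loop concrete, here is a tiny standalone sketch (a demo class of my own, not part of SimpleZXing) that replays the loop from decode() on a 3x2 "frame". Note that width/height here are the portrait dimensions passed to decode(), i.e., already swapped relative to the raw landscape frame:

```java
public class RotationDemo {

    // Same loop as in DecodeHandler.decode(): rotate the raw landscape
    // frame 90 degrees clockwise into a portrait frame of size width x height.
    static byte[] rotateToPortrait(byte[] data, int width, int height) {
        byte[] rotated = new byte[data.length];
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                rotated[y * width + width - x - 1] = data[y + x * height];
            }
        }
        return rotated;
    }

    public static void main(String[] args) {
        // Raw landscape frame (3 wide, 2 tall):
        // 1 2 3
        // 4 5 6
        byte[] raw = {1, 2, 3, 4, 5, 6};
        // Portrait dimensions: width = 2, height = 3.
        byte[] portrait = rotateToPortrait(raw, 2, 3);
        // Rotated 90 degrees clockwise (2 wide, 3 tall):
        // 4 1
        // 5 2
        // 6 3
        System.out.println(java.util.Arrays.toString(portrait)); // [4, 1, 5, 2, 6, 3]
    }
}
```

Working the example by hand confirms that each raw row becomes a column of the portrait frame, which is exactly what a clockwise 90-degree rotation should do.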
Next, let's see where this preview-frame data comes from.
2. The onPreviewFrame() method in PreviewCallback
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Point cameraResolution = configManager.getCameraResolution();
    Handler thePreviewHandler = previewHandler;
    if (cameraResolution != null && thePreviewHandler != null) {
        Message message;
        Point screenResolution = configManager.getScreenResolution();
        if (screenResolution.x < screenResolution.y) {
            // portrait
            message = thePreviewHandler.obtainMessage(previewMessage, cameraResolution.y,
                    cameraResolution.x, data);
        } else {
            // landscape
            message = thePreviewHandler.obtainMessage(previewMessage, cameraResolution.x,
                    cameraResolution.y, data);
        }
        message.sendToTarget();
        previewHandler = null;
    } else {
        Log.d(TAG, "Got preview callback, but no handler or resolution available");
    }
}
This is easy to understand: the class implements the system's Camera.PreviewCallback interface, and the callback delivers the image data each time a preview frame is captured, again distinguishing portrait from landscape to enable the preprocessing in decode() above. Two objects appear here, cameraResolution and screenResolution; let's see what they are.
3. The initFromCameraParameters() method in CameraConfigurationManager
As we can see, the cameraResolution and screenResolution mentioned above are obtained in the initFromCameraParameters() method of CameraConfigurationManager.
/**
 * Reads, one time, values from the camera that are needed by the app.
 */
void initFromCameraParameters(OpenCamera camera) {
    Camera.Parameters parameters = camera.getCamera().getParameters();
    WindowManager manager = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
    Display display = manager.getDefaultDisplay();
    int displayRotation = display.getRotation();
    int cwRotationFromNaturalToDisplay;
    switch (displayRotation) {
        case Surface.ROTATION_0:
            cwRotationFromNaturalToDisplay = 0;
            break;
        case Surface.ROTATION_90:
            cwRotationFromNaturalToDisplay = 90;
            break;
        case Surface.ROTATION_180:
            cwRotationFromNaturalToDisplay = 180;
            break;
        case Surface.ROTATION_270:
            cwRotationFromNaturalToDisplay = 270;
            break;
        default:
            // Have seen this return incorrect values like -90
            if (displayRotation % 90 == 0) {
                cwRotationFromNaturalToDisplay = (360 + displayRotation) % 360;
            } else {
                throw new IllegalArgumentException("Bad rotation: " + displayRotation);
            }
    }
    Log.i(TAG, "Display at: " + cwRotationFromNaturalToDisplay);
    int cwRotationFromNaturalToCamera = camera.getOrientation();
    Log.i(TAG, "Camera at: " + cwRotationFromNaturalToCamera);
    // Still not 100% sure about this. But acts like we need to flip this:
    if (camera.getFacing() == CameraFacing.FRONT) {
        cwRotationFromNaturalToCamera = (360 - cwRotationFromNaturalToCamera) % 360;
        Log.i(TAG, "Front camera overriden to: " + cwRotationFromNaturalToCamera);
    }
    cwRotationFromDisplayToCamera =
            (360 + cwRotationFromNaturalToCamera - cwRotationFromNaturalToDisplay) % 360;
    Log.i(TAG, "Final display orientation: " + cwRotationFromDisplayToCamera);
    int cwNeededRotation;
    if (camera.getFacing() == CameraFacing.FRONT) {
        Log.i(TAG, "Compensating rotation for front camera");
        cwNeededRotation = (360 - cwRotationFromDisplayToCamera) % 360;
    } else {
        cwNeededRotation = cwRotationFromDisplayToCamera;
    }
    Log.i(TAG, "Clockwise rotation from display to camera: " + cwNeededRotation);
    Point theScreenResolution = new Point();
    display.getSize(theScreenResolution);
    screenResolution = theScreenResolution;
    Log.i(TAG, "Screen resolution in current orientation: " + screenResolution);
    cameraResolution = CameraConfigurationUtils.findBestPreviewSizeValue(parameters, screenResolution);
    Log.i(TAG, "Camera resolution: " + cameraResolution);
    bestPreviewSize = CameraConfigurationUtils.findBestPreviewSizeValue(parameters, screenResolution);
    Log.i(TAG, "Best available preview size: " + bestPreviewSize);
}
The first large part of this method uses the screen orientation and the camera sensor orientation to compute the rotation of the preview image, so that the preview stays correct as the phone rotates or as you switch between front and back cameras. We can also see that screenResolution is a Point obtained from the Display class: its x and y values are the screen's horizontal and vertical pixel counts, which of course depend on the screen orientation. The other two Point objects, cameraResolution and bestPreviewSize, are produced by the findBestPreviewSizeValue() method in CameraConfigurationUtils, so let's look at that next.
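The rotation arithmetic above condenses to a single modular formula. The following standalone sketch (the helper name is mine, not the library's) reproduces it for a few typical cases:

```java
public class OrientationDemo {

    // Given the display rotation and the sensor orientation (both clockwise
    // degrees from the device's natural orientation), compute the clockwise
    // angle the preview must be rotated by, as in initFromCameraParameters().
    static int displayToCameraRotation(int displayRotationDegrees,
                                       int cameraOrientationDegrees,
                                       boolean frontFacing) {
        int cameraRotation = cameraOrientationDegrees;
        // Front sensors are mirrored, so their orientation is flipped first.
        if (frontFacing) {
            cameraRotation = (360 - cameraRotation) % 360;
        }
        return (360 + cameraRotation - displayRotationDegrees) % 360;
    }

    public static void main(String[] args) {
        // Typical phone: back sensor mounted at 90 degrees.
        System.out.println(displayToCameraRotation(0, 90, false));  // portrait: 90
        System.out.println(displayToCameraRotation(90, 90, false)); // landscape: 0
        // Typical front sensor mounted at 270 degrees.
        System.out.println(displayToCameraRotation(0, 270, true));  // portrait: 90
    }
}
```

This is why a portrait app must call setDisplayOrientation(90) on most phones: the sensor sits at 90 degrees while the display sits at 0.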
4. The findBestPreviewSizeValue() method in CameraConfigurationUtils
static Point findBestPreviewSizeValue(Camera.Parameters parameters, Point screenResolution) {
    List<Camera.Size> rawSupportedSizes = parameters.getSupportedPreviewSizes();
    if (rawSupportedSizes == null) {
        Log.w(TAG, "Device returned no supported preview sizes; using default");
        Camera.Size defaultSize = parameters.getPreviewSize();
        if (defaultSize == null) {
            throw new IllegalStateException("Parameters contained no preview size!");
        }
        return new Point(defaultSize.width, defaultSize.height);
    }
    // Sort by size, descending
    List<Camera.Size> supportedPreviewSizes = new ArrayList<>(rawSupportedSizes);
    Collections.sort(supportedPreviewSizes, new Comparator<Camera.Size>() {
        @Override
        public int compare(Camera.Size a, Camera.Size b) {
            int aPixels = a.height * a.width;
            int bPixels = b.height * b.width;
            if (bPixels < aPixels) {
                return -1;
            }
            if (bPixels > aPixels) {
                return 1;
            }
            return 0;
        }
    });
    if (Log.isLoggable(TAG, Log.INFO)) {
        StringBuilder previewSizesString = new StringBuilder();
        for (Camera.Size supportedPreviewSize : supportedPreviewSizes) {
            previewSizesString.append(supportedPreviewSize.width).append('x')
                    .append(supportedPreviewSize.height).append(' ');
        }
        Log.i(TAG, "Supported preview sizes: " + previewSizesString);
    }
    double screenAspectRatio = screenResolution.x / (double) screenResolution.y;
    // Remove sizes that are unsuitable
    Iterator<Camera.Size> it = supportedPreviewSizes.iterator();
    while (it.hasNext()) {
        Camera.Size supportedPreviewSize = it.next();
        int realWidth = supportedPreviewSize.width;
        int realHeight = supportedPreviewSize.height;
        if (realWidth * realHeight < MIN_PREVIEW_PIXELS) {
            it.remove();
            continue;
        }
        boolean isScreenPortrait = screenResolution.x < screenResolution.y;
        int maybeFlippedWidth = isScreenPortrait ? realHeight : realWidth;
        int maybeFlippedHeight = isScreenPortrait ? realWidth : realHeight;
        double aspectRatio = (double) maybeFlippedWidth / (double) maybeFlippedHeight;
        double distortion = Math.abs(aspectRatio - screenAspectRatio);
        if (distortion > MAX_ASPECT_DISTORTION) {
            it.remove();
            continue;
        }
        if (maybeFlippedWidth == screenResolution.x && maybeFlippedHeight == screenResolution.y) {
            Point exactPoint = new Point(realWidth, realHeight);
            Log.i(TAG, "Found preview size exactly matching screen size: " + exactPoint);
            return exactPoint;
        }
    }
    // If no exact match, use largest preview size. This was not a great idea on older devices because
    // of the additional computation needed. We're likely to get here on newer Android 4+ devices, where
    // the CPU is much more powerful.
    if (!supportedPreviewSizes.isEmpty()) {
        Camera.Size largestPreview = supportedPreviewSizes.get(0);
        Point largestSize = new Point(largestPreview.width, largestPreview.height);
        Log.i(TAG, "Using largest suitable preview size: " + largestSize);
        return largestSize;
    }
    // If there is nothing at all suitable, return current preview size
    Camera.Size defaultPreview = parameters.getPreviewSize();
    if (defaultPreview == null) {
        throw new IllegalStateException("Parameters contained no preview size!");
    }
    Point defaultSize = new Point(defaultPreview.width, defaultPreview.height);
    Log.i(TAG, "No suitable preview sizes, using default: " + defaultSize);
    return defaultSize;
}
As you can see, the "best preview size" is chosen by fetching all preview resolutions the camera supports and picking the one closest to the screen resolution, again with separate handling for portrait and landscape. So what are the Point objects obtained in step 3 actually used for? They are all fed back into the Camera object as preview and display parameters, as in the following method:
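The selection rules can be condensed into a short sketch. This is my own simplified version (sizes are plain {width, height} int pairs rather than Camera.Size, and the constants mirror typical ZXing values): sort largest first, drop sizes that are too small or whose aspect ratio is too far from the screen's, return an exact match immediately, otherwise return the largest survivor:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class PreviewSizeDemo {
    static final int MIN_PREVIEW_PIXELS = 480 * 320;
    static final double MAX_ASPECT_DISTORTION = 0.15;

    // sizes are {width, height} in sensor (landscape) orientation;
    // the screen may be portrait, so aspect ratios are compared after flipping.
    static int[] findBestPreviewSize(List<int[]> sizes, int screenW, int screenH) {
        List<int[]> candidates = new ArrayList<>(sizes);
        candidates.sort((a, b) -> Integer.compare(b[0] * b[1], a[0] * a[1])); // largest first
        double screenAspect = screenW / (double) screenH;
        boolean portrait = screenW < screenH;
        Iterator<int[]> it = candidates.iterator();
        while (it.hasNext()) {
            int[] s = it.next();
            if (s[0] * s[1] < MIN_PREVIEW_PIXELS) { it.remove(); continue; } // too small
            int w = portrait ? s[1] : s[0]; // compare in screen orientation
            int h = portrait ? s[0] : s[1];
            if (Math.abs(w / (double) h - screenAspect) > MAX_ASPECT_DISTORTION) { it.remove(); continue; }
            if (w == screenW && h == screenH) return s; // exact match wins
        }
        return candidates.isEmpty() ? null : candidates.get(0); // else largest suitable
    }

    public static void main(String[] args) {
        List<int[]> sizes = Arrays.asList(
                new int[]{1920, 1080}, new int[]{1280, 720},
                new int[]{640, 480}, new int[]{320, 240});
        // 1080x1920 portrait screen: 1920x1080 matches exactly after flipping.
        int[] best = findBestPreviewSize(sizes, 1080, 1920);
        System.out.println(best[0] + "x" + best[1]); // 1920x1080
    }
}
```

Note how a 4:3 size such as 640x480 is rejected for a 16:9 screen because its aspect-ratio distortion (about 0.19) exceeds the threshold.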
5. The setDesiredCameraParameters() method in CameraConfigurationManager
void setDesiredCameraParameters(OpenCamera camera, boolean safeMode) {
    Camera theCamera = camera.getCamera();
    Camera.Parameters parameters = theCamera.getParameters();
    if (parameters == null) {
        Log.w(TAG, "Device error: no camera parameters are available. Proceeding without configuration.");
        return;
    }
    Log.i(TAG, "Initial camera parameters: " + parameters.flatten());
    if (safeMode) {
        Log.w(TAG, "In camera config safe mode -- most settings will not be honored");
    }
    initializeTorch(parameters, safeMode);
    CameraConfigurationUtils.setFocus(parameters, true, true, safeMode);
    if (!safeMode) {
        CameraConfigurationUtils.setBarcodeSceneMode(parameters);
        CameraConfigurationUtils.setVideoStabilization(parameters);
        CameraConfigurationUtils.setFocusArea(parameters);
        CameraConfigurationUtils.setMetering(parameters);
    }
    parameters.setPreviewSize(bestPreviewSize.x, bestPreviewSize.y);
    theCamera.setParameters(parameters);
    theCamera.setDisplayOrientation(cwRotationFromDisplayToCamera);
    Camera.Parameters afterParameters = theCamera.getParameters();
    Camera.Size afterSize = afterParameters.getPreviewSize();
    if (afterSize != null && (bestPreviewSize.x != afterSize.width || bestPreviewSize.y != afterSize.height)) {
        Log.w(TAG, "Camera said it supported preview size " + bestPreviewSize.x + 'x' + bestPreviewSize.y +
                ", but after setting it, preview size is " + afterSize.width + 'x' + afterSize.height);
        bestPreviewSize.x = afterSize.width;
        bestPreviewSize.y = afterSize.height;
    }
}
This one is self-explanatory: it assembles the parameters obtained earlier into a Parameters object and sets it on the Camera. So far we have covered two topics: how to obtain and set the camera parameters, and how to obtain and process the camera's preview data. One important point remains. Although we grab the whole preview frame for decoding, in practice only the portion of the image inside the scan box is actually processed, which is why the barcode must be placed inside the box while scanning. This involves two parts: drawing the rectangular box on the screen, and extracting the in-box data from the preview frame. They are implemented by the following two methods.
6. The getFramingRect() method in CameraManager
/**
 * Calculates the framing rect which the UI should draw to show the user where to place the
 * barcode. This target helps with alignment as well as forces the user to hold the device
 * far enough away to ensure the image will be in focus.
 *
 * @return The rectangle to draw on screen in window coordinates.
 */
public synchronized Rect getFramingRect() {
    if (framingRect == null) {
        if (camera == null) {
            return null;
        }
        Point screenResolution = configManager.getScreenResolution();
        if (screenResolution == null) {
            // Called early, before init even finished
            return null;
        }
        int width = findDesiredDimensionInRange(screenResolution.x, MIN_FRAME_WIDTH, MAX_FRAME_WIDTH);
        int height = findDesiredDimensionInRange(screenResolution.y, MIN_FRAME_HEIGHT, MAX_FRAME_HEIGHT);
        int leftOffset = (screenResolution.x - width) / 2;
        int topOffset = (screenResolution.y - height) / 2;
        framingRect = new Rect(leftOffset, topOffset, leftOffset + width, topOffset + height);
        Log.d(TAG, "Calculated framing rect: " + framingRect);
    }
    return framingRect;
}
This constructs the Rect for the box shown in the UI. It is simple: the box size is derived from the screen resolution at a fixed ratio, and the box is centered. The method is called when the custom view that draws the box is rendered.
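The helper findDesiredDimensionInRange() is not shown above. In the upstream ZXing Barcode Scanner it targets 5/8 of each screen dimension, clamped to hard bounds; SimpleZXing may differ slightly, so treat the following as an assumption-based sketch (the constants are ZXing's):

```java
public class FramingRectDemo {
    static final int MIN_FRAME_WIDTH = 240;
    static final int MAX_FRAME_WIDTH = 1200;

    // Assumed implementation, modeled on the upstream ZXing CameraManager.
    static int findDesiredDimensionInRange(int resolution, int hardMin, int hardMax) {
        int dim = 5 * resolution / 8; // target 5/8 of the screen dimension
        if (dim < hardMin) return hardMin;
        if (dim > hardMax) return hardMax;
        return dim;
    }

    public static void main(String[] args) {
        // On a 1080-px-wide screen the frame is 5/8 * 1080 = 675 px wide,
        // centered: leftOffset = (1080 - 675) / 2.
        int width = findDesiredDimensionInRange(1080, MIN_FRAME_WIDTH, MAX_FRAME_WIDTH);
        System.out.println(width);              // 675
        System.out.println((1080 - width) / 2); // 202 (left offset, integer division)
    }
}
```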
7. The getFramingRectInPreview() method in CameraManager
/**
 * Like {@link #getFramingRect} but coordinates are in terms of the preview frame,
 * not UI / screen.
 *
 * @return {@link Rect} expressing barcode scan area in terms of the preview size
 */
public synchronized Rect getFramingRectInPreview() {
    if (framingRectInPreview == null) {
        Rect framingRect = getFramingRect();
        if (framingRect == null) {
            return null;
        }
        Rect rect = new Rect(framingRect);
        Point cameraResolution = configManager.getCameraResolution();
        Point screenResolution = configManager.getScreenResolution();
        if (cameraResolution == null || screenResolution == null) {
            // Called early, before init even finished
            return null;
        }
        if (screenResolution.x < screenResolution.y) {
            // portrait
            rect.left = rect.left * cameraResolution.y / screenResolution.x;
            rect.right = rect.right * cameraResolution.y / screenResolution.x;
            rect.top = rect.top * cameraResolution.x / screenResolution.y;
            rect.bottom = rect.bottom * cameraResolution.x / screenResolution.y;
        } else {
            // landscape
            rect.left = rect.left * cameraResolution.x / screenResolution.x;
            rect.right = rect.right * cameraResolution.x / screenResolution.x;
            rect.top = rect.top * cameraResolution.y / screenResolution.y;
            rect.bottom = rect.bottom * cameraResolution.y / screenResolution.y;
        }
        framingRectInPreview = rect;
    }
    return framingRectInPreview;
}
This constructs the box Rect for the preview frame. It is also simple: because the camera preview resolution and the screen display resolution may differ, we first compute the ratio between the two and then scale the UI box from step 6 accordingly, again distinguishing portrait from landscape when computing the ratio. This method is called from buildLuminanceSource(), i.e., when the PlanarYUVLuminanceSource object is constructed in step 1; the resulting Rect is passed in as well, to mark the region of valid data.
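Here is a small numeric check of that mapping as a standalone sketch (my own demo class, with the rect represented as a plain {left, top, right, bottom} array). In portrait mode each coordinate is scaled by the cross-axis ratio, because the preview frame is landscape while the screen is portrait:

```java
public class PreviewRectDemo {

    // rect = {left, top, right, bottom} on screen; returns the rect scaled
    // into preview-frame coordinates, as in getFramingRectInPreview().
    // cameraW/cameraH are the preview resolution in landscape orientation.
    static int[] screenRectToPreviewRect(int[] rect, int screenW, int screenH,
                                         int cameraW, int cameraH) {
        if (screenW < screenH) { // portrait: use cross-axis scale factors
            return new int[]{
                    rect[0] * cameraH / screenW,
                    rect[1] * cameraW / screenH,
                    rect[2] * cameraH / screenW,
                    rect[3] * cameraW / screenH};
        }
        return new int[]{ // landscape: axes already agree
                rect[0] * cameraW / screenW,
                rect[1] * cameraH / screenH,
                rect[2] * cameraW / screenW,
                rect[3] * cameraH / screenH};
    }

    public static void main(String[] args) {
        // 720x1280 portrait screen, 1920x1080 preview frame:
        // horizontal scale 1080/720 = 1.5, vertical scale 1920/1280 = 1.5.
        int[] r = screenRectToPreviewRect(new int[]{100, 200, 300, 400}, 720, 1280, 1920, 1080);
        System.out.println(java.util.Arrays.toString(r)); // [150, 300, 450, 600]
    }
}
```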
If this feels a little scattered, that is because the article is not a systematic tutorial: it only introduces the key points involved, such as using the Android Camera API and handling the portrait/landscape differences, while deliberately stopping short of the core barcode decoding algorithms. That's it for now; questions and discussion are welcome.
Also, please credit the source when reposting! If you spot any mistakes, feel free to point them out.