About Augmented Reality and ARKit

Discover supporting concepts, features, and best practices for building great AR experiences.


Overview


The basic requirement for any AR experience—and the defining feature of ARKit—is the ability to create and track a correspondence between the real-world space the user inhabits and a virtual space where you can model visual content. When your app displays that content together with a live camera image, the user experiences augmented reality: the illusion that your virtual content is part of the real world.

In all AR experiences, ARKit uses world and camera coordinate systems following a right-handed convention: the y-axis points upward, and (when relevant) the z-axis points toward the viewer and the x-axis points toward the viewer's right.

Session configurations can change the origin and orientation of the coordinate system with respect to the real world (see worldAlignment). Each anchor in an AR session defines its own local coordinate system, also following the right-handed, z-towards-viewer convention; for example, the ARFaceAnchor class defines a system for locating facial features.
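
For illustration, here is a minimal sketch of how a configuration's worldAlignment setting selects that coordinate-system convention before a session starts. The bare ARSession shown here stands in for whatever session your view or renderer manages:

import ARKit

// A minimal sketch: choose how ARKit anchors the world coordinate system
// to the real world before running the session.
let session = ARSession()
let configuration = ARWorldTrackingConfiguration()

// .gravity           - y-axis matches gravity; the origin is the device's starting position.
// .gravityAndHeading - y-axis matches gravity; x- and z-axes follow compass directions.
// .camera            - the coordinate system stays locked to the camera's initial orientation.
configuration.worldAlignment = .gravity

session.run(configuration)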

How World Tracking Works

To create a correspondence between real and virtual spaces, ARKit uses a technique called visual-inertial odometry. This process combines information from the iOS device’s motion sensing hardware with computer vision analysis of the scene visible to the device’s camera. ARKit recognizes notable features in the scene image, tracks differences in the positions of those features across video frames, and compares that information with motion sensing data. The result is a high-precision model of the device’s position and motion.
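
As a sketch of what that model looks like in code, the session delegate below (the class name is illustrative) reads the camera pose that world tracking estimates for every frame; frame.camera.transform is the device's position and orientation in world coordinates:

import ARKit

// A minimal sketch of observing the pose produced by visual-inertial odometry.
final class TrackingObserver: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        session.delegate = self
        session.run(ARWorldTrackingConfiguration())
    }

    // ARKit delivers an ARFrame for every camera frame. frame.camera.transform
    // is the device pose in world coordinates, as a 4x4 matrix.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let position = frame.camera.transform.columns.3   // translation column
        print("Device position:", position.x, position.y, position.z)
    }
}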

World tracking also analyzes and understands the contents of a scene. Use hit-testing methods (see the ARHitTestResult class) to find real-world surfaces corresponding to a point in the camera image. If you enable the planeDetection setting in your session configuration, ARKit detects flat surfaces in the camera image and reports their positions and sizes. You can use hit-test results or detected planes to place or interact with virtual content in your scene.
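
The sketch below (the view controller and method names are illustrative) enables plane detection and then hit-tests a screen point against the detected planes to place an anchor for virtual content:

import UIKit
import ARKit

// A minimal sketch: detect horizontal planes, then place an anchor where a
// screen point intersects one of them.
final class PlacementViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal   // report flat horizontal surfaces
        sceneView.session.run(configuration)
    }

    // Call with a screen-space point, for example from a tap gesture recognizer.
    func placeContent(at point: CGPoint) {
        // Search planes ARKit has already detected; results are sorted nearest first.
        guard let result = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first else { return }

        // result.worldTransform locates the point on the real-world surface;
        // attach virtual content to an anchor placed there.
        sceneView.session.add(anchor: ARAnchor(transform: result.worldTransform))
    }
}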

Best Practices and Limitations

World tracking is an inexact science. This process can often produce impressive accuracy, leading to realistic AR experiences. However, it relies on details of the device’s physical environment that are not always consistent or are difficult to measure in real time without some degree of error. To build high-quality AR experiences, be aware of these caveats and tips.

Design AR experiences for predictable lighting conditions. World tracking involves image analysis, which requires a clear image. Tracking quality is reduced when the camera can’t see details, such as when the camera is pointed at a blank wall or the scene is too dark.

Use tracking quality information to provide user feedback. World tracking correlates image analysis with device motion. ARKit develops a better understanding of the scene if the device is moving, even if the device moves only subtly. Excessive motion—too far, too fast, or shaking too vigorously—results in a blurred image or too much distance for tracking features between video frames, reducing tracking quality. The ARCamera class provides tracking state reason information, which you can use to develop UI that tells a user how to resolve low-quality tracking situations.
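
For example, a session delegate along these lines (the class name and message strings are illustrative) can map each tracking state to guidance you surface in your UI:

import ARKit

// A minimal sketch of turning ARCamera's tracking state into user guidance.
// Assign an instance as your ARSession's delegate to receive the callback.
final class TrackingStatusReporter: NSObject, ARSessionDelegate {
    // In a real app this value would drive a label or overlay in the UI.
    private(set) var statusMessage: String?

    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .normal:
            statusMessage = nil
        case .notAvailable:
            statusMessage = "Tracking unavailable."
        case .limited(.excessiveMotion):
            statusMessage = "Slow down; the device is moving too fast."
        case .limited(.insufficientFeatures):
            statusMessage = "Point the camera at a surface with more visible detail."
        case .limited(.initializing):
            statusMessage = "Starting the AR session..."
        default:
            statusMessage = "Tracking quality is limited."
        }
    }
}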

Allow time for plane detection to produce clear results, and disable plane detection when you have the results you need. Plane detection results vary over time—when a plane is first detected, its position and extent may be inaccurate. As the plane remains in the scene over time, ARKit refines its estimate of position and extent. When a large flat surface is in the scene, ARKit may continue changing the plane anchor’s position, extent, and transform after you’ve already used the plane to place content.
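
One way to follow that advice (the class name and size threshold below are illustrative) is to watch ARKit refine its plane anchors, place content once a plane's extent is large enough, and then re-run the session without plane detection so the estimates stop shifting underneath your content:

import ARKit

// A minimal sketch: wait for a sufficiently large plane, then turn plane
// detection off while keeping world tracking (and existing anchors) alive.
final class PlaneDetectionManager: NSObject, ARSessionDelegate {
    let session = ARSession()
    private var hasPlacedContent = false

    func start() {
        session.delegate = self
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        session.run(configuration)
    }

    // ARKit keeps updating each ARPlaneAnchor's center and extent as its estimate improves.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard !hasPlacedContent else { return }
        for case let plane as ARPlaneAnchor in anchors where plane.extent.x > 0.5 {
            hasPlacedContent = true
            // Place your content relative to `plane` here, then stop detecting planes.
            let configuration = ARWorldTrackingConfiguration()
            configuration.planeDetection = []    // world tracking continues
            session.run(configuration)           // no reset options, so existing anchors persist
            break
        }
    }
}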
