Creating Face-Based AR Experiences

Use the information provided by a face tracking AR session to place and animate 3D content.


Overview

This sample app presents a simple interface allowing you to choose between four augmented reality (AR) visualizations on devices with a TrueDepth front-facing camera (see iOS Device Compatibility Reference).

1. The camera view alone, without any AR content.

2. The face mesh provided by ARKit, with automatic estimation of the real-world directional lighting environment.

3. Virtual 3D content that appears to attach to (and be obscured by parts of) the user’s real face.

4. A simple robot character whose facial expression is animated to match that of the user.

Use the “+” button in the sample app to switch between these modes.



Start a Face Tracking Session in a SceneKit View

Like other uses of ARKit, face tracking requires configuring and running a session (an ARSession object) and rendering the camera image together with virtual content in a view. For more detailed explanations of session and view setup, see About Augmented Reality and ARKit and Building Your First AR Experience. This sample uses SceneKit to display an AR experience, but you can also use SpriteKit or build your own renderer using Metal (see ARSKView and Displaying an AR Experience with Metal).

Face tracking differs from other uses of ARKit in the class you use to configure the session. To enable face tracking, create an instance of ARFaceTrackingConfiguration, configure its properties, and pass it to the runWithConfiguration:options: method of the AR session associated with your view, as shown below.
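A minimal sketch of that setup in Swift, assuming sceneView is the ARSCNView presenting the experience:

```swift
import ARKit

// Assumes sceneView is the ARSCNView whose session should run face tracking.
let configuration = ARFaceTrackingConfiguration()
configuration.isLightEstimationEnabled = true  // enables directional light estimation

// Start (or restart) face tracking, discarding any previous session state.
sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
```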


Before offering your user features that require a face tracking AR session, check the isSupported property on the ARFaceTrackingConfiguration class to determine whether the current device supports ARKit face tracking.
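For example, a minimal guard (place it wherever you gate access to face-tracking features):

```swift
// Bail out (or hide face-tracking UI) on devices without a TrueDepth camera.
guard ARFaceTrackingConfiguration.isSupported else { return }
```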


Track the Position and Orientation of a Face

When face tracking is active, ARKit automatically adds ARFaceAnchor objects to the running AR session, containing information about the user’s face, including its position and orientation.

Note

ARKit detects and provides information about only one user’s face. If multiple faces are present in the camera image, ARKit chooses the largest or most clearly recognizable face.

In a SceneKit-based AR experience, you can add 3D content corresponding to a face anchor in the renderer:didAddNode:forAnchor: method (from the ARSCNViewDelegate protocol). ARKit adds a SceneKit node for the anchor, and updates that node’s position and orientation on each frame, so any SceneKit content you add to that node automatically follows the position and orientation of the user’s face.


In this example, the renderer:didAddNode:forAnchor: method calls the setupFaceNodeContent method to add SceneKit content to the faceNode. For example, if you change the showsCoordinateOrigin variable in the sample code, the app adds a visualization of the x/y/z axes to the node, indicating the origin of the face anchor’s coordinate system.
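A sketch of that callback, assuming the sample’s faceNode property and setupFaceNodeContent helper:

```swift
// ARSCNViewDelegate: called when ARKit adds a node for a newly detected anchor.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARFaceAnchor else { return }
    faceNode = node          // keep a reference to the face anchor's node
    setupFaceNodeContent()   // attach the currently selected visualization to it
}
```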


Use Face Geometry to Model the User’s Face

ARKit provides a coarse 3D mesh geometry matching the size, shape, topology, and current facial expression of the user’s face. ARKit also provides the ARSCNFaceGeometry class, offering an easy way to visualize this mesh in SceneKit.

Your AR experience can use this mesh to place or draw content that appears to attach to the face. For example, by applying a semitransparent texture to this geometry you could paint virtual tattoos or makeup onto the user’s skin.

To create a SceneKit face geometry, initialize an ARSCNFaceGeometry object with the Metal device your SceneKit view uses for rendering:
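For example (a sketch; the initializer is failable, and sceneView stands in for your ARSCNView):

```swift
// Create the face geometry with the view's Metal device and wrap it in a node.
guard let device = sceneView.device,
      let faceGeometry = ARSCNFaceGeometry(device: device) else { return }
let faceMeshNode = SCNNode(geometry: faceGeometry)
```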


The sample code’s setupFaceNodeContent method (mentioned above) adds a node containing the face geometry to the scene. By making that node a child of the node provided by the face anchor, the face model automatically tracks the position and orientation of the user’s face.

To also make the face model onscreen conform to the shape of the user’s face, even as the user blinks, talks, and makes various facial expressions, you need to retrieve an updated face mesh in the renderer:didUpdateNode:forAnchor: delegate callback:
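A sketch of that callback, reading the anchor’s updated ARFaceGeometry:

```swift
// ARSCNViewDelegate: called each time ARKit updates the face anchor.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    let updatedMesh = faceAnchor.geometry  // ARFaceGeometry matching the current expression
    // Hand updatedMesh to the scene's ARSCNFaceGeometry, as shown next.
}
```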



Then, update the ARSCNFaceGeometry object in your scene to match by passing the new face mesh to its updateFromFaceGeometry: method:
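For example, assuming faceGeometry is the ARSCNFaceGeometry created earlier and faceAnchor is the updated anchor:

```swift
// Deform the rendered mesh to match the user's current expression.
faceGeometry.update(from: faceAnchor.geometry)
```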



Place 3D Content on the User’s Face

Another use of the face mesh that ARKit provides is to create occlusion geometry in your scene. An occlusion geometry is a 3D model that doesn’t render any visible content (allowing the camera image to show through), but obstructs the camera’s view of other virtual content in the scene.

This technique creates the illusion that the real face interacts with virtual objects, even though the face is a 2D camera image and the virtual content is a rendered 3D object. For example, if you place an occlusion geometry and virtual glasses on the user’s face, the face can obscure the frame of the glasses.

To create an occlusion geometry for the face, start by creating an ARSCNFaceGeometry object as in the previous example. However, instead of configuring that object’s SceneKit material with a visible appearance, set the material to render depth but not color during rendering:
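A sketch of that setup; an empty colorBufferWriteMask disables color writes while depth writes continue:

```swift
guard let device = sceneView.device,
      let occlusionGeometry = ARSCNFaceGeometry(device: device) else { return }

// Write depth only; the camera image shows through where the face is.
occlusionGeometry.firstMaterial?.colorBufferWriteMask = []

let occlusionNode = SCNNode(geometry: occlusionGeometry)
occlusionNode.renderingOrder = -1  // draw first so its depth occludes later content
```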



Because the material renders depth, other objects rendered by SceneKit correctly appear in front of it or behind it. But because the material doesn’t render color, the camera image appears in its place. The sample app combines this technique with a SceneKit object positioned in front of the user’s eyes, creating an effect where the object is realistically obscured by the user’s nose.


Animate a Character with Blend Shapes

In addition to the face mesh shown in the above examples, ARKit also provides a more abstract model of the user’s facial expressions in the form of a blendShapes dictionary. You can use the named coefficient values in this dictionary to control the animation parameters of your own 2D or 3D assets, creating a character (such as an avatar or puppet) that follows the user’s real facial movements and expressions.

As a basic demonstration of blend shape animation, this sample includes a simple model of a robot character’s head, created using SceneKit primitive shapes. (See the robotHead.scn file in the source code.)

To get the user’s current facial expression, read the blendShapes dictionary from the face anchor in the renderer:didUpdateNode:forAnchor: delegate callback:
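A sketch of that read; robotHead stands in for the sample’s RobotHead node, and the forwarding method name is hypothetical:

```swift
// ARSCNViewDelegate: read the expression coefficients on each face update.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    let blendShapes = faceAnchor.blendShapes  // coefficients in 0...1
    robotHead.update(withBlendShapes: blendShapes)  // hypothetical forwarding call
}
```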



Then, examine the key-value pairs in that dictionary to calculate animation parameters for your model. There are 52 unique ARBlendShapeLocation coefficients. Your app can use as few or as many of them as necessary to create the artistic effect you want. In this sample, the RobotHead class performs this calculation, mapping the ARBlendShapeLocationEyeBlinkLeft and ARBlendShapeLocationEyeBlinkRight parameters to one axis of the scale factor of the robot’s eyes, and the ARBlendShapeLocationJawOpen parameter to offset the position of the robot’s jaw.
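A sketch of that mapping; the node names and jaw constants are assumptions standing in for the RobotHead implementation:

```swift
func update(withBlendShapes blendShapes: [ARFaceAnchor.BlendShapeLocation: NSNumber]) {
    guard let eyeBlinkLeft = blendShapes[.eyeBlinkLeft]?.floatValue,
          let eyeBlinkRight = blendShapes[.eyeBlinkRight]?.floatValue,
          let jawOpen = blendShapes[.jawOpen]?.floatValue else { return }

    // A blink coefficient of 1 means fully closed, so flatten each eye along one axis.
    leftEyeNode.scale.z = 1 - eyeBlinkLeft
    rightEyeNode.scale.z = 1 - eyeBlinkRight

    // Lower the jaw in proportion to how open the mouth is.
    jawNode.position.y = jawRestY - jawHeight * jawOpen
}
```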


