Get Started with Meta Quest Development in Unity
The process of developing an app in Unity for the first time follows the basic pattern shown below.
Unity docs mental model and flow
The topics in this guide are arranged to help you get started with Meta Quest app development in Unity. The order forms a structured process in itself, which will help you kick-start the development setup and familiarize yourself with development fundamentals for Meta Quest in Unity. Depending on your need and level of expertise with Unity and Meta Quest app development, you can choose to follow the order from start to finish, or use it as a standalone reference. You can use the sidebar navigation on the left to jump to any section, at any time.
Environment and Headset Setup - One-stop-shop guide to setup the Unity editor, Meta Quest headset, and the debug environment.
Download SDKs - Meta XR SDKs are available as individual packages for developers who want specific functionality or as part of an all-in-one bundle of SDKs.
Core Blocks - Develop with Meta XR Core SDK to build immersive VR and MR experiences. It contains many essential functionalities, including the camera rig, and much more.
Input and Interactions - Explore several ways to immersively interact with Meta Quest apps. Go with controllers, hand tracking, experience user-friendly interactions with Interaction SDK, or enable text input via an immersive virtual keyboard.
Mixed Reality - Presence Platform offers ways to build dynamic experiences that blend virtual content realistically with real world surroundings so users can maintain a sense of presence. Explore Passthrough, Spatial Anchors, Scene, and much more to let users explore rich mixed reality capabilities.
Developer Tools - Explore a suite of developer tools, such as Meta Quest Developer Hub and Meta Quest Link, that streamline Meta Quest app development.
Design Resources - Design your app by following best design practices from several areas of app development. These resources cover user health and safety and provide guidelines for creating an immersive experience that users enjoy.
App Distribution and Monetization - Outlines the step-by-step process to share your app with the world. It contains a set of guidelines and approaches to grow your user base and engagement.
Set Up Development Environment and Headset
This topic provides instructions and describes important considerations about the following:
Hardware and software requirements for Meta Quest development with Unity
Setting up your development environment and necessary tools
Setting up your Meta Quest headset for development
Development Environment Setup
Hardware Requirements
Meta Device:
Meta Quest 2
Meta Quest Pro
Meta Quest 3
Minimum System Requirements:
2.0+ GHz processor
2 GB system RAM
Software Requirements
Operating System (any one):
Windows 10 (64-bit versions only)
macOS Sierra 10.10 or higher (x86 only) (supported with limited features)
Development Software:
Unity Editor
Set Up Meta Headset
Download and install the Oculus mobile app from Google Play or the App Store.
Follow the in-app instructions to sign in with your developer account credentials.
Pair your headset.
Wear your headset and follow the instructions in the headset to finish the setup.
For detailed setup instructions and troubleshooting, go to Getting started with your Meta Quest support page.
Install Unity Editor
Unity Editor versions, whether free or professional, support Windows and Android development. If you are just getting started as a Unity developer, spend time learning the basics with Unity’s documentation and tutorials.
The minimum supported Unity version for Meta app development is 2021 LTS. We highly recommend using 2021 LTS or a later version.
The Unity installation steps are included below in a condensed version. However, for detailed information about installation, go to Installing Unity in the Unity documentation.
To build and run Android apps, in addition to the Unity editor, you must install the Android Build Support module, Android Software Development Kit (SDK) and Native Development Kit (NDK), and OpenJDK.
Go to the Unity Download page, click Download Unity Hub, and then install it.
On the Installs tab, click Install Editor, and then select the Unity version from the list.
On the Add modules window, under Platforms, select Android Build Support checkbox, and then select Android SDK & NDK Tools and OpenJDK checkboxes.
Complete the installation.
If you’ve already installed Unity without Android support, you can still add Android tools from Unity Hub. On the Installs tab, click the gear icon next to the Unity version to which you want to add the Android tools, and then click Add Modules. Select Android Build Support checkbox, and then select Android SDK & NDK Tools and OpenJDK checkboxes.
Test Installation
Open Unity Hub.
On the Projects tab, click New Project. If you’ve installed multiple Unity versions, select the version you want to use to create the project.
Select the 3D core template, enter a project name and location, and then click Create Project.
After you’ve created the project, Unity adds it to Unity Hub from where you can manage the project.
After creating a project in Unity Hub, click the Editor Version for the project and select Apple Silicon if you're on a Mac with Apple silicon (M1 or later). Click Install Other Editor Version if you don't see the Apple Silicon option.
Headset Setup

Testing your app on a real headset prior to releasing it to users is an imperative step in the software development lifecycle. This section describes how to set up the Meta Quest headset for testing and debugging the app on a real headset.

In general, connecting the headset through a USB cable is the basic way to test the app on the real headset. In addition, you can use Android Debug Bridge (ADB) to perform advanced-level testing and debugging activities for Android apps or connect the headset via Meta Quest Developer Hub.

Test Connection through USB
You must set the headset into developer mode and connect it to the computer with a USB cable.

Put on the Meta Quest headset and sign into the developer account you want to use for development.
On the headset, go to Settings > System > Developer, and then turn on the USB Connection Dialog option. (Alternatively, you can open the Meta Quest mobile app, select the headset from the list, and then turn on the Developer Mode option.)
Connect the headset to the computer using a USB-C cable and put on the headset again. Click Allow when prompted to allow access to data.

To verify the connection:

Open the Unity project, and then on the menu, go to File > Build Settings.
In the Platform list, select Android, and then click Switch Platform. If the target platform is already set to Android, skip to the next step.
In the Run Device list, select the Meta headset. If you don't see the Meta headset in the list, click Refresh.

Connect Headset From Build Settings

Click Build And Run to run the app on the Meta headset.

Connect Headset via Meta Quest Developer Hub
Meta Quest Developer Hub is a standalone companion development tool that integrates the headset into your development workflow. For more information, see the Set Up section.

Enable Android Debug Bridge Debugging
Android Debug Bridge (ADB) is a command-line utility that lets you perform a variety of actions, such as installing and debugging apps, copying files, or running shell commands on the headset. It is included with the Android SDK tools installation and located inside the /Android/SDK/platform-tools/ folder.

Connect the device with USB.
If you're developing on Windows, download the OEM USB driver. If you're developing on macOS, skip to step 3, as you don't need any additional USB drivers.

a. Extract the oculus-adb-driver-2.0 zip file, go to the /oculus-go-adb-driver-2.0/usb_driver/ folder, and double-click the android_winusb.inf file.

Open Terminal on your computer and run the following command to check the connected device:

adb devices

Output:

List of devices attached
1PASH9BB939351  device

Install APK via ADB
When you build an app in Unity, it creates a .apk file, which is an Android executable file. You can install the .apk file manually on your headset for testing and debugging purposes by using ADB commands.

Open any Unity project that can build and go to File > Build Settings.
In the Platform list, select Android, and click Switch Platform. If the target platform is already set to Android, skip to the next step.
Click Build.
Open Terminal on your computer and run the following command to install the app:

adb install -r <path-to-apk>

Sample command:

adb install -r /Users/username/Unity-Sample-Projects/ballgame/playgame.apk

Output:

Performing Streamed Install
Success

Put on the headset, go to Library > Unknown Sources, and then run the app.

Additional Documentation for Using ADB
ADB is a versatile tool that lets you perform several debugging activities. There are a variety of ADB commands that you can run on your headset based on your requirements. For more information about ADB and its commands, go to Android Debug Bridge in the Android documentation.

ADB Troubleshooting
Computer is unable to detect the Oculus device.

There can be many reasons why your computer is unable to detect the headset. To begin with:

Ensure that you have turned on developer mode in the Meta Quest companion app on your mobile device.
Check whether the issue is caused by a faulty USB cable. Connect the device by using a secondary USB cable. If you do not have a secondary USB cable, connect any other Android device to verify whether the issue is with the cable.
Ensure that your computer has all the necessary permissions to access your headset. Typically, when you connect the device to the computer over a USB cable, you see a prompt asking you to permit the computer to access the device. If you denied permission by mistake, disconnect the USB cable, restart the device, and then connect the cable again. When prompted for permission, click Allow.

Terminal returns an error: adb command not found.

Begin by verifying whether adb is correctly installed. Go to the /Android/SDK/platform-tools/ folder and check for the adb tool. If the tool is missing, download the standalone Android SDK Platform-Tools package.
Check whether you've set the environment variables correctly. You can also run adb from the /Android/SDK/platform-tools/ folder by prefixing ./ to the adb command. For example, instead of adb devices, use ./adb devices.

Build and Run an App

To set up, test, and build a simple Unity project and run it on your headset, follow the Hello VR on Meta Quest Headset tutorial.

Use Meta Quest Developer Hub and Meta Quest Link

Meta Quest Developer Hub (MQDH) is a development app that helps you develop apps for the Meta Quest line of headsets. With it you can:

Use either a cable or Wi-Fi connection between your headset and the computer
Disable the proximity sensor and Boundary (Guardian) for an uninterrupted testing workflow
View the device logs to help with debugging
Capture screenshots and record videos of what you see in the headset (see Record a Video for more information)
Deploy apps directly to your headset from your computer
Upload apps to the developer dashboard for store distribution
Share your VR experience by casting the headset display to the computer
Download the latest Meta Quest tools and SDKs you need to build apps

Meta Quest Link is a feature of Meta Quest headsets that turns them into PC VR headsets. With this feature, you can connect your development PC to the headset via cable or Air Link over Wi-Fi. When connected, you can debug your app running on your headset from within your development environment.

Set Up

Setting up MQDH with the Meta Quest headset is quick and easy. It's a cross-platform desktop tool that runs on macOS and Windows. Before you begin the setup process, get your Meta Quest headset and a USB-C cable.

Install MQDH
Download the macOS or Windows installer.
Install the application.
Open the application and log in with your developer credentials, which must be the same credentials you used to log in on the headset.

META ACCOUNTS: As of January 1st, 2023, you must use a Meta account to log into Meta Quest developer surfaces, such as MQDH, and devices. For more information, read the blog post Introducing Meta Accounts: What Developers Need to Know.

Connect the Headset to MQDH
To use MQDH features, you must connect a Meta Quest device to the computer.

Put on the Meta Quest headset and sign into the developer account you want to use for development.
On the headset, go to Settings > System > Developer, and then turn on the USB Connection Dialog option. (Alternatively, you can open the Meta Quest mobile app, select the headset from the list, and then turn on the Developer Mode option.) See Set Up Development Environment and Headset for information on the Oculus mobile app.
Connect the headset to the computer using a USB-C cable and put on the headset again. Click Allow when prompted to allow access to data.
Accept Allow USB Debugging and Always allow from this computer when prompted on the headset.
Open MQDH.
On the MQDH navigation pane, choose Device Manager. All the devices you have set up are displayed in the main pane. Each device is shown with its status, which includes the device ID and connection status. The active device shows the green Active designator.

MQDH Connected Device Status

Select Headset Connected to MQDH
If there are multiple headsets connected to your computer, you can use the drop-down menu in the upper-right corner to select the headset currently connected to MQDH.

Device select drop-down

Set Up Headset from MQDH
You can use MQDH to set up new Meta Quest headsets and factory reset them.

Make sure that the headset is charged and turned on. Bluetooth must be enabled on your computer.
Open MQDH. On the navigation pane, choose Device Manager. Then, in the upper right of the main pane, click Set Up New Device. If you already have the headset set up in MQDH, the headset is available in the list located in the upper-right corner of MQDH.
Read the guidance and click Next.
Select the type of headset.
After MQDH has discovered the headset, click the headset ID, and then click Next.
Sign in with your developer account.
In the WiFi window, select the WiFi network you want to use. This should be the same network as your development computer or casting won't work.
On the Complete Setup from Your Headset window, put on the headset, and then follow the prompts.
After the headset reboots and MQDH reconnects to it, use the slider to turn on Developer Mode. Your developer account must be verified to turn on Developer Mode.
The final screen tells you how to enable the headset to use MQDH features. You must connect the headset to the computer via USB and allow access when prompted in the headset.
To set up multiple headsets, repeat the steps to connect other headsets.

Enable Android Debug Bridge (ADB) over WiFi
ADB is a command-line tool bundled with MQDH that enables you to communicate with your Meta Quest headset during development. With MQDH, you can connect the headset wirelessly to the computer using ADB over WiFi.

After you've connected the headset to MQDH over a USB-C cable, in MQDH Device Manager, under Device Actions, turn on ADB over Wi-Fi.

The status under ADB over WiFi changes to enabled.

MQDH ADB Status

Disconnect the USB cable from the headset to continue using your headset wirelessly.

For more information on ADB, see Use ADB with Meta Headsets.

Multiple Instances of ADB
ADB is a utility that is part of the Android SDK and is also part of the MQDH install. If you have previously installed the Android SDK, you may have a different ADB version already running on your computer. The different ADB instances can conflict with each other, giving you unexpected results. When MQDH detects multiple ADB paths on startup, it displays a warning message. If the warning message appears when you start MQDH, you can use the existing ADB version installed on your computer by changing the ADB path in MQDH.

To change the ADB path in MQDH

On the navigation pane, click Settings.
Modify the ADB path to let MQDH use the ADB instance located on your computer.

Turn off Proximity Sensor and Boundary
For development and debugging purposes, you can turn off the proximity sensor and Boundary (Guardian). However, we strongly recommend turning them back on when the device is no longer in development mode.

The proximity sensor is enabled to ensure that the headset goes to sleep when not in use. Boundary creates a virtual boundary to ensure your safety when you’re immersed in the VR experience. During development, you often debug directly in the dev environment without wearing the headset, so it’s safe to turn off these features. Turning them off makes it easy to capture screenshots, record videos, and step through code.

Note: Turning off the proximity sensor means your device will not go to sleep, so your battery won’t last as long. We highly recommend that you either turn the headset off, leave the device charging, or turn the proximity sensor back on when you don’t need it to be off.

On the MQDH Device Manager, under Device Actions, turn off Proximity Sensor and Guardian to keep the headset in the active state. You can also press the CTRL + Shift + P and CTRL + Shift + G keyboard shortcuts to turn the proximity sensor and Boundary on or off, respectively.

Change the Device Name
With MQDH, you can give your devices nicknames to make them easier to identify.

On the MQDH Device Manager, under Settings, click Change.
In the Device Nickname dialog, enter a new nickname for the device that's currently connected with MQDH.
Click Save.

Set Up Meta Quest Link
MQDH offers options to configure Meta Quest Link. From the MQDH Device Manager, you can do the following:

Enable and disable Link mode
Switch between Cable and Air Link

Link mode causes the device to behave like a PC VR headset until the mode is explicitly turned off.

To switch between Air Link and Cable modes:

On the navigation pane, choose Device Manager.
In the Device Actions pane, click the Select Mode dropdown and choose between Air Link and Cable. This sets the mode for the active headset.

Video and Image Files
You can use the MQDH dashboard to capture images and screen recordings. To find them after you have recorded them, in the navigation pane, click the File Manager tab. When you open the MQDH File Manager and click Videos or Images, all the media of that type is synchronized from the device to the PC desktop. This synchronization is not continuous, so you need to open the MQDH File Manager and click Videos or Images whenever you want to download new content. Note: Video files are not downloaded automatically; you need to open a file to download it.

MQDH Settings
To view and update MQDH settings, open MQDH. Then, in the navigation pane, select Settings. You'll find three tabs:

General, for general settings, including ADB Path, Download Large Device Files, and more.
About, which provides various information, such as the MQDH version and Terms of Service.
Notifications, where you can adjust your notification settings.

Updates

MQDH maintains a regular update cadence to ship new features and important bug fixes. It supports auto-update and you will be prompted to install the new release when it becomes available.

Next Steps

Once you’ve configured MQDH and the headset, these topics will help you learn more about MQDH features:

Debugging Tools: Learn how to use media tools to cast the headset display to the computer, record videos, and capture screenshots.
Create Custom Commands: Customize commands that combine multiple ADB commands, or create shortcuts for repetitive tasks.
Performance and Metrics: Capture real-time performance graphs and information.
Package Manager: One-stop shop to download all the necessary Meta Quest SDKs and tools.
Deploy Build on Headset: Drag and drop an .apk file from the computer to MQDH to install it on your headset, or upload it to the developer dashboard.

Use Meta Quest Link for App Development

Meta Quest Link helps you decrease your iteration time by launching the app you develop in Unity or Unreal directly in the Editor when you click Play(►). This eliminates the need to build the app on PC and deploy it to a Meta Quest headset every time you test your app during development.

This topic:

Outlines Meta Quest Link as a developer tool.
References resources for Link setup and basic usage.
Discusses useful settings and troubleshooting practices while testing your apps over Link.

Link is compatible with Meta Quest 2 and Meta Quest Pro headsets.

Cable

The Meta Quest Link cable is a fiber-optic cable accessory. It connects your PC to your Meta Quest headset over Link and provides 5m (~16ft) of play range. For product details and specs, see Link Cable and Where to purchase Meta Quest Link.

As an alternative to the Meta Quest Link cable, you can connect your PC to your headset over Link by using a high-quality USB cable. For an optimal and comfortable experience, we recommend a USB-C to USB-C cable with proven performance and a length of at least 3m (~10ft).

D-Link VR Air Bridge wireless adapter

Besides cable, you also have the option to use a wireless adapter that creates a dedicated Wi-Fi network between your Meta Quest headset and your PC. For more information, read D-Link VR Air Bridge.

Limitations of Meta Quest Link as a developer tool

Use Meta Quest Link for development purposes only. Keep in mind that:

Released apps on Meta Quest Store don’t have access to development/experimental features, even if these features are enabled in Link settings.

Important: To make sure all features work as intended prior to releasing to the Meta Quest Store, you must check your app on device first.

The visual appearance and performance characteristics of an app running over Link may differ from running it on a Meta Quest headset.

Link requirements, setup, and basic usage

To confirm your environment meets compatibility requirements, see Requirements to use Meta Quest Link.
To use Link, you must download and install the Oculus App. For setup and basic usage, read Use Quest Link with Meta Quest headsets.

Settings for development over Link and troubleshooting steps

Note: Before getting started, you must sign in to your developer account in the Oculus App and in your Meta Quest headset.

Open the Oculus App on your desktop and make sure you have the following settings.

Step 1. Activate OpenXR Runtime
Go to Settings > General. Next to OpenXR Runtime, select Set Oculus as active. If active, it'll be grayed out.

Set Oculus as active

Step 2. Toggle on Developer Runtime Features
In the Oculus App, go to Settings > Beta. Ensure the Developer Runtime Features option is toggled on.

Developer Runtime Features

Step 3. Ensure feature support over Link
Assume that you want to test an app over Link that supports passthrough, eye tracking, and natural facial expressions.

In the Oculus App, go to Settings > Beta. Ensure that over-Link support for these features is toggled on.

Important: The feature toggles appear only after you toggle on Developer Runtime Features. The complete flow is the following:

Settings Beta

After you toggle any of these features, read the dialog box. In the example below, the dialog box asks for consent to enable eye tracking. Click Turn On if you consent.

Dialog Box

Note: Similar dialog boxes may appear when you turn on other features, such as face tracking.

If your Unity or Unreal project is already open, restart the Editor after enabling the toggles.

Step 4. Ensure proper cable connection
Connect your headset using the Link cable and perform the following in the Oculus App:

Click Devices and ensure your headset is showing up. Select the connected device and click Device Setup in the right menu.

Device Setup

Click Link Cable > Connect Your Headset > Continue. Select Test Connection.

Test Connection

Ensure you get a Compatible connection message after the test is complete.

Compatible connection

If this test returns an Incompatible connection message, try a different cable.

Step 5. Check bandwidth
For Color Passthrough, the USB connection should provide an effective bandwidth of at least 2 Gbps. You can always measure the connection speed by using the USB speed tester built into the Oculus App:

Go to Devices.
Select the connected device.
Click USB Test and then Test Connection.

If the connection speed is low, try using a different cable and USB adapter.

Step 6. Check Link connection on headset
To ensure your headset connects to Link properly, follow these steps on your headset:

Go to Settings > System. Next to Quest Link, toggle on access.

Check connection on device

Basic Link usage for app development

As a developer, you can use Link in two modes:

Directly run the scene in the Unity Editor by hitting Play(►).

Run your project as a standalone PC app.

Regardless of the mode, the app collects full tracking data from your headset.

In most cases, running your app on PC over Link is similar to running it on the headset. While running the app on PC via Link (in both Play-in-Editor and standalone modes), you, as a user, see a 3D view of the app inside your headset as well as the normal app window on the PC screen.

Note: Make sure you enable a plug-in provider (e.g., Oculus XR Plug-in) under Edit > Project Settings > XR Plug-in Management > Windows, Linux and Mac settings.

Link Log gathering from Unity Editor and OculusLogGather tool

If you need to collect Link logs, you can do so through Unity Editor and the Oculus App.

In Unity Editor, go to Window > General > Console.

Unity Console

Click Play in the Editor while your headset is in PC Link. Click on the tri-dot (three dots) icon in the console’s top right and select Open Editor Log.

Open Editor Log

Locate OculusLogGather.exe on your hard disk. The location depends on where you have installed the Oculus App; typically, it is in the C:\Program Files\Oculus\Support\oculus-diagnostics folder.
Run the OculusLogGather.exe tool.

Key Terms (Glossary)

You should familiarize yourself with these common terms and concepts used in the Meta Quest domain.

App Lab - A way for Meta Quest third-party developers to distribute apps directly to consumers safely and securely, via direct links or platforms like SideQuest, without requiring store approval and without sideloading. App Lab apps are not promoted within the Quest Store and other discovery surfaces.
Augmented Reality (AR) - Integration of digital information with the user's environment in real time. Unlike virtual reality (VR), which creates a totally artificial environment, AR users experience a real-world environment with generated information overlaid on top of it.
Avatars - Using the Meta Avatars SDK, developers can provide user-created Meta Avatars to increase social presence and enhance VR immersion. Avatars are provided as modularized full-bodied torsos that allow users the flexibility to create their own unique identity, persistent across the Meta ecosystem. Leveraging advanced body tracking, realistic Avatar poses can be extrapolated from the Meta headset and Touch controllers to provide users a sense of self in a VR world.
Hand tracking - Enables the use of hands as an input method for Meta Quest headsets. When using hands as the input modality, hand tracking delivers a new sense of presence, enhances social engagement, and delivers more natural interactions with fully tracked hands and articulated fingers.
Meta Quest Link - Connects Quest hardware to the PC via USB or Wi-Fi. Useful for both consumer and development workflows.
Meta XR Audio SDK - Contains everything needed to create audio experiences for XR applications that properly localize sounds in 3D space and create a sense of space in the virtual environment, allowing users to be fully immersed in the auditory scene. The spatial audio rendering and room acoustics functionality included in the SDK improves and expands upon the legacy Oculus Spatializer, and this feature set will expand in future releases.
Meta XR All-in-One SDK - An all-in-one source for core features, components, scripts, and plugins to ease the app development process. It comes as a package that contains multiple SDKs.
Meta XR Core SDK - A package that contains essentials for developing in VR with Meta XR, including controllers, the boundary system, splash screens, and mixed reality APIs such as Passthrough, Scene, and Spatial Anchors.
Meta XR Interaction SDK - A library of modular, composable components that allows developers to implement a range of robust, standardized interactions (including grab, poke, raycast, and more) for controllers and hands. Interaction SDK also includes tooling to help developers build their own hand poses.
Meta Movement SDK - Supports the Avatar SDK by extending capabilities to third-party developers that don't have embodied avatars or choose not to adopt Meta's Avatar capabilities.
Meta XR Voice SDK - Enables you to enhance your users' app experience with voice for a more natural and flexible way to interact with it. For example, voice commands can shortcut controller actions with a single phrase, or interactive conversation can make the app more engaging.
Oculus Developer Hub (ODH) - A desktop companion app for Windows and macOS that streamlines Quest development. This tool simplifies common development tasks like device management, tool discovery and installation, and much more. Be sure to check out the ODH documentation.
Mixed Reality (MR) - A view of the real world where anywhere from 0% to 100% of the view is covered by rendered pixels. When 0% of the view is covered by rendered pixels, it's human vision. When 100% is covered by pixels, it's virtual reality. Everything in between falls under mixed reality.
Native - Creating a native app means building an engine from the ground up with programming languages and tools that are specific to a single platform.
Oculus - The former name of the Meta Quest line of products.
Oculus Start - A program created for developers who have either launched a VR application or are in the process of releasing one. Before applying to Oculus Start, be sure to be logged into your Oculus Developer account and have a build ready in your developer organization. This is not a program for beginner VR developers.
OpenXR - An open standard for accessing VR, AR, and MR platforms and devices.
OVRCameraRig - The OVRCameraRig Unity prefab provides the transform object to represent the Oculus tracking space. It contains a tracking space game object to fine-tune the relationship between the head tracking reference frame and your world.
Passthrough - A feature that provides a real-time 3D visualization of the physical world in the Meta Quest headsets.
Presence Platform - Enables VR/AR developers to build mixed reality experiences on Meta Quest, opening up new possibilities through environmental understanding, content placement and persistence, standardized hand interactions, and voice interaction capabilities. We have packaged, positioned, and named a suite of SDKs and APIs to form a cohesive offering focused on three areas of developer value: mixed reality, hand interactions, and voice.
Touch Controller - Self-tracked controllers featuring a compact design, realistic feel, and precision controls. Works with Meta Quest 2 and comes in-box with Meta Quest Pro. A compact charging dock comes with purchase.
Extended reality (XR) - A view of reality that includes all real-and-virtual combined environments and human-machine interactions. It includes representative forms such as augmented reality (AR), mixed reality (MR), and virtual reality (VR), and the areas interpolated among them. The levels of virtuality range from partial sensory inputs to immersive virtuality.

Configure Unity Settings

This topic describes basic Unity settings that help you optimize the app performance and quality, and utilize Meta features to ease the app development process. We recommend that you apply these settings as outlined in this guide to meet the minimum technical requirements as defined by Meta Quest store policies and guidelines. This topic does not cover all the settings from the Unity editor.

The Unity settings are project-specific and you need to configure these settings in a Unity project. Before you begin with the settings configuration, create a new Unity project or open the project in which you want to configure these settings.

Set Target Platform

Before you proceed with any other project settings, set the target platform for the app as each platform has unique settings.

The target platform for Oculus Quest and Quest 2 is Android. To submit your app to Oculus Store or App Lab, you need to create a release-ready package that users can install and run on their Oculus Quest headsets. The release-ready package contains the APK file, which has compiled source code, resources, assets, manifest file, and so on and an APK expansion file (OBB), if enabled.

On the menu, go to File > Build Settings.
Under Platform, select Android.
Click Switch Platform.

Note: Select Development Build to test and debug the app. When you're ready for the final build, clear the selection, as it may impact app performance.
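If you script project setup (for example, for CI), the same switch can be done through Unity's editor API. Below is a minimal sketch using EditorUserBuildSettings; the menu path and class name are hypothetical, and the script must live in an Editor folder.

using UnityEditor;

// Hypothetical editor helper: switches the active build target to Android,
// mirroring File > Build Settings > Android > Switch Platform.
// Place this script in an Editor folder.
public static class BuildTargetSetup
{
    [MenuItem("Tools/Meta Setup/Switch To Android")]
    public static void SwitchToAndroid()
    {
        // Equivalent to selecting Android and clicking Switch Platform.
        EditorUserBuildSettings.SwitchActiveBuildTarget(
            BuildTargetGroup.Android, BuildTarget.Android);

        // Equivalent to the Development Build checkbox; clear this for release builds.
        EditorUserBuildSettings.development = true;
    }
}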

General Settings

Add general details that uniquely identify your company and the app in the Unity editor. You only need to set this once, as all platforms share this information.

On the menu, go to Edit > Project Settings > Player.
In Company Name, type the name of your company. Unity uses this to locate the preferences file.
In Product Name, type the name of your app or product as you want it to appear on the menu bar when the app is running. Unity also uses this to locate the preferences file.
In Version, type the version number that identifies the iteration. For subsequent iterations, the number must be greater than the previous version number.
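If you prefer to apply these fields from code, they map to Unity's PlayerSettings editor API. A minimal sketch follows; the company, product, and version values are placeholders, and the menu path is hypothetical.

using UnityEditor;

// Hypothetical editor helper: sets the general identification fields,
// mirroring Edit > Project Settings > Player. Place in an Editor folder.
public static class GeneralSettingsSetup
{
    [MenuItem("Tools/Meta Setup/Apply General Settings")]
    public static void Apply()
    {
        PlayerSettings.companyName = "JaneDoeInc";              // placeholder company name
        PlayerSettings.productName = "Move Along with Colors";  // placeholder product name
        PlayerSettings.bundleVersion = "1.0.1";                 // placeholder version string
    }
}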

Package Identification Settings
The build system uses the package identification attributes to uniquely identify the app in the Meta Quest ecosystem. It sets the package name as the application ID in the build.gradle file and as the value of the package attribute in the Android manifest file.

On the menu, go to Edit > Project Settings > Player, and then expand the Other Settings tab. On the Other Settings tab, under Identification, do the following:

a. In Package Name, enter a unique package name. It must be unique within the Oculus ecosystem and the structure should be com.CompanyName.AppName. Feel free to deviate from the app’s title and choose a package name arbitrarily, if needed. For example, if the company’s name is Jane Doe Inc. and the app’s title is Move Along with Colors, the package name can be com.JaneDoeInc.colorgame.

b. In Version, type the version number that identifies the iteration. For subsequent iterations, the number must be greater than the previous version number. If you’ve set the version number in product details, Unity automatically populates the version number.

c. In Bundle Version Code, increment the existing version code. This version is used internally to determine whether one version is more recent than another, with higher numbers indicating more recent versions.

d. In Minimum API Level, set the minimum Android version to Android 10 (API level 29) for Oculus Quest and Oculus Quest 2.

e. In Target API Level, select Automatic (highest installed) to let the app support the highest Android version available.
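These identification settings can also be applied from an editor script through the PlayerSettings API. A minimal sketch, reusing the example package name from above; the menu path and exact values are placeholders.

using UnityEditor;

// Hypothetical editor helper: applies the package identification settings
// described above for the Android build target. Place in an Editor folder.
public static class IdentificationSetup
{
    [MenuItem("Tools/Meta Setup/Apply Identification Settings")]
    public static void Apply()
    {
        // Package Name (application ID), structured as com.CompanyName.AppName.
        PlayerSettings.SetApplicationIdentifier(
            BuildTargetGroup.Android, "com.JaneDoeInc.colorgame");

        // Bundle Version Code: increment for each new build you upload.
        PlayerSettings.Android.bundleVersionCode += 1;

        // Minimum API Level: Android 10 (API level 29).
        PlayerSettings.Android.minSdkVersion = AndroidSdkVersions.AndroidApiLevel29;

        // Target API Level: Automatic (highest installed).
        PlayerSettings.Android.targetSdkVersion = AndroidSdkVersions.AndroidApiLevelAuto;
    }
}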

Configuration Settings

Configure the following settings to fulfill the Meta Quest Store requirements.

On the menu, go to Edit > Project Settings > Player, and then expand the Other Settings tab. On the Other Settings tab, under Configuration, do the following:

a. In the Scripting Backend list, select IL2CPP as it provides better support for apps across a wider range of platforms.

b. Clear the ARMv7 checkbox and instead select the ARM64 checkbox. Only 64-bit apps are accepted on the Meta Store.

c. In the Install Location list, select Automatic to indicate that your app can be installed on the external storage, but you don’t have a preference of install location.
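The same configuration maps to the PlayerSettings API if you script your setup. A minimal sketch under that assumption; the menu path is hypothetical.

using UnityEditor;

// Hypothetical editor helper: applies the Configuration settings above
// (IL2CPP, ARM64 only, automatic install location). Place in an Editor folder.
public static class ConfigurationSetup
{
    [MenuItem("Tools/Meta Setup/Apply Configuration Settings")]
    public static void Apply()
    {
        // Scripting Backend: IL2CPP.
        PlayerSettings.SetScriptingBackend(
            BuildTargetGroup.Android, ScriptingImplementation.IL2CPP);

        // Target Architectures: ARM64 only (clears ARMv7).
        PlayerSettings.Android.targetArchitectures = AndroidArchitecture.ARM64;

        // Install Location: Automatic.
        PlayerSettings.Android.preferredInstallLocation =
            AndroidPreferredInstallLocation.Auto;
    }
}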

XR Management

Meta streamlines XR support via Unity’s XR Plugin Management package. To build immersive Meta Quest apps, we strongly recommend using Oculus XR Plugin in combination with the Meta XR Core SDK, which is available individually or as part of the Meta XR All-in-One SDK. The Install XR Plugin topic has detailed information about the XR plugin framework and instructions to install the plugin.

Rendering Settings

There are a variety of options and settings that let you optimize rendering. You can define a specific set of Graphics APIs, choose color space property, or enable multithreaded rendering to optimize performance.

On the menu, go to Edit > Project Settings > Player, and then expand the Other Settings tab. On the Other Settings tab, under Rendering, do the following:

a. Set the Color Space property to Linear for realistic rendering. It lets colors supplied to shaders within your scene brighten linearly as light intensities increase. For more information about selecting the correct color space for your project, see Unity's documentation on the gamma and linear color space workflow.

b. Clear Auto Graphics API to manually pick and order the Graphics APIs. We now recommend using the Vulkan API, as some of the newer features for Meta Quest devices are only supported with that API.

c. Select Multithreaded Rendering to move graphics API calls from the main thread to a separate worker thread.

d. Select Low Overhead Mode to skip error checking in release versions of an app. This is applicable to apps that use OpenGL ES API.
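If you script project setup, the rendering options above correspond to the following PlayerSettings calls. This is a sketch only; Low Overhead Mode is exposed through the Oculus XR Plugin settings rather than PlayerSettings and is not shown here. The menu path is hypothetical.

using UnityEditor;
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical editor helper: applies the Rendering settings described above.
// Place in an Editor folder.
public static class RenderingSetup
{
    [MenuItem("Tools/Meta Setup/Apply Rendering Settings")]
    public static void Apply()
    {
        // Color Space: Linear.
        PlayerSettings.colorSpace = ColorSpace.Linear;

        // Clear Auto Graphics API and put Vulkan first in the list.
        PlayerSettings.SetUseDefaultGraphicsAPIs(BuildTarget.Android, false);
        PlayerSettings.SetGraphicsAPIs(BuildTarget.Android,
            new[] { GraphicsDeviceType.Vulkan, GraphicsDeviceType.OpenGLES3 });

        // Multithreaded Rendering for Android.
        PlayerSettings.SetMobileMTRendering(BuildTargetGroup.Android, true);
    }
}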

Define Quality Settings

Several quality options let you define the graphical quality.

On the menu, go to Edit > Project Settings > Quality.
In Pixel Light Count, set the maximum number of pixel lights to one.
In the Texture Quality list, select Full Res to display textures at maximum resolution.
In the Anisotropic Textures list, select Per Texture.
In the Anti Aliasing list, select 4x. Unlike non-VR apps, VR apps must set the multisample anti-aliasing (MSAA) level appropriately high to compensate for stereo rendering, which reduces the effective horizontal resolution by 50%. You can also let OVRManager automatically select the appropriate MSAA level based on the headset.

Known Issue: If you are using Universal Render Pipeline (URP), you need to manually set the MSAA level to 4x. We are aware of the issue that URP does not set the MSAA level automatically. Once the fix is published, we will announce it on our Release Notes page.

Clear the Soft Particles checkbox.
Select the Realtime Reflection Probes checkbox to update reflection probes during gameplay.
Select the Billboards Face Camera checkbox to force billboards to face the camera while rendering instead of the camera plane.
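For reference, the same options are exposed at runtime through Unity's QualitySettings API. The sketch below mirrors the values above; in practice you would normally configure them once in the Quality settings asset rather than in code, and the class name is hypothetical.

using UnityEngine;

// Hypothetical helper: applies the quality settings described above in code.
public class QualitySetup : MonoBehaviour
{
    void Awake()
    {
        QualitySettings.pixelLightCount = 1;                                // Pixel Light Count
        QualitySettings.masterTextureLimit = 0;                             // Texture Quality: Full Res
        QualitySettings.anisotropicFiltering = AnisotropicFiltering.Enable; // Anisotropic Textures: Per Texture
        QualitySettings.antiAliasing = 4;                                   // Anti Aliasing: 4x MSAA
        QualitySettings.softParticles = false;                              // Soft Particles off
        QualitySettings.realtimeReflectionProbes = true;                    // Realtime Reflection Probes
        QualitySettings.billboardsFaceCameraPosition = true;                // Billboards Face Camera
    }
}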

Generate Android Manifest File
Every app built for Meta Quest headsets must contain the AndroidManifest.xml file. It is a vital part of an Android app, as it contains essential metadata such as permissions, package details, hardware and software support, supported Android version, and other important configurations.

Meta Quest has automated the process of adding this metadata to the manifest file, so you only need to generate the Android manifest file from Unity. This means you don't need to manually update the manifest file to add app details or specific hardware and software support that requests permissions.

For example, there's no need to manually add the package name or the minimum supported Android version to the manifest file. When you set the package name and select the minimum Android API level in Project Settings, Meta Quest automatically adds the corresponding element and the package attribute with its value.

From the hardware standpoint, all apps that target the Meta Quest headset are automatically compatible to run on Meta Quest 2. To retrieve the correct headset that the app is running on, set both Meta Quest and Meta Quest 2 as target headsets for the app. Based on the device type, Meta Quest automatically adds the corresponding elements for Meta Quest and for both Meta Quest and Meta Quest 2 in the manifest file. There is no need to update the Android manifest file manually.

Similarly, when you enable hand tracking from Unity, Meta Quest automatically adds the feature and sets the permission value based on the setting you’ve opted in Unity.

To generate the Meta Quest store-compatible Android manifest file, on the menu, go to Oculus > Tools > Create store-compatible AndroidManifest.xml.

Install, Uninstall, and Upgrade XR Plugin

XR is an umbrella term that covers Virtual Reality (VR), Mixed Reality (MR), and Augmented Reality (AR). Meta streamlines XR support via Unity’s XR Plugin Management package. The package provides deep platform integration across XR platforms and streamlines the development process of XR applications.

How does this work?

Unity provides the Oculus XR Plugin to develop Unity applications for Meta Quest headsets. It exposes settings for common lifecycle management and runtime settings such as rendering modes, depth buffer sharing, and latency optimization. Essential utilities for VR development are available in the Meta XR Core SDK, which is distributed on the Unity Asset Store and the NPM Registry. This SDK contains the OVRPlugin, which provides built-in editor support and several additional features. The OVRPlugin combined with Unity's Oculus XR Plugin allows Unity to talk to the OpenXR, VRAPI, and CAPI backends on Meta Quest headsets. To accelerate OpenXR adoption and allow you to seamlessly target a wide range of AR/VR headsets, Meta has made the OpenXR runtime the default backend. Starting with the Meta XR SDKs version 31, all new features are available on the OpenXR backend only.

Note: The OpenXR backend mentioned in this topic is different than Unity OpenXR plugin.

Meta XR Architecture Diagram

Why use the Meta XR Core SDK?
While Unity's Oculus XR Plugin provides the base functionality for getting an XR application running on a Meta Quest headset, we recommend using the Meta XR Core SDK package if the project requires the latest features. It contains the latest version of OVRPlugin as well as handy C# scripts that expose Meta Quest features that are not yet provided through Unity's APIs. For example, advanced features such as Presence Platform, voice, hand tracking, interaction, and many more are surfaced through the Meta XR Core SDK and not through the Unity API.

Install Oculus XR Plugin

Note: Installing the Oculus XR Plugin does not download and import the Meta XR SDKs into your project. Make sure you download the Meta XR SDKs you need from the Unity Asset Store.

There are two ways to install the Oculus XR Plugin from the Unity editor: from the Unity Package Manager or via the XR Plugin Management interface. This section describes both methods in detail.

From Unity Package Manager
Unity Package Manager hosts a collection of packages from which you can easily install, uninstall, or update packages. The Oculus XR Plugin package ships with built-in XR Management support. To install the Oculus XR Plugin from the package manager:

On the menu, go to Window > Package Manager.
From the Packages list, select Unity Registry to view all the packages available in the Unity package registry.
Select and expand the Oculus XR Plugin package from the list of packages.
Click the See other versions link to see the list of all available versions, and then select the latest version.
In the detailed view, click Install.

From XR Plugin Management Interface
On the menu, go to Edit > Project Settings, and then select XR Plug-in Management. Click Install XR Plugin Management if the package is not installed already.

On the Android tab, select the Oculus checkbox to install the Oculus XR Plugin.
On the menu, go to Window > Package Manager, and then expand Oculus XR Plugin. If you can't find Oculus XR Plugin in the list, from the Packages list, select Unity Registry to see all the available packages from the registry.
Click the See other versions link and select the latest version from the available list.

Note: Unity may not install the latest Oculus XR Plugin version. Instead, it installs the Oculus XR Plugin version that is verified to work with the Unity editor version you are using. To highlight the verified version, Unity adds a Verified or R indicator next to the version in the list. In that case, it is safe to update the plugin to the latest version even if it is pending verification.

In the detailed view, click Update.

Upgrade or Switch to Another Oculus XR Plugin Version

You can upgrade to the latest available Oculus XR Plugin version or switch to any available version from the Package Manager window. A few runtime settings may be available only on the latest version.

On the menu, go to Window > Package Manager.
From the Packages list, select Unity Registry, and then expand the Oculus XR Plugin package.
Click the See other versions link to see the list of all available versions, and then select the version you want.
In the detailed view, click Upgrade.

Uninstall Oculus XR Plugin

Disabling the Oculus XR Plugin package from the XR Plugin Management interface doesn’t automatically uninstall the plugin. You need to remove the plugin from the Package Manager window.

On the menu, go to Window > Package Manager.
From the Packages list, select In Project, and then select the Oculus XR Plugin package.
In the detailed view, click Remove.

Add Camera Rig Using OVRCameraRig

The Meta XR Core SDK contains the OVRCameraRig prefab that provides the transform object to represent the Oculus tracking space. It contains a tracking space game object to fine-tune the relationship between the head tracking reference frame and your world. Under the tracking space object, you will find a center eye anchor, which is the main Unity camera, two anchor game objects for each eye, and left and right hand anchors for controllers. It also contains a custom VR camera, which replaces Unity’s conventional camera.

How Does This Work?

When you enable VR support in Unity, your headset automatically passes the head and positional tracking reference to Unity, so the camera's position and orientation closely match the user's position and orientation in the real world. The head-tracked pose values override the camera's transform values, which means the camera is always in a position relative to the player object.

In a typical first- or third-person setup, instead of having a stationary camera, you may want the camera to follow or track the player object. The player object can be a character in motion, such as an avatar, a car, or a gun turret. To move the camera to follow the player object, you can either make the camera a child of the player object or have an object track the player, and the camera, in turn, follows that object. Based on your app design, you may want to create a script that references the player object and attach the script to OVRCameraRig.
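For example, a minimal follow script along these lines could be attached to OVRCameraRig; the script and its fields are hypothetical illustrations, not part of the SDK.

using UnityEngine;

// Hypothetical example: keeps the camera rig positioned relative to a moving
// player object (e.g. an avatar or vehicle). Attach to OVRCameraRig and assign
// the player transform in the Inspector.
public class FollowPlayer : MonoBehaviour
{
    public Transform player;              // the player object to follow
    public Vector3 offset = Vector3.zero; // optional positional offset

    void LateUpdate()
    {
        if (player == null) return;

        // Move the rig with the player; head tracking still drives the
        // camera's local pose inside the rig.
        transform.position = player.position + offset;
        transform.rotation = player.rotation;
    }
}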

Add OVRCameraRig in the Scene

OVRCameraRig is a replacement for Unity's main camera, which means you can safely delete Unity's main camera from the Hierarchy tab. The primary benefit of using OVRCameraRig is that it provides access to OVRManager, which provides the main interface to the VR hardware. Before you add OVRCameraRig, make sure you have downloaded the Meta XR Core SDK as an individual package or as part of the Meta XR All-in-One SDK and enabled VR support.

From the Hierarchy tab, right-click Main Camera, and click Delete.
In the Project tab, expand the Assets > Oculus > VR > Prefab folder, and drag and drop the OVRCameraRig prefab into the scene. You can also drag and drop it in the Hierarchy tab.

Configure Settings

There are two main scripts attached to the OVRCameraRig prefab: OVRCameraRig.cs and OVRManager.cs. Each script provides settings for camera, display, tracking, quality, and performance of your app.

To begin with the settings, in the Hierarchy tab, select OVRCameraRig, and in the Inspector tab, review the following settings:

OVRCameraRig Settings
OVRCameraRig.cs is a component that controls stereo rendering and head tracking. It maintains three child anchor transforms at the poses of the left and right eyes, as well as a virtual center eye that is halfway between them. It is the main interface between Unity and the cameras, and it is attached to a prefab that makes it easy to add comfortable VR support to a scene.

Note: All camera control should be done through this component.
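As a quick illustration of working through the rig rather than individual cameras, a script can read the tracked anchors that the OVRCameraRig component exposes (centerEyeAnchor, leftHandAnchor, and so on). The logging component below is only a sketch.

using UnityEngine;

// Sketch: read tracked poses through OVRCameraRig instead of touching cameras directly.
// Assumes the standard anchor properties exposed by the OVRCameraRig component.
public class HeadPoseLogger : MonoBehaviour
{
    [SerializeField] private OVRCameraRig rig;

    void Update()
    {
        if (rig == null) return;
        Transform head = rig.centerEyeAnchor;     // main camera pose
        Transform leftHand = rig.leftHandAnchor;  // left controller pose
        Debug.Log($"Head: {head.position}  Left hand: {leftHand.position}");
    }
}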

Use Per Eye Camera: Select this option to use separate cameras for the left and right eyes.

Use Fixed Update For Tracking: Select this option to update all the tracked anchors in the FixedUpdate() method instead of the Update() method to favor physics fidelity. However, if the fixed update rate doesn’t match the rendering frame rate, which is derived by using OVRManager.display.appFramerate, the anchors visibly judder.

Disable Eye Anchor Cameras: Select this option to disable the cameras on the eye anchors. In this case, the main camera of the game is used to provide the VR rendering and the tracking space anchors are updated to provide reference poses.

OVRManager Settings

OVRManager.cs is the main interface to the VR hardware and is added to the OVRCameraRig prefab. It is a singleton that exposes the Oculus SDK to Unity, and includes helper functions that use the stored Meta variables to help configure the camera behavior. It can be a part of any app object and should only be declared once.

Target Devices

All apps that target Meta Quest are automatically compatible with Meta Quest 2. However, when you query the headset type that the app is running on, Meta returns Meta Quest even if the headset is a Meta Quest 2, for the best compatibility. If you want to precisely identify the headset type, select both Meta Quest and Meta Quest 2 as target devices. In this case, when you query the headset type, Meta returns the exact headset that the app is running on. Based on the target headsets, Meta automatically adds the appropriate element for Meta Quest, or for both Meta Quest and Meta Quest 2, to the Android Manifest file. There is no need to update the Android Manifest file manually.

When apps target both headsets, check the headset type to optimize the app. Call OVRManager.systemHeadsetType() to return the headset type that the app is running on. For example, the method returns Oculus_Quest or Oculus_Quest_2 depending on the headset type.
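A minimal sketch of such a check is shown below. It assumes the systemHeadsetType API mentioned above is exposed as a static member of OVRManager, and the exact enum names may vary between SDK versions.

using UnityEngine;

// Sketch: branch on the reported headset type (member and enum names assumed from the text above).
public class HeadsetCheck : MonoBehaviour
{
    void Start()
    {
        var headset = OVRManager.systemHeadsetType;
        if (headset == OVRPlugin.SystemHeadset.Oculus_Quest_2)
        {
            // For example, enable higher-quality settings tuned for Quest 2.
            QualitySettings.antiAliasing = 4;
        }
        Debug.Log($"Running on: {headset}");
    }
}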

Performance and Quality

Use Recommended MSAA Level: True, by default. Select this option to let OVRManager automatically choose the appropriate MSAA level based on the Meta device. For example, the MSAA level is set to 4x for Meta Quest. Currently supported only for Unity’s built-in render pipeline.

Note: For the Universal Render Pipeline (URP), manually set the MSAA level to 4x. We are aware of the issue where URP does not set the MSAA level automatically and will announce the fix on the Release Notes page when it is available.

Monoscopic: If true, both eyes see the same image rendered from the center eye pose, saving performance on low-end devices. We do not recommend using this setting as it doesn’t provide the correct experience in VR.

Min Render Scale: Sets the minimum bound for Adaptive Resolution (default value is 0.7).

Max Render Scale (Rift only): Sets the maximum bound for Adaptive Resolution (default value is 1.0).

Head Pose Relative Offset Rotation: Sets the relative offset rotation of head poses.

Head Pose Relative Offset Translation: Sets the relative offset translation of head poses.

Profiler TCP Port: The TCP listening port of the Oculus Profiler Service, which is activated in debug or development builds. When the app is running in the editor or on a device, go to Oculus > Tools > Oculus Profiler Panel to view the real-time system metrics.

Tracking

Tracking Origin Type: Sets the tracking origin type.

Eye Level tracks the position and orientation relative to the device’s position.

Floor Level tracks the position and orientation relative to the floor, whose height is decided through boundary setup.

Stage also tracks the position and orientation relative to the floor. On Quest, the Stage tracking origin will not directly respond to user recentering.
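The tracking origin can also be selected from a script at runtime. The sketch below assumes the trackingOriginType property on the OVRManager singleton; treat the exact member names as an assumption that may differ between SDK versions.

using UnityEngine;

// Sketch: select the tracking origin from code (property and enum names assumed).
public class TrackingOriginSetup : MonoBehaviour
{
    void Start()
    {
        if (OVRManager.instance != null)
        {
            // Floor Level keeps the origin on the floor decided during boundary setup.
            OVRManager.instance.trackingOriginType = OVRManager.TrackingOrigin.FloorLevel;
        }
    }
}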

Use Positional Tracking: When enabled, head tracking affects the position of the virtual cameras.

Use IPD in Positional Tracking: When enabled, the distance between the user’s eyes affects the position of each OVRCameraRig’s cameras.

Allow Recenter: Select this option to reset the pose when the user clicks the Reset View option from the universal menu. You should select this option for apps with a stationary position in the virtual world and allow the Reset View option to place the user back to a predefined location (such as a cockpit seat). Do not select this option if you have a locomotion system, because resetting the view effectively teleports the user to potentially invalid locations.

Late Controller Update: Select this option to update the pose of the controllers immediately before rendering for lower latency between real-world and virtual controller movement. If controller poses are used for simulation/physics, the position may be slightly behind the position used for rendering (~10 ms). Any calculations done at simulation time may not exactly match the controller’s rendered position.

Display

Remaster your app by setting a specific color space at runtime to overcome the color variation that may occur due to different color spaces in use.

From the Color Gamut list, select the specific color space. For more information about the available color gamut primaries, go to the Set Specific Color Space topic.

Quest Features

There are certain settings that are applicable to Meta Quest only.

Focus Aware: Select this option to allow users to access system UI without context switching away from the app. For more information about enabling focus awareness, go to the Enable Focus Awareness for System Overlays topic.

Hand Tracking Support: From the list, select the type of input affordance for your app. For more information about setting up hand tracking, go to the Set Up Hand Tracking topic.

Hand Tracking Frequency: From the list, select the hand tracking frequency. A higher frequency allows for better gesture detection and lower latencies but reserves some performance headroom from the application’s budget. For more information, go to the Set High Frequency Hand Tracking section.

Requires System Keyboard: Select this option to allow users to interact with a system keyboard. For more information, go to the Enable Keyboard Overlay in Unity topic.

System Splash Screen: Click Select to open a list of 2D textures and select the image you want to set as the splash screen.

Allow Optional 3DoF Head Tracking: Select this option to support 3DoF along with 6DoF and let the app run without head tracking, for example, in low-lighting conditions. When your app supports 3DoF, Meta automatically sets the headtracking value to false in the Android Manifest. When the checkbox is not selected, in other words the app supports only 6DoF, the headtracking value is set to true.

Android Build Settings

The shader stripping feature lets you skip unused shaders from compilation to significantly reduce the player build time. Select Skip Unneeded Shaders to enable shader stripping. For more information about understanding different tiers and stripping shaders, go to the Strip Unused Shaders topic.

Security

Custom Security XML Path: If you don’t want Meta to generate a security XML and instead use your own XML, specify the XML file path.

Disable Backups: Select this option to ensure private user information is not inadvertently exposed to unauthorized parties or insecure locations. It adds the allowBackup="false" flag in the AndroidManifest.xml file.

Enable NSC Configuration: Select this option to prevent the app or any embedded SDK from initiating cleartext HTTP connections and force the app to use HTTPS encryption.

Mixed Reality Capture

Mixed Reality Capture (MRC) places real-world objects in VR. In other words, it combines images from the real world with the virtual one. To enable mixed reality support, select Show Properties, and then select enableMixedReality. For more information about setting up mixed reality capture, go to the Unity Mixed Reality Capture guide.

Build and Configuration Overview

Unity

All-In-One VR

PC VR

Quest

Rift


This topic outlines instructions and describes important considerations about the following:

Build settings for a Unity project
Quality and rendering settings to optimize the build time and app performance in Unity

For quick hands-on instructions on creating a very simple project that builds and runs on a Meta Quest headset, see the Hello VR on Meta Quest Headset tutorial.

Select Build Platform

This section describes how to set target platform and customize settings for your build. Before you proceed with any other project settings, set the target platform for the app as each platform has unique settings.

The target platform for Oculus Quest and Quest 2 is Android and the final output is a .apk file.

1. On the menu, go to File > Build Settings.
2. Under Platform, select Android.
3. Set Texture Compression to ASTC.
4. From Run Device, select the Oculus headset to load the app on the headset. The device is listed only if it is connected to your computer over USB. This is an optional step and is usually helpful to test the app on a real headset.
5. Clear the Development Build selection for the final build as it may impact performance.
6. Click Switch Platform.

Add Scenes

The Scenes In Build pane displays a list of scenes from the project that Unity includes in the build.

1. In the Scenes In Build pane, click Add Open Scenes. Alternatively, you can drag scenes from the Assets folder into this window.
2. Clear the scene selection for the ones you want to exclude from the build. This excludes the scene from the build, not from the list.
3. To adjust the order of the scenes, drag them up or down the list. Numbers on the right indicate the scene index, which is mostly used in scripting APIs.

Configure Settings

Before you generate the build, there are a variety of additional settings you can also consider. These settings define VR support, package details to uniquely identify your app in the Meta Quest store, and optimize app performance.

Note: If you have followed instructions in Configure Settings, you can skip this step.

1. Enable VR support.
2. Add package details to provide the package name and the Android version your app supports.
3. Perform quality settings to define graphical quality such as texture quality, pixel light count, or anti-aliasing level.
4. Perform rendering settings to select a specific set of Graphics APIs, choose the color space property, or enable multithreaded rendering.

Strip Unused Shaders

Before you generate the final build, we highly recommend that you strip unused shaders, which significantly reduces the build time.

Generate Build

To generate the final build, you have two options:

To build your app into a Player, click Build and provide a location to save your build. This option builds the ready-to-upload executable. It does not automatically deploy or run your app on the headset.

To build your app into a Player and run it on the headset, click Build And Run and provide a location to save your build. The Build And Run option requires you to connect the headset to your computer over a USB-C cable. As mentioned in step 4 of Select Build Platform, select the device from the Run Device list. If your device is not connected or has connection or detection issues, the Build And Run option fails to generate the build.

Improve Build Times

Before you are ready for the final build, your app undergoes many changes, which often require you to rebuild it. In general, Unity’s build options are time consuming because Unity compiles the entire project instead of generating deltas from your previous build.

To reduce build times, we highly recommend that you use OVR Build APK. If you are building your app exclusively for debug purposes, you can also use OVR Quick Scene Preview. These tools are specifically designed for Android apps and significantly expedite build iterations.

Optimize Build Iterations and Use Quick Scene Preview

Unity

All-In-One VR

Quest


The build process is a significant part of the app development lifecycle. The time the system takes to build, deploy, run, and repeat is known as iteration time. Before you can test even the smallest change on the Meta Quest headset, the system needs to package and deploy that change to the headset. Faster iteration time is pivotal when it comes to making changes in the app, considering the number of iterations an app can undergo before the final build.

To expedite the iteration process, use OVR Build APK and OVR Quick Scene Preview, which are both part of the Meta XR Core SDK. This can be downloaded individually as a standalone package or bundled as part of the Meta XR All-in-One SDK.

Prerequisites

Prior to using these tools, check the following prerequisites.

Use Unity version 2021 LTS or higher.
Use the Windows operating system for development (the macOS development platform does not support these features).

OVR Build APK

The OVR Build APK tool runs a command that utilizes the Gradle cache to speed up the build process. It starts by launching Unity’s build and export functionality, and once the initial build is compiled, it uses the Gradle cache to only update the delta between builds. This way, it doesn’t rebuild files that are not part of the change, and therefore reduces the build and deploy time by 10 to 50% compared to Unity’s build time. There is no change in the final .apk file; it is identical to the one that Unity’s build produces.

To use OVR Build APK:

1. On the menu, go to File > Build Settings and select the scenes that you want to build.
2. Make sure Android is the target build platform. If not, select Android and click Switch Platform.
3. On the menu, go to Oculus > OVR Build > OVR Build APK....

Screengrab of OVR Build APK menu

This opens the OVR Build APK window. The tool contains several options to configure your build:

Built APK Path - The generated .apk file will be copied to this location.
Version Number - This corresponds to the version listing of your app in the Developer Dashboard. To upload an app to the store, it must have a higher version number than the currently uploaded version.
Auto-Increment? - If true, the version number is incremented by 1 upon a successful build.
Install & Run on Device? - If true, upon build completion, the generated .apk file is installed and launched on the connected device, listed to the right.
Development Build? - This is equivalent to the Development Build checkbox in Unity’s build settings. If true, the build offers debug functionality, but cannot be uploaded to the Meta Quest store.
Save Keystore Passwords? - If true, the password required to sign .apk builds is saved across instances of the Unity editor. Otherwise, Unity’s default behavior applies, erasing keystore passwords between instances of the Unity editor.

Screengrab of the OVR Build APK menu

If desired, you can also use the Oculus > OVR Build > OVR Build APK & Run command, which automatically generates a build and launches the APK on the connected headset.

Auto Increment Version Code

When uploading a new build to a release channel, it must have a different version than the build it is replacing. Not incrementing the version code is a common cause of failures when uploading your APK. Unity includes functionality that automatically increments the version code by 1 every time a build is created. It is highly recommended that you enable this feature. In Unity, you can do so in two ways from the menu bar:

Go to Oculus > OVR Build > OVR Build APK... or OVR Build APK & Run. In the OVR Build APK window that opens, check Auto Increment? next to Version Number.
Go to Oculus > Tools and select Auto Increment Version Code?.

Both methods do the same thing, and enabling it in one place does so in the other.

OVR Quick Scene Preview

OVR Quick Scene Preview uses Unity’s Asset Bundle system to reduce deployment time by hot reloading changes. The first time it builds the .apk file, the file contains the project’s code along with the asset bundle loader script. Based on asset type, it breaks assets into individual asset bundles and deploys them to an external folder on the device. For example, asset bundles can be models, textures, audio, or an entire scene. The next time you make a change to an asset, it builds and deploys only the bundles that contain changes, and therefore reduces the overall iteration time.

To use OVR Quick Scene Preview:

1. To build and deploy a transition scene APK to your device, click Build and Deploy App. A transition scene loads scenes as asset bundles.
2. To add the scenes you’re developing, click Open Build Settings. Single scenes work best, but if your project loads scenes additively or you want to see the transition between two scenes, add all the scenes. For multiple scenes, ensure that your project loads scenes by name and not by index. If scenes are loaded by index, consider changing it to name (see the sketch after these steps).
3. To build your scenes onto your device, click Build and Deploy Scene(s). The first time you build, the process is slower than subsequent builds. You have the option to force restart the app, if needed. By default, the scene in the transition APK tries to hot reload your changes.
4. Preview the scene in your Meta Quest headset.
5. The next time you make changes to a scene, save everything and click Build and Deploy Scene(s).
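For reference, loading by name rather than by build index uses Unity’s standard SceneManager API; the scene name below is a placeholder for one of your own scenes.

using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: load scenes by name rather than by build index.
// "MainScene" is a placeholder for one of your project's scene names.
public class SceneLoader : MonoBehaviour
{
    public void LoadMain()
    {
        SceneManager.LoadScene("MainScene");   // preferred: load by name
        // SceneManager.LoadScene(2);          // avoid: load by build index
    }
}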

Screengrab of OVR Quick Scene Preview

The tool also contains several helpful options in the Utilities section:

Delete Device Bundles - Delete all asset bundles that are deployed on the headset.
Delete Local Bundles - Delete all asset bundles that are built locally.
Use optional APK package name - If checked, changes the transition APK package name to com.your.project.transition.
Launch App - Launches the app on the headset.
Open Build Settings - Opens Unity’s Build Settings window.
Uninstall APK - Removes the transition APK from the headset.
Clear Log - Clears the log in the tool.

Strip Unused Shaders

Unity

All-In-One VR

Quest


Unity’s default Built-In Render Pipeline offers three graphics tiers, Tier 1, Tier 2, and Tier 3, to customize the built-in shader compilation and rendering quality, and they correspond to target platforms for which you’re developing apps.

Understand Shader Stripping

By default, Android apps developed in Unity load shaders from only Tier 2. Shaders from Tier 1 and Tier 3 are still built, which increases the build time, even though the app never loads them. The shader stripping mechanism lets you easily skip unused shaders from compilation to significantly reduce the player build time.

If you choose to strip unused shaders, at shader compilation time the system starts by checking the target platform. If it is Android, it modifies the list of shaders to remove Tier 1 and Tier 3 shaders and compiles shaders from only Tier 2.

Shader stripping when using a Scriptable Render Pipeline, such as the prebuilt Universal Render Pipeline, may remove shaders unintentionally and should be avoided.

Enable Shader Stripping

The shader stripping feature is only available for Meta Quest headsets that run on Android and is supported on Unity versions 2018.2 and higher. Make sure that the target platform is set to Android.

1. From the Hierarchy View, select OVRCameraRig. If you’re using OVRPlayerController, expand it, and then select OVRCameraRig. For more information about using the OVRCameraRig prefab, go to the Add Camera Rig using OVRCameraRig topic.
2. From the Inspector View, in the OVR Manager script, under Android Build Settings, select Skip Unneeded Shaders.

Tutorial - Create Your First VR App on Meta Quest Headset

Unity

All-In-One VR

PC VR

Quest


This tutorial describes the essential steps to:

Set up a Unity project that can build and run on a Meta Quest headset.
Develop a basic “hello world” Unity project comprising a single scene with a cube GameObject rendered on the headset.
Configure Camera settings.
Build, run, and test a project on a Meta Quest headset.

This tutorial is the primary reference for starting Meta Quest development quickly.

Hello World app running on a Meta Quest 2


Prerequisites

Before reading further, ensure you have followed all steps in the Set Up Development Environment and Meta Quest Developer Hub guides. By following these guides, your development environment will be ready for your first project in Unity and your Meta Quest headset will be accessible in the Meta Quest Developer Hub.

Glossary

Meta XR All-in-One SDK - An all-in-one source for core features, components, scripts, and plugins to ease the app development process.
OVRPlugin - A plugin that allows Unity to communicate with the VR Runtime of Meta Quest.
Oculus XR Plugin - An extension of the Unity VR subsystem that communicates with OVRPlugin.

Note: The tutorial uses Unity Editor version 2021.3.20f1 and Meta XR All-in-One SDK v59. Screenshots might differ if you are using other versions, but functionality is similar.

Step 1. Connect headset over USB

You must set the headset into developer mode and connect it to your computer with a USB cable.

1. Put on your Meta Quest headset and sign into the account you want to use for development.
2. On the headset, go to Settings > System > Developer, and turn on the USB Connection Dialog option.
3. Connect the headset to your computer using a USB-C cable.
4. When prompted to allow access to data, select Allow.

Allow connection to headset

Step 2. Create new project

Leave your headset aside for now and follow this process:

1. Open Unity Hub.
2. On the Projects tab, select New project.
3. If you have installed multiple Unity versions, select the version you want for this project.
4. Select the 3D Core template.
5. Under Project Settings, enter HelloWorld as the Project name, choose the location to store it, and select Create project.

Create Unity project

Step 3. Create a Cube GameObject to display to user

Although not essential, this step will help you confirm that a basic primitive renders on your Meta Quest headset without any issues.

In Unity Editor, go to GameObject > 3D Object > Cube.

Add Cube GameObject

Update the cube’s Position to [0, 0, 2] and Scale to [0.2, 0.2, 0.2].

Update Cube GameObject

Save your project.

You should now have a cube GameObject in your SampleScene.
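If you prefer to apply the same transform from a script instead of the Inspector, a minimal equivalent looks like this; the component name is illustrative and not required by the tutorial.

using UnityEngine;

// Illustrative only: positions and scales the cube the same way as the Inspector steps above.
public class CubeSetup : MonoBehaviour
{
    void Start()
    {
        transform.localPosition = new Vector3(0f, 0f, 2f);
        transform.localScale = new Vector3(0.2f, 0.2f, 0.2f);
    }
}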

Step 4. Import Meta XR All-in-One SDK from the Unity Asset Store

Later, you will learn how to import only the SDKs you need so that you reduce the size of your app, but for now, import all of it.

Navigate to Window > Package Manager to open the Unity Package Manager pane.

Go to the Unity Asset Store page and log in if needed.

Select the Add to My Assets button.

Select Open in Unity to start the integration process with the Package Manager in Unity. If asked, allow Asset Store Links to be opened by Unity.

Wait for the Unity Package Manager window to open.

On the “Meta XR All-in-One SDK” pane, click Install.

When prompted to restart Unity, click Restart Editor.

Restart

Step 5. Set up Android device Build Settings

The target platform for Meta Quest headsets is Android and the final output is an .apk file.

In Unity Editor, go to File > Build Settings. In the Platform list, select Android, and select Switch Platform.

Switch Platform

While still in the Build Settings window, focus on the Run Device list and select your Meta Quest headset. If you don’t see the headset in the list, ensure you have connected it properly and select Refresh.

Run Device settings

Step 6. Run Unity Project Setup Tool

Navigate to Oculus > Tools > Project Setup Tool. The Unity Project Setup Tool has rules for creating a new application with Meta Quest in Unity.

In the checklist under the Android icon tab of the Project Setup Tool, select Fix All.

Project Setup Tool Fix All

This applies the required settings for creating Meta Quest XR apps, including setting the minimum API version, using ARM64, and installing the Oculus XR Plug-in and XR Plug-in Management package.

If you still see Recommended Items in the list, select Apply All.

Project Setup Tool Apply All

This optimizes project settings for Meta Quest Unity apps, including texture and graphics settings.

Repeat Step 6 for the Windows, Mac, Linux settings as needed until you see no recommendations.

Step 7. Add OVRCameraRig to scene

The Meta XR Core SDK contains the OVRCameraRig prefab, which is a replacement for Unity’s main camera. This means you can safely delete Unity’s main camera from the Hierarchy tab.

The primary benefit of using OVRCameraRig is that it offers the OVRManager component (OVRManager.cs script). OVRManager provides the main interface to the VR hardware.

Follow this process:

1. Under the Hierarchy tab, right-click Main Camera, and select Delete.
2. In the Project tab, search for Camera Rig, and drag the OVRCameraRig prefab into your scene. Alternatively, drag it into the Hierarchy tab.

Add OVRCameraRig

Ensure your project’s hierarchy looks like this:

OVRCameraRig Hierarchy

Select OVRCameraRig in the Hierarchy tab. With the OVRCameraRig selected in the Inspector, under the OVR Manager component, ensure your headset is selected under Target Devices.

OVRCameraRig Devices

Step 8. Build and run project on headset

Go to File > Build Settings and, under Scenes, select Add Open Scenes. This should list your open scene.

Add Open Scenes

1. Select Build And Run and choose a name for your .apk file, for example, HelloWorld.apk.
2. Put on your Meta Quest to experience your Hello World app.

Note: For faster iteration times, you can use Meta Quest Link, which helps you test your Unity projects without building your project. For details, read Use Meta Quest Link for App Development.

Additional troubleshooting steps

Error message: Execution failed for task ':launcher:packageRelease'. A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade...

You might have conflicts with a prior Android Studio installation on your computer. To resolve this, try deleting or renaming the debug.keystore and debug.keystore.lock files in your \Users\<username>\.android folder. Restart Unity, and build your project again.

For additional troubleshooting, read the Troubleshooting guide.

Integrate Boundary

To build an immersive app, the app needs to track and translate a user’s real-world movement in the virtual world. It needs to know when the user leaves the viewing area of the tracking camera and loses position tracking, which can be a very jarring experience for the user.

The boundary system ensures user safety and offers an uninterrupted experience whenever the user puts on the headset in a new environment. It prompts users to map out a boundary, which should be an unobstructed floor space. Based on the perimeter they draw in space to define an outer boundary, Meta Quest automatically calculates an axis-aligned bounding box known as the play area. When a user’s head or controllers approach the boundary, the system provides visual cues. It superimposes in-app wall and floor markers in the form of a translucent mesh grid, and the passthrough camera view fades in to help users avoid real-world objects outside of the play area. The following image shows an active boundary system, where the user’s hands that are holding controllers protrude through the boundary.

How Does This Work?

The outer boundary is the complex set of points and interconnecting lines that the user draws when they set up the boundary. The play area is an automatically generated rectangle, an axis-aligned bounding box, within the outer boundary.

Positional tracking requires users to define the outer boundary. The boundary system validates the user-defined boundary for the minimum required space. A roomscale experience requires a minimum amount of unobstructed floor space so that the user can move around freely through the immersive experience. We recommend a 9 ft x 9 ft space with a 6 ft x 6 ft playable area free of obstructions. A stationary experience, also known as a seated experience, is a compact alternative designed for smaller spaces. It does not promote much movement beyond reaching with the arms or leaning from the torso to interact with in-app objects.

There are two types of tracking space: local and stage. The default tracking space is local, which means re-centering works as normal. This is the correct behavior for most apps. Some apps may want to remain anchored to the same space for the duration of the experience because they lay out according to the user’s boundary bounds. For example, an app may dynamically lay out furniture to take advantage of the full play area defined by the user. These apps may want to use the stage tracking space. It has its origin on the floor at the center of the play area with its forward direction pointed towards one of the edges of the bounding box. This is not changed by the user-initiated re-center. However, it may still change mid-app if a user walks from one play area to another, so you may want to double-check the bounds when the app returns from a paused state.

Interact with Boundary System

The OVRBoundary class provides access to the boundary system. There are several ways you can interact with the boundary system to create an immersive experience.

Note: With Oculus Integration v31 to v57 and Meta XR Core SDK v59 and up, there are some APIs that are deprecated for the OpenXR backend. We strongly discourage you from using the deprecated APIs as we will no longer upgrade or support them. They will continue to remain available for legacy apps and produce compiler warnings. The deprecated APIs are: enum value OVRBoundary.BoundaryType.OuterBoundary, struct OVRBoundary.BoundaryTestResult, OVRBoundary.TestNode(), OVRBoundary.TestPoint(), OVRBoundary.GetVisible(), and OVRBoundary.SetVisible().

Use GetGeometry and GetDimensions to query the outer boundary or play area and pass the boundary type as either BoundaryType.OuterBoundary, which closely matches the user’s mapped out boundary, or BoundaryType.PlayArea, which is an axis-aligned bounding box inset within the outer boundary.

GetGeometry returns an array of up to 256 points that define the outer boundary area or play area in a clockwise order at floor level. All points are returned in local tracking space shared by tracked nodes and accessible through OVRCameraRig’s trackingSpace anchor. GetDimensions returns a Vector3 containing the width, height, and depth in tracking space units, with height always returning 0. Use this information to set up a virtual world with boundaries that align with the real world, and render important and interactive scene objects within the play area. Possible use cases include pausing the game if the user leaves the play area, or placing geometry in the world based on boundary points to create a natural integrated barrier with in-scene objects. For example, adjust the size of a cockpit based on the play area or display the user’s origin point in the scene to help users position themselves.
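A minimal sketch of querying the play area through OVRManager.boundary is shown below. It assumes the user has already set up a boundary on the headset, and what you do with the results is illustrative.

using UnityEngine;

// Sketch: read the play area geometry and dimensions in local tracking space.
// Assumes a boundary has already been configured on the headset.
public class PlayAreaProbe : MonoBehaviour
{
    void Start()
    {
        OVRBoundary boundary = OVRManager.boundary;
        if (boundary == null || !boundary.GetConfigured()) return;

        Vector3[] points = boundary.GetGeometry(OVRBoundary.BoundaryType.PlayArea);
        Vector3 dimensions = boundary.GetDimensions(OVRBoundary.BoundaryType.PlayArea);

        Debug.Log($"Play area: {points.Length} points, " +
                  $"{dimensions.x:F2} m x {dimensions.z:F2} m");
    }
}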

The boundary is displayed whenever the headset or one of the controllers is violating the boundary. To check the boundary’s current visibility status, call GetVisible(), which returns true if the boundary is actively displayed. You can use this information to hide objects if the boundary is visible. Call SetVisible() and pass the visibility value as true or false to show or hide boundaries. Consider exceptional cases when setting the boundary visibility. Meta Quest will override app requests under certain conditions. For example, setting boundary area visibility to false will fail if a tracked device is close enough to trigger the boundary’s automatic display, and setting the visibility to true will fail if the user has disabled the visual display of the boundary system.

To query the location of the controllers and headset relative to the specified boundary type, call TestNode(), which returns an OVRBoundary.BoundaryTestResult, and pass the node and the boundary type. Node values are Node.HandLeft for the left-hand controller, Node.HandRight for the right-hand controller, or Node.Head for the head. The boundary type is either BoundaryType.OuterBoundary or BoundaryType.PlayArea. Apps can also query arbitrary points relative to the boundary by using TestPoint(), which takes the point coordinates in the tracking space as a Vector3 and the boundary type as arguments. Both of these methods return the OVRBoundary.BoundaryTestResult structure, which includes the following fields:

IsTriggering - A boolean that indicates whether or not there is a triggering interaction between the node and the boundary.
ClosestDistance - The distance of the node from the boundary.
ClosestPoint - The closest point to the node that resides on the surface of the specified boundary.
ClosestPointNormal - The geometrical normal vector for the closest point, relative to the boundary surface.
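The sketch below shows how these fields might be read via TestNode. Note that, as called out above, these boundary test APIs are deprecated for the OpenXR backend, so treat this as a legacy-only illustration.

using UnityEngine;

// Legacy-only sketch: check how close the head is to the outer boundary.
// These boundary test APIs are deprecated for the OpenXR backend (see note above).
public class HeadBoundaryCheck : MonoBehaviour
{
    void Update()
    {
        OVRBoundary.BoundaryTestResult result = OVRManager.boundary.TestNode(
            OVRBoundary.Node.Head, OVRBoundary.BoundaryType.OuterBoundary);

        if (result.IsTriggering)
        {
            Debug.Log($"Head is {result.ClosestDistance:F2} m from the boundary.");
        }
    }
}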

Mixed Reality Experience and Use Cases

Mixed Reality, or MR, uses the physical environment around users to bring your app into the world directly around them. It offers combined and unique experiences that blend virtual objects, and the interaction with them, with the real world around users.

Mixed Reality is a foundational pillar of the current and future Meta Quest headsets and is a key part of delivering next-level experiences to users in the Meta Quest ecosystem. In this document you will explore:

Some of the use cases for MR
Relevant SDKs and APIs that you can use to add MR attributes to your apps
How MR furthers the vision of an interconnected metaverse for Meta Quest

The Meta Quest Presence Platform contains SDKs and APIs tailor-made to help build MR experiences on Meta devices. Blending design and use case considerations with these features will help you get started with your MR projects.

Before continuing, it’s important to distinguish between VR, AR, and MR:

Virtual Reality (VR) aims to fully immerse the user in a world built from the ground up for virtual experiences. In VR, a user wearing a Meta Quest headset is fully transported to a virtual place to enjoy virtual experiences. The following shows a virtual creature in an entirely virtual space that a user can interact with.

World Beyond VR Sample

Augmented Reality (AR) involves augmenting, or supplementing, the physical world around a user by superimposing virtual elements. AR utilizes the existing reality around a user to generate experiences that appear in the real world.

Mixed Reality (MR) is a combination and evolution of both VR and AR. It is inclusive of both technologies while also offering a new medium of interaction by allowing virtual elements to interact with both the user and the physical space around the user. It’s important to note that, because MR is inclusive of VR and AR, MR experiences can combine a varying amount of each technology to create an MR experience.

World Beyond MR Sample

Types of MR Experiences

Static MR Experiences

Static MR allows users to enjoy experiences while seated at their coffee table, in their bedroom, or even at their work desk.

Scenarios: With a combination of augmented reality (AR) and VR the flat surface of their desk can house a miniaturized strategy game, or host a collocated card or board game like Chess. Users can lean back and enjoy a large-scale digital event, like a concert, with a screen the size of their wall while their friends’ Avatars are collocated in the room with them.

With Static MR, developers can build around users being in a relatively fixed location. This location can be transformed and altered with the use of AR, and by displaying and interacting with virtual objects.

Dynamic MR Experiences

Dynamic MR includes experiences that push the boundaries of the physical space where MR experiences can be contained.

Scenarios: Imagine a house transformed into a haunted house through MR, fending off an invasion breaking through the walls of a user’s bedroom, or active full body exercise experiences in a living room with a personal trainer.

These Dynamic MR experiences are built around engaging the user physically or involve movement between multiple rooms or locations.

2D & Classic Style Experiences

2D & Classic styled MR typically includes static experiences based on interacting with 2D surfaces and planes.

Scenarios: A user can play a strategy game with units moving around their coffee table, a card game with their living room in full view, or a puzzle game with the pieces of the puzzle floating in places around their play area.

Users can enjoy a sense of awareness of their surroundings while enjoying these familiar experiences, and even navigate their play area by transforming these games and experiences with MR.

Some popular examples of bringing an otherwise 2D, seated or static virtual experience into MR, include:

Solving puzzles by using pieces floating in various nearby spaces in their play area.
Playing a card game, either solo or with other users, and bringing the table and stakes into their area.
Watching a hit TV show on a massive virtual screen while on vacation.
Placing a virtual game table on top of their own table to roll dice, move characters, and complete quests.

One of the first steps in bringing a 2D experience into MR and into a user’s physical environment is using the cameras on a Meta Quest headset to view the surrounding environment. This can be accomplished with the Passthrough API.

Passthrough API

At a base level, MR experiences start with a combination of virtual and augmented reality. Using Quest APIs like the Passthrough API allows you to begin combining these two technologies to bring familiar experiences to users in a new light.

The Passthrough API provides a real-time visualization of the world around a user while wearing a Meta Quest headset. The external facing cameras generate a passthrough layer of the user’s physical environment, which is then replaced with a rendition of the environment by an XR compositor.

Integrating the Passthrough API is the first step in combining virtual and augmented reality experiences that users can enjoy. Having a user mark a surface during passthrough setup allows you to build experiences around those reliably defined surface areas.

An example of this would be having a user define their desk, then using that defined surface to build MR experiences. Users could play a strategy game on their desk or enjoy a top down experience like classic arcade games. Using Passthrough in this way also enables users to interact with virtual objects placed on their defined surface area.

For more information about the Passthrough API, see the Unity, Unreal, or Native documentation.

Audio SDK

Once a virtual element is blended into a user’s physical environment, convincing audio can help increase the plausibility of the MR experience. A user hearing a sound coming from an element in their physical environment can prompt curiosity, engagement, and the exploration of virtual elements in their space.

The Audio SDK supports spatialization features that transform monophonic sounds and change their origination point in the area around the user. Well-designed audio creates an additional level of presence for the virtual elements in a user’s physical space and is a crucial component of persuading users to interact with the virtual elements in their environment.

Virtual Companions & Objects

You can transform familiar 2D experiences with MR, as interacting with virtual objects and companions brings an additional level of familiarity.

Scenarios: This can take the form of a player playing with a virtual companion hanging out in their room. This companion is aware of their real-world living space and can respond to voice and hand gestures. Such experiences can also include a virtual companion interacting with virtual objects, or virtual representations of real objects in virtual space.

A common throughline of MR experiences is the ability to candidly interact with virtual spaces, objects, or companions. After using Passthrough to identify surfaces and objects in their play space, these objects can dynamically interact with the area. This gives players a chance to interact with it through the various input forms available on Meta Quest headsets.

It’s important to clearly define how users interact with both virtual companions and objects in an MR experience. The different interactions available to a user ultimately defines their relationship to the virtual elements in the physical space around them.

Interaction SDK

The Interaction SDK contains a library of components that allow you to add controller and hand based interactions to your experience. By combining features like Passthrough and the components of the Interaction SDK, you can enable your users to interact with virtually placed objects or companions.

Placing a virtual object or a responsive virtual companion within a user’s space can serve to heighten their engagement with the experience you’ve created for them.

The first and most common form of interaction in VR and MR is through controllers. Integrating controllers enables experiences like petting a dog, wielding a sword, piloting a spaceship, or painting a beautiful 3D canvas in the room around them. The Meta Quest and Meta Quest Pro controllers offer eight combined buttons and triggers, plus 6DOF, to simulate various interactions. Integrating controllers into your MR experience allows for precise inputs through buttons, triggers, and movements, and haptic feedback when interacting with virtual objects or companions.

Using features like hand tracking allows users to use their own hands to interact with the virtual objects to do things like dribble a ball, roll dice, or solve a puzzle. Combining this with Passthrough can allow users to pick up and place virtual objects in their personal space.

The Voice SDK can allow your users to have a conversation with a virtual companion or give voice commands that have recognizable results in the world around them.

Mixed Reality Multiplayer

While solitary MR experiences can be extremely engaging, cooperative or competitive experiences with friends and rivals can make memorable experiences that can consistently bring users back.

Scenarios: Being able to share MR experiences with friends and rivals can consistently bring users back to play and compete again and again. Competing in a ping pong game, or playing with shared virtual objects in local multiplayer experiences through Passthrough can create some truly engaging and memorable moments.

Platform SDK

Platform SDK is the basis for multiplayer on Meta Quest, and allows you to easily join users together in your experiences with APIs like Group Presence, App to App Travel, Leaderboards, and more.

Scenarios: By integrating Platform SDK in your MR app, users will be able to find and invite friends, travel to an experience together in a group, or invite friends to specific destination-based experiences within your app.

Movement SDK

Integrating the Movement SDK allows you to track your user’s body, face, and eyes to further immerse users and their friends in an MR experience, allowing things like gestures and facial expressions to be communicated while using an app. It can also be used for fitness experiences to track the user’s body and guide them through exercises and other workouts.

Scenarios: Integrating the Movement SDK for your experience, allows users to more reliably interact in an MR environment. Face tracking allows users to truly embody a character or their avatar in a social environment with genuinely reflected facial expressions and body tracking allows for more immersive active experiences like fitness or party games.

Shared Spatial Anchors

Combining Platform SDK features with MR-specific tools like Shared Spatial Anchors can transform and augment a user’s local multiplayer experience. Shared Spatial Anchors can be created by a user and then virtually shared with other users collocated in the same physical space. This results in a shared virtual frame of reference for multiple users, who can share the same perspective in terms of where and how digital objects attached to shared anchors are placed.

Scenarios: Building on the previous examples, multiple users can create spatial anchors then share them with collocated friends to bounce a virtual ball off their living room wall. Any user that joins the room with these shared anchors would be able to join in the experience and interact with the virtual objects.

The Shared Spatial Anchors Sample provides a tutorial on incorporating them into a Unity project and can be used as a foundation to build upon for your Unity based MR apps.

Building an MR Scene

After bringing virtual objects into a user’s play area through MR, the next step would be to physically incorporate the walls, floors, and objects around the user and bring them into the MR experience. Doing so expands the concept of a play area into true room-scale MR experiences.

Scenarios: Imagine interacting with alien ships flying in from the walls, floors, or through the ceiling. Or building an amusement park that goes from the bedroom through the kitchen and around all the furniture in the living room.

One of the most common ideas present in the world of Mixed Reality is users interacting with something coming from their walls, floors, or ceiling. With the Scene API you can build an experience that includes all of these surfaces, plus objects like a user’s desk, couch, chair, windows, and more. With these in place, you can blend virtual content into the user’s physical space.

Scene API

Scene API includes Scene Capture and Scene Model, which work together to build an up-to-date representation of the user’s physical space, including walls, furniture, and so on. These can be indexed and queried. The user’s scene model consists of Scene Anchors that consist of geometric components and semantic labels.

An example of a room-scaled MR game that uses scene capabilities, passthrough, voice, and hand tracking is the World Beyond Sample App which is available on App Lab.

Mixed Reality Use Cases

With all of the building blocks in place, these SDKs and APIs can be combined to create an experience for your users that uniquely combines the virtual world and the physical world.

Educational MR Experiences

Integrating MR in a classroom through shared spatial anchors that reflect a historical setting and various historical artifacts allows students to pick them up and interact with them to learn more about a specific time in history. Or those same colocated students could take notes in a physical notebook while virtually dissecting a frog for a science lesson on biology.

Your users can engage in mixed reality educational experiences that leverage SDKs and APIs like the Platform SDK and Spatial Anchors to build, interact with, and learn from persistent virtual objects in their classroom.

Entertainment MR Experiences

Transforming a living room or a bedroom into a concert venue, complete with a dynamic screen on a user’s wall, is just one of the ways MR can transform and innovate entertainment experiences.

Entertainment in Mixed Reality can be done in a wide variety of ways. Some additional examples include:

Users watching their favorite sports team play a match from end to end on their office desk or their bedroom wall
Virtually joining an esports event and bringing the crowd sound into their room with the players playing for a championship right in their living room
Enhancing local entertainment experiences by using co-presence to join another user watching a VR experience of a concert or live sporting event

Fitness and Wellness MR Experiences

You can work on Fitness MR experiences by using Passthrough to bring a personal trainer’s avatar into someone’s living space to help them exercise. Additionally, objects like hand weights or other exercise equipment can have virtual equivalents that users can exercise with.

Future updates could include things like hand weights to incorporate a wider variety of exercises that users can partake in from the comfort of their own home.

Fitness and Wellness experiences can be further enhanced by incorporating the Movement SDK for body tracking, or hand tracking for accurate hand positioning tracking.

Productivity MR Experiences

You can achieve and enhance productivity in MR in a variety of ways. For example, giving users a virtual office space, like Horizon Workrooms, can allow them to customize where they choose to work.

The Tracked Keyboard SDK allows users to view their own keyboard in their virtual environment. By combining this with the hand tracking and Meta Avatars, you can allow users to fully express themselves even while in a productive, professional setting.

Tutorial - Unity SharedSpaces Showcase App

Unity

All-In-One VR

Quest


The SharedSpaces showcase app demonstrates how to combine features in the Platform SDK with a networking solution to get users together in your app quickly. This showcase app uses the Photon Realtime networking stack and Unity Netcode for GameObjects.

By following and completing this tutorial you will:

Enable key Platform features like Destinations in the Meta Quest Dev Center and complete the Data Use Checkup
See an example of connecting a networking solution to your Meta Quest app and Unity
Upload an app to a release channel and distribute it to registered users

SharedSpaces Showcase Breakdown

The SharedSpaces showcase is divided into three distinct layers that work together to build a multiplayer experience on your Meta Quest headset.

The Meta Quest layer handles the Platform APIs (Group Presence, Destinations, lobbies, and match ids). This layer utilizes the presence information to find and connect with followers.

The Photon Realtime layer provides the transport layer for sending messages to other players.

The Netcode for GameObjects layer handles the replication of GameObjects.

The Meta Quest Layer

On the Meta Quest Layer, there are several steps you must take. You must enable Destinations for your app and use the lobby_session_id identifier for the user’s Group Presence. You can also use the match_session_id to more precisely group users together in your app.
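For orientation, setting a user’s Group Presence with these identifiers generally looks like the Platform SDK sketch below. The destination API name and session IDs are placeholders, and the exact option names may differ between SDK versions.

using Oculus.Platform;

// Sketch: set Group Presence with lobby and match session IDs (values are placeholders).
public static class PresenceExample
{
    public static void SetPresence()
    {
        var options = new GroupPresenceOptions();
        options.SetDestinationApiName("red_room");   // placeholder destination API name
        options.SetLobbySessionId("lobby-1234");      // groups users into the same lobby
        options.SetMatchSessionId("match-5678");      // optional, more precise grouping
        options.SetIsJoinable(true);
        GroupPresence.Set(options);
    }
}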

The Photon Realtime Layer

With your destinations created and Group Presence enabled, you now need to set up the transport layer which sends messages to other players. The transport layer handles the routing of packets between users in a shared app experience.

In this showcase, Photon uses the concept of rooms, where users in the same match or lobby instance are placed in the same Photon room. Each Photon room has a unique name, which is pulled directly from either the MatchSessionID string or the LobbySessionID string. Photon rooms keep track of the oldest member in the room, called the master client, who acts as the primary peer.
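As a rough sketch of that naming scheme (not the showcase’s actual code), joining a Photon Realtime room keyed by the session ID might look like the following. The client setup and callback registration are omitted, and the Photon API surface shown is an assumption based on Photon Realtime’s LoadBalancingClient.

using Photon.Realtime;

// Rough sketch only: use the lobby/match session ID as the Photon room name.
// Assumes Photon Realtime's LoadBalancingClient API; connection setup and
// callback registration are omitted for brevity.
public static class PhotonRoomJoin
{
    public static void JoinSessionRoom(LoadBalancingClient client, string sessionId)
    {
        var enterParams = new EnterRoomParams
        {
            RoomName = sessionId,
            RoomOptions = new RoomOptions { MaxPlayers = 8 } // illustrative limit
        };
        client.OpJoinOrCreateRoom(enterParams);
    }
}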

The primary peer acts as the listen-server and can accept connections from other players attempting to join the room.

In the following example, the user with a star next to their name is designated as the primary peer, and becomes the listen-server for the Photon room.

Shared Spaces Server Demo

When the primary peer leaves the Photon room, host migration occurs and a new master client is selected.

Shared Spaces Host Migration

Read Unity Use Case Networking for a use case describing the setup and use of the third-party Photon networking solution.

The Unity Netcode Layer

With Photon Networking in place the final layer is the Game Replication Layer. This layer handles communication from the transport layer (Photon in this showcase) to the connected players in the Photon room.

Extensive information about Netcode for GameObjects can be found on the Unity Documentation site.

Setting up the Shared Spaces Showcase App

To begin setting up the Shared Spaces app, you will first need to clone the SharedSpaces repo, create and configure Destinations and other settings in the Meta Quest Developer Center, and create a free Photon account.

Clone the SharedSpaces Showcase App

You can clone the Shared Spaces app directly from the SharedSpaces Github repo. Ensure you have Git LFS installed and run this command in the terminal:

git lfs install

You can then clone the SharedSpaces repo using the following command:

git clone

After cloning the repo, you can run the showcase by opening the project folder in Unity and opening the scene located at Assets/SharedSpaces/Scenes/Startup.

Set up a new Quest Dev Center App

After opening the showcase in your Unity project, you will need to set up a new app in the Meta Quest Developer Center.

After logging into your organization, select Create a New App to open the window.

Enter a name for your app and, because this is a showcase app, it is recommended that you select the Quest (App Lab) platform.

Dev Center App Type

Once finished, click Create and your new app will be created and added to your organization.

Next, select the API tab to view your App ID. The App ID for your created app will be used when initializing the Platform SDK for your app.

Destinations

Now that you’ve cloned the SharedSpaces app and created a corresponding app in your Meta Quest Developer Center, you will need to create the associated destinations for the showcase app.

Destinations are designated places in your app such as a level, map, multiplayer server, or a specific configuration of an activity that can be deep linked to for users to immediately join an experience.

Destinations in your app can be configured to provide the most relevant context based on the user’s activity and whether they are in a joinable state or not.

To begin setting up Destinations for your app, navigate to the Meta Quest Developer Center and select Platform Services.

Dev Center Platform Services

In Platform Services you will see a grid with all of the currently available Platform Services that can be enabled for your selected app.

Platform Services Grid

Use the following steps to create a new destination for your app:

1. Select Add Service under the Destinations option, then select either Create a Single Destination or Create Multiple Destinations. When selecting Create a Single Destination, you will be taken to the New Destination page.

New Destination Setup

2. Enter the details for your destination into the available fields and ensure that the Deeplink Type is set to Enabled and the Audience is set to Everyone.
3. Select Submit for Review when finished.

For this example, your Destinations should consist of: The Lobby, The Red Room, The Green Room, The Blue Room, and The Purple Room. The Purple Room is set as a public room in this app.

Complete Destinations Setup

If you choose to create multiple destinations, you can upload a .TSV file containing the details for all the destinations in your app. A template for creating multiple destinations can be downloaded from the Create Multiple Destinations window.

In the template, it’s important to only use approved destination headers. More information about the approved destinations headers can be found on Destinations Implementation.

Once your destinations are created, you can view and edit them on the Destinations screen.

Reporting Services

Any multiplayer application in the Quest ecosystem must have reporting capabilities for users to report behavior that conflicts with the Code of Conduct for Virtual Experiences. The available options for user reporting are the User Reporting Plugin and the User Reporting Service.

The User Reporting Plugin connects your pre-existing reporting flow to your Meta Quest app. Once implemented, users can access your reporting flow by pressing the Meta Quest button.

Check the User Reporting Plugin documentation for detailed implementation information.

The User Reporting Service is housed in the Meta Quest Developer Center, and can be configured for your app after it is created. Check the User Reporting Service documentation for detailed information about the service, or use the following steps for quick set up instructions.

Use the following steps to set up User Reporting for your app:

1. Select the User Reporting tab from the left-hand menu to open the Customize User Reporting screen.

User Reporting

2. In the Receive reports by email field, enter an email address that user reports will be directed to. These emails are encrypted on the Meta servers and deleted after sending to protect user information.
3. Select the Report reasons you’d like users to choose from when creating a report. It is recommended that you select between 5 and 7 reasons so users can make an accurate choice for the report they want to send.
4. Optionally, customize the reporting flow by adding custom reasons, localization, and a branding image.

Custom Reports

5. Once finished, click Activate to enable the User Reporting Service for your app.

Data Use Checkup

Next, you have to complete a Data Use Checkup to enable Platform Services for your app. The Data Use Checkup is required for your app to access certain platform features for multiplayer experiences. Your app’s access to platform features is contingent upon a successful review of its Data Use Checkup.

Use the following steps to set up and complete the app’s Data Use Checkup:

1. Select the Data Use Checkup tab from the left-hand Quest menu.
2. In the Request to Access Platform Features section, select the following features and usage options:
   - User ID, User Profile, Deep Linking, Friends, Invites
   - Use Destinations, Group Presence and App Deep Linking
   - Use Multiplayer with Photon/Playfab
   - View Oculus username
   - Display friends to invite
   - Enable users to invite others
3. Once finished, click Submit Requests and a summary of your selected choices will appear in the request window.

Data Use Checkup Window

4. You must have a privacy policy in place prior to completing your Data Use Checkup. Paste the link for your privacy policy into the Privacy Policy URL field.
5. Select the compliance checkbox, then click Submit for Review to submit your Data Use Checkup requests.

Once finished, the selected Platform features will be set to Approved.

Setup Photon Account for Networking

After setting up Group Presence in your project, you will need to select and implement a networking solution for your app. In Shared Spaces, this solution is Photon networking.

First you’ll need to configure the NetDriver with your own Photon account. To set up a Photon account, do the following:

1. Visit photonengine.com and create a user account. After creating your account, you will be on the Photon dashboard.
2. Click Create a New App. Ensure that the Application Type is set to Multiplayer Game and the Photon SDK is set to Realtime.
3. After finishing the new app form, set the type to “Photon Realtime”, then click Create.
4. Next, you will need the App ID from your created app. You can find and copy the App ID string from the Photon dashboard, where your newly created Photon Realtime app will be displayed.

Note: Be sure to keep your Photon App ID somewhere secure.

Paste your App ID into the Assets/Photon/Resources/PhotonAppSettings asset, and the Photon Realtime transport should now be enabled. You can verify that the transport is working by checking for networking traffic on your Photon account’s dashboard.

Build SharedSpaces in Unity

After completing setup in the Meta Quest Developer Center and setting up networking with Photon, you are now ready to configure the SharedSpaces showcase app in Unity and run it on a Quest headset.

Before proceeding, it is recommended that you complete all of the steps in the Set Up Development Environment and Meta Quest Developer Hub guides. This will ensure that you are ready to develop in Unity.

Import the Integration SDK from Meta Quest Developer Center

The Integration SDK contains all of the general SDKs necessary for developing in Unity including:

OVRPlugin
Audio SDK
Platform SDK
Lipsync SDK
Voice SDK
Interaction SDK

While not every one of these SDKs is necessary for SharedSpaces, downloading the Integration SDK ensures that you have everything you need to configure and run the app.

For information on importing the Integration SDK into your Unity project, complete Step 4 of the Hello VR on Meta Quest Headset tutorial.

Configure Unity Project Build Settings

Because SharedSpaces runs on a Meta Quest headset, there are some Unity Build Settings you will need to configure to ensure your project can successfully run.

Follow the steps detailed in Step 5 of the Hello VR on Meta Quest Headset tutorial.

Connect to the App ID and Set Target Platform

After installing the Integration SDK, you can begin working with the SharedSpaces showcase and connecting it to the Destinations that you set up earlier.

Navigate to Assets > SharedSpaces > Scenes and select the Startup scene to load it into Unity.

If you receive a popup window titled TMP Importer, click Import TMP Essentials to import the necessary TextMesh Pro resources.

TMP Importer

Next, SharedSpaces needs to be connected to the App ID for your app in the Meta Quest Developer Center.

Use the following steps to do so:

1. Navigate to Assets > Resources and select the OculusPlatformSettings.asset file.
2. Input your App ID into the Oculus Go/Quest or Gear VR field. You can also input your App ID into the Oculus Rift field if you plan to build for Windows.

Platform Settings Inspector

3. Update your Bundle Identifier to a unique name for your SharedSpaces showcase app.
4. Next, in the Project folder, select OVRPlatformToolSettings.asset and set the target platform to Quest.

Select Target Platform

Connect to Photon

After connecting the Unity app to your App ID in the Meta Quest Developer Center, you will need to connect the pre-packaged networking solutions to the previously created Photon App ID.

Use the following steps to do so:

1. From the Project folder, navigate to Assets > Photon > Resources and select the PhotonAppSettings.asset file.
2. In the Inspector, copy your Photon App ID and paste it into the App Id Realtime, App Id Chat, and App Id Voice fields.

Photon App Settings

3. Once finished, save your Unity project to save your changes.

Create a Keystore

Before uploading your app, you will need to create a new keystore for the app in Unity settings.

Use the following steps to create a new keystore:

1. Navigate to File > Build Settings and select Android as the platform.
2. Select Player Settings, then select Player from the Project Settings list.

Player Settings

3. Select the Android icon, and under Settings for Android expand the Publishing Settings menu.
4. Select Keystore Manager to open the keystore manager window.

Keystore Manager

5. In the Keystore Manager window, select Keystore > Create New and save to either Anywhere or In Dedicated Location.

Create New Keystore

6. After selecting a location for your keystore, create a password and alias for the Key Values.
7. Once finished, click Add Key to create the keystore for your app.

Upload the SharedSpaces Build and Run SharedSpaces

After saving your project, you’re now ready to build your app and run it on a Quest headset.

First you will need to build the .apk file for your app and upload it to a release channel in the Meta Quest Developer Hub.

Note: You must be logged into the Developer Hub with the same ID used for the Meta Quest Developer Center.

To build your app’s .apk file and upload it to the Developer Center:

1. Navigate to File > Build Settings and select Build.
2. Select or create a folder to save your .apk file in, then click Save to save your project to the selected location.
3. Log into the Meta Quest Developer Hub with your developer account and select App Distribution.

MQDH App Distribution

4. Select your SharedSpaces app and click Upload under the Alpha release channel.
5. In the Upload Build window, use the Upload button to find and select your app’s .apk file.

MQDH Upload Build

6. Click through the next options and select Upload to upload the .apk to the Alpha release channel.
7. On the Meta Quest Developer Center, open your SharedSpaces app and select Distribution > Release Channels.
8. Select the Alpha release channel, then select Users.
9. Enter the emails associated with the Meta Quest accounts of the users that will test the showcase app.

Manage Channel Users

10. Select the I agree checkbox, then select Send Invite to send the app to your selected users.

The SharedSpaces showcase app will now be accessible on your Quest headset. Alternatively, you can directly upload your .apk file to your Quest headset by opening the Meta Quest Developer Hub app, connecting your headset to your computer, and dragging the file into the Meta Quest Developer Hub window.

Learn Mixed Reality Development through Discover Showcase

Unity

All-In-One VR

Quest


Discover highlights key features of the Meta Quest Mixed Reality APIs. Its main goal is to showcase how to achieve player colocation in a Mixed Reality experience. The code for this project can be found on Github.

Colocation is the ability of multiple users to occupy the same physical and virtual space in an MR experience, allowing them to connect with each other in real-time. Colocated users see and interact with the same digital objects in the digital world. To have a colocated experience, users’ real-world position and rotation must correspond to their virtual space in the same way for all users. Colocation is a powerful paradigm that enables users to collaborate, compete, and interact.

The key to giving people a frictionless shared MR experience is to enable users to quickly and simply start a colocated experience with others. Currently, manual steps are required to invite users to join and participate in a colocated experience. In most apps, users have to manually communicate a specific room code to their peers, so they can all join the same room. However, they won’t be able to interact with shared digital objects placed in a common physical space. Because of this, there’s little sense that users are actually present together. To counter this, Discover offers a colocation package that enables users to seamlessly interact with other nearby users, along with digital objects as if these were placed in the common physical room.

In the Discover project, colocation is used in the MRBike and DroneRage game experiences contained within Discover, allowing multiple players to compete or collaborate in various games and activities.

How the Colocation Package works in Discover

The Colocation package introduced in Discover enables you to build colocated experiences.

the discover showcase app cover image showing a room with various surfaces that can be utilized by the Scene API

To create a colocated experience, the virtual space must correspond with the real world position and orientation of all users in the same way. The Colocation package handles coordination with the spatial anchors and helps determine the users’ relative transformation against them. Once the Colocation package is integrated, you just hook into the users’ camera rig, the root VR GameObjects, and the networking layer. Discover then handles the colocation by placing players in the same space.

Assuming your app already handles networking logic, the Colocation package can help you transform your traditional multiplayer experience into a shared mixed reality experience. Discover is built this way, but your app can target any architecture you prefer, by reusing Discover’s packages and parts of its logic.

Colocation package uses the Shared Spatial Anchors API. If your goal is to share spatial anchors to orient users in the same space for a Mixed Reality experience, you can use this Unity Shared Spatial Anchors API directly. However, if your goal is to focus on colocation, the Colocation package can be a valuable tool for you.

Spatial anchors and the Colocation package overview

The Colocation package uses shared spatial anchors to bring multiple users together in a shared digital space, so they can experience things at the same time, in the same way, and in the same physical space. Unlike a normal (non-shared) spatial anchor, it isn’t intended to persist this behavior between sessions. The difference is that shared spatial anchors are mainly used for colocation, as in Discover, while normal spatial anchors are used for persisting objects in a real-life scene.

With Discover, this logic starts with the “host” player, who starts the gameplay session. You can also think of this player as a “master client” or a “server” player who hosts the session for all other players and has rights over the gameplay session. The host player sets up the scene and the colocated “client” players join it.

In this sense, the Colocation package handles shared spatial anchors so that their mechanics are behind the scenes for client players and only the host players are aware of them.

You can think of shared spatial anchors as the North Star. It’s the only point that all players’ headsets understand in common with reference to the physical world. When the first player creates a shared spatial anchor, it’s equivalent to telling the other players: “we will all use a star to help us coordinate”. In this analogy, this shared spatial anchor, along with many other spatial anchors, is visible to all players, just as the sky is full of stars.

So, the knowledge that this shared spatial anchor exists is handled by the Shared Spatial Anchors API. The API also guarantees that a shared spatial anchor has a unique ID for all users. If they get that ID, they will all mean the same shared spatial anchor.

All spatial anchors are given a UUID (universally unique identifier), which means no two different spatial anchors will have the same ID. If I give you the ID of my spatial anchor, it will only ever identify my spatial anchor. However, just because you have my anchor’s UUID, it doesn’t mean you can directly access my spatial anchor. Due to the private nature of your headset’s environment map, you can only access my spatial anchors if I explicitly share them with you. This happens through the Shared Spatial Anchors API. While the API only exposes anchor points, what it’s actually doing is uploading the entire environment map from my headset to the cloud, and allowing others to download the whole map. Once you have my map, I can give you the ID of the spatial anchor, and you can use that ID to find the same position in the same environment.
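For reference, sharing an anchor through the Shared Spatial Anchors API looks roughly like the following sketch: the anchor is saved to cloud storage and then shared with a specific space user. The signatures follow one version of the Meta XR Core SDK and may differ in yours, and the recipient ID is a placeholder.

    using System.Collections.Generic;
    using UnityEngine;

    public class AnchorSharingExample : MonoBehaviour
    {
        // Save an anchor to the cloud, then share it with one recipient.
        public void SaveAndShare(OVRSpatialAnchor anchor, ulong recipientSpaceUserId)
        {
            var saveOptions = new OVRSpatialAnchor.SaveOptions
            {
                Storage = OVRSpace.StorageLocation.Cloud // cloud storage is required for sharing
            };

            anchor.Save(saveOptions, (savedAnchor, success) =>
            {
                if (!success)
                {
                    Debug.LogError("Saving the anchor to the cloud failed.");
                    return;
                }

                var users = new List<OVRSpaceUser> { new OVRSpaceUser(recipientSpaceUserId) };
                savedAnchor.Share(users, (sharedAnchor, result) =>
                {
                    Debug.Log($"Share result: {result}, anchor UUID: {sharedAnchor.Uuid}");
                });
            });
        }
    }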

The first player must somehow transmit which of all the shared spatial anchors is the one that all peers must use, and that shared spatial anchor should be unique. In the star analogy, this means that the actual star must be commonly identifiable by all players. The transmission of the shared spatial anchor’s ID happens through a networking solution. Discover uses Photon Fusion. When a headset receives that ID, the shared spatial anchor in their room will be identified.

Assuming all players know which spatial anchor they must refer to, there is an extra step in the flow: all digital object transforms and player transforms must be expressed relative to that shared spatial anchor. Again, in the star analogy, if the first player sees a ship in front of them, then to transmit that ship’s position and rotation they must transmit the position and orientation of the ship relative to the North Star (that is, the relative transform to the shared spatial anchor). Similarly, to broadcast their own position to others, they must do so in reference to the North Star.

Therefore, this comprises a second, critical piece of data that must be transmitted continuously through a networking solution in a colocated experience: the relative transforms of all players’ camera rigs, or the relative transforms of the digital objects, with respect to the shared spatial anchor. Then, by inverting the transform, each player’s headset can place these objects in the common physical space.
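In Unity terms, the anchor-relative transform and its inverse can be computed directly from the anchor’s Transform. The following sketch illustrates the idea only; the type and method names are not taken from Discover.

    using UnityEngine;

    public static class AnchorRelativeTransforms
    {
        // Convert a world-space pose into a pose relative to the shared anchor
        // before sending it over the network.
        public static Pose ToAnchorSpace(Transform anchor, Vector3 worldPosition, Quaternion worldRotation)
        {
            return new Pose(
                anchor.InverseTransformPoint(worldPosition),
                Quaternion.Inverse(anchor.rotation) * worldRotation);
        }

        // On the receiving headset, rebuild the world-space pose from the
        // anchor-relative pose that arrived over the network.
        public static Pose ToWorldSpace(Transform anchor, Pose anchorRelative)
        {
            return new Pose(
                anchor.TransformPoint(anchorRelative.position),
                anchor.rotation * anchorRelative.rotation);
        }
    }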

The Colocation package handles the above flow and simplifies its complexity for developers.

Discover’s DroneRage

the drone rage app image featuring three drones with weapons primed and the app title drone rage beneath them

DroneRage is a subapp within the Discover showcase. This game modifies the host player’s scene, transforming their room into a high-octane battlefield and throwing them into an electrifying fight for survival against relentless drones.

You can test the game out on App Lab - Discover.

Gif of DroneRage being played where the player is shooting at drones in a Mixed Reality setting

DroneRage is a colocated experience built on top of Discover. It can support up to four colocated players shooting at the drones in the same physical room.

The DroneRage app doesn’t call the colocation APIs explicitly. Instead, once users are colocated, everything is networked in world space, as you normally would in any other game. For example, the enemy drones flying around aren’t aware of any spatial anchors in use; they are simply networked in world space like in a standard multiplayer game. The shared spatial anchor is used to orient the client players, so everything uses a common world space.

User flow for starting the colocated experience

Discover uses a room code system where the host user creates a room and gives it an explicit code (or is assigned a random 6-digit code). To join a room, a client player selects Join from the Menu and enters that code. It works this way for both remote players and colocated players.

The menu is essentially about deciding whether to “join” or “join remote”. If the player joins and the app can’t find a shared spatial anchor, it joins them remotely.

Set up Colocation Package

Unity

All-In-One VR

Quest


When the scene sets up, players are synchronized. Everything else is normal Unity networking.

The most important part of a colocated experience is ensuring the client player is in the right place relative to the scene transmitted by the host player.

To do this, the host player accesses the Scene API and spawns all the objects in the scene. When the client player then joins that session, they can start interacting with these objects. The rest of this logic is a matter of placing the client players in the correct spot relative to the host player, and that’s where the shared spatial anchors come in. Once these are set up, everything else falls into place and it all becomes a networked scene.

From the client player’s perspective, the host player sends them two things:

1. An object (such as a Unity transform) that represents the host relative to the spatial anchor. In Discover, this is sent through Photon Fusion.
2. An anchor ID that the host player is using through the Shared Spatial Anchors API to get its transform relative to the headset.

The Colocation package then uses these to reorient every player’s rig to the transform that the host has sent to the client players. This means that the server first sends a transform and the anchor ID. Then, by using that ID, there is another transform that represents where the player actually is in the first player’s space. Under the hood, the Spatial Anchors API returns the transform relative to the headset.

In this way, there is an absolute coordinate system which is the common world space used by the host and client players. Because of this, when you replicate the digital objects in an app by using the Colocation package, you don’t need to align the objects to the player as well. You are simply moving the root of the player’s camera rig.

Outline of how the Colocation package is set up

This process is demonstrated by the ColocationDriverNetObj class in /Assets/Discover/Scripts/Colocation/ColocationDriverNetObj.cs. Here is how the flow works in this class.

Set up objects

In the UniTask SetupForColocation() function, the class sets up all the objects in the scene, as seen in the following code snippet:

         m_colocationLauncher.Init(
         m_oculusUser?.ID ?? default,
         m_headsetGuid,
         NetworkAdapter.NetworkData,
         NetworkAdapter.NetworkMessenger,
         sharedAnchorManager,
         m_alignmentAnchorManager,
         overrideEventCode
     );

Initialize colocation launcher

After the objects are set up, it initializes the colocation launcher:

         m_colocationLauncher.RegisterOnAfterColocationReady(OnAfterColocationReady);
     if (HasStateAuthority)
     {
         m_colocationLauncher.CreateColocatedSpace();
     }
     else
     {
         // Don't try to colocate if we join remotely
         if (SkipColocation)
         {
             OnColocationCompletedCallback?.Invoke(false);
         }
         else
         {
             m_colocationLauncher.CreateAnchorIfColocationFailed = false;
             m_colocationLauncher.OnAutoColocationFailed += OnColocationFailed;
             m_colocationLauncher.ColocateAutomatically();
         }
     }

This registers an OnAfterColocationReady callback. It also checks whether the connecting device is the host device via HasStateAuthority, and if so, it creates the colocated space with:

m_colocationLauncher.CreateColocatedSpace();

Otherwise, it calls the ColocateAutomatically method.

This ensures the spatial anchors are set up on both the Oculus cloud side (with the Shared Spatial Anchor API) and the Unity network objects side (with a networking solution like Photon Fusion).

When the client player connects, the class calls the ColocateAutomatically method. This gets the information from the host, which then sets up everything to ensure players are oriented correctly.

Deep Dive into Discover's Colocation Package

Unity

All-In-One VR

Quest


In the Discover GitHub repo, the ColocationDriverNetObj class provides the backbone of the flow and is defined in /Assets/Discover/Scripts/Colocation/ColocationDriverNetObj.cs.

The following dissects the key aspects of this flow.

The async Init() function represents how the flow starts.

    private async void Init()
    {
        m_ovrCameraRigTransform = FindObjectOfType<OVRCameraRig>().transform;
        m_oculusUser = await OculusPlatformUtils.GetLoggedInUser();
        m_headsetGuid = Guid.NewGuid();
        await SetupForColocation();
    }

This function is common to both client players and the host player. The Init function invokes the SetupForColocation() member function of the same class.

    private async UniTask SetupForColocation()
    {
        if (HasStateAuthority)
        {
            Debug.Log("SetUpAndStartColocation for host");
            _ = Runner.Spawn(m_networkDataPrefab).GetComponent<PhotonNetworkData>();
            _ = Runner.Spawn(m_networkDictionaryPrefab).GetComponent<PhotonPlayerIDDictionary>();
            _ = Runner.Spawn(m_networkMessengerPrefab).GetComponent<PhotonNetworkMessenger>();
        }

...

SetupForColocation() is also an async function. It spawns all the necessary Photon pieces if it is invoked by the host. (This status is checked with HasStateAuthority.) These are different prefabs, and they represent the interfaces provided by the Colocation APIs and the Colocation package. The SetupForColocation() function spawns those classes and forwards the colocation data through Photon Fusion.

Spawned data

In the Colocation package, the INetworkMessenger interface is defined in /Packages/com.meta.xr.sdk.colocation/NetworkUtils/NetworkMessenger/INetworkMessenger.cs. This interface is how Discover gets messages to and from the network.

namespace ColocationPackage {
  public interface INetworkMessenger {
    public void SendMessageUsingOculusId(byte eventCode, ulong oculusId, object messageData = null);
    public void SendMessageUsingNetworkId(byte eventCode, int networkId, object messageData = null);
    public void SendMessageUsingHeadsetId(byte eventCode, Guid headsetId, object messageData = null);
    public void SendMessageToAll(byte eventCode, object messageData = null);
    public void RegisterEventCallback(byte eventCode, Action<object> callback);
    public void UnregisterEventCallback(byte eventCode);
  }
}

The Colocation package also defines the INetworkData interface, which exposes the data that passes through the networking system. It’s defined in /Packages/com.meta.xr.sdk.colocation/NetworkUtils/NetworkData/INetworkData.cs.

For example, the first class members represent and retrieve information about players:

public interface INetworkData {
  public void AddPlayer(Player player);
  public void RemovePlayer(Player player);
  public Player? GetPlayer(ulong oculusId);
  public List<Player> GetAllPlayers();

  public Player? GetFirstPlayerInColocationGroup(uint colocationGroup);

  public void AddAnchor(Anchor anchor);
  public void RemoveAnchor(Anchor anchor);

  public Anchor? GetAnchor(FixedString64Bytes uuid);
  public List<Anchor> GetAllAnchors();

  public uint GetColocationGroupCount();

  public void IncrementColocationGroupCount();
}

Photon Fusion attributes to RPCs

The PhotonNetworkMessenger class is defined in /Assets/Discover/Scripts/Colocation/PhotonNetworkMessenger.cs. It contains the FindRPCToCallServerRPC member function, which relates to the host player (also called the server here).

    [Rpc(RpcSources.All, RpcTargets.StateAuthority)]


    private void FindRPCToCallServerRPC(byte eventCode, int playerId, ulong oculusIdAnchorOwner,
        ulong oculusIdAnchorRequester, Guid headsetIdRequester, string uuid,
        NetworkBool anchorFlowSucceeded, RpcInfo info = default)
    {
        Debug.Log("FindRPCToCallServerRPC");
        PlayerRef playerRef = playerId;
        FindRPCToCallClientRPC(playerRef, eventCode, oculusIdAnchorOwner, oculusIdAnchorRequester, headsetIdRequester, uuid, anchorFlowSucceeded);
    }


    [Rpc(RpcSources.All, RpcTargets.StateAuthority)]


    private void FindRPCToCallServerRPC(byte eventCode, int playerId)
    {
        Debug.Log("FindRPCToCallServerRPC: Null");
        PlayerRef playerRef = playerId;
        FindRPCToCallClientRPC(playerRef, eventCode);
    }

There is also the corresponding FindRPCToCallClientRPC function for client players:

    [Rpc(RpcSources.All, RpcTargets.All)]


    private void FindRPCToCallClientRPC(
        [RpcTarget] PlayerRef player,
        byte eventCode,
        ulong oculusIdAnchorOwner, ulong oculusIdAnchorRequester, Guid headsetIdRequester,
        string uuid, NetworkBool anchorFlowSucceeded)
    {
        Debug.Log("FindRPCToCallClientRPC");
        var data = new ShareAndLocalizeParams(oculusIdAnchorOwner, oculusIdAnchorRequester, headsetIdRequester, uuid)
        {
            anchorFlowSucceeded = anchorFlowSucceeded
        };
        m_callbackDictionary[eventCode](data);
    }

    [Rpc(RpcSources.All, RpcTargets.All)]


    private void FindRPCToCallClientRPC(


        [RpcTarget] PlayerRef player,
        byte eventCode
    )
    {
        Debug.Log("FindRPCToCallClientRPC: null");
        m_callbackDictionary[eventCode](null);
    }

These methods are all RPCs declared with Photon Fusion attributes; which overload is used depends on the message being sent.

Server and client messages

The INetworkMessenger interface exposes the SendMessageUsingOculusId function. This is a member function of the PhotonNetworkMessenger class, as defined in /Assets/Discover/Scripts/Colocation/PhotonNetworkMessenger.cs. It uses the PhotonPlayerIDDictionary player ID dictionary class to map between IDs.

When using the SendMessageUsingOculusId function, the following line translates the Oculus ID into the corresponding Photon Fusion network ID:

var networkId = (int)m_idDictionary.GetNetworkId(oculusId);

Using this network ID, Discover calls the appropriate RPC with the message data object (called messageData):

        if (messageData != null)
        {
            var data = (ShareAndLocalizeParams)messageData;
            FindRPCToCallServerRPC(eventCode, networkId, data.oculusIdAnchorOwner, data.oculusIdAnchorRequester, data.headsetIdAnchorRequester, data.uuid.ToString(), data.anchorFlowSucceeded);
        }

This forwards all of the anchor-sharing information to the correct peer over the network.
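As a hedged illustration of how a caller might use this messenger, the sketch below registers a callback for an event code and then sends a request to the host identified by an Oculus ID. The event code and payload here are made up for illustration and are not Discover's actual values.

    using UnityEngine;

    public class MessengerUsageExample
    {
        private const byte ShareAnchorEventCode = 42; // illustrative event code, not Discover's

        public void WireUp(ColocationPackage.INetworkMessenger messenger, ulong hostOculusId, object payload)
        {
            // React whenever the peer answers with data for this event code.
            messenger.RegisterEventCallback(ShareAnchorEventCode, data =>
            {
                Debug.Log($"Received message data: {data}");
            });

            // Ask the host (identified by Oculus ID) to handle the request, passing the payload along.
            messenger.SendMessageUsingOculusId(ShareAnchorEventCode, hostOculusId, payload);
        }
    }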

ColocationDriverNetObj initialization walkthrough

Focusing back on the ColocationDriverNetObj class discussed above, here is a more in-depth look at the process followed in the class.

It sets up the following objects:

m_ovrCameraRigTransform
m_oculusUser
m_headsetGuid

It adds the current Oculus user ID to the dictionary:

AddToIdDictionary(m_oculusUser?.ID ?? default, Runner.LocalPlayer.PlayerId, m_headsetGuid);

This is run by either the client or the host. It gets the Oculus user ID and associates it with the local player ID, as well as with a headset Guid, which is also used for referencing the Oculus player.

It initializes the Messenger class with:

messenger.Init(PlayerIDDictionary);

It then starts the colocation flow:

1. It creates an alignment anchor manager called m_alignmentAnchorManager with the following:

m_alignmentAnchorManager = Instantiate(m_alignmentAnchorManagerPrefab).GetComponent<AlignmentAnchorManager>();

2. It then creates a colocation launcher as follows:

m_colocationLauncher = new ColocationLauncher();

3. The launcher is initialized by invoking the Init function of the ColocationLauncher class from the Colocation package:

           m_colocationLauncher.Init(
            m_oculusUser?.ID ?? default,
            m_headsetGuid,
            NetworkAdapter.NetworkData,
            NetworkAdapter.NetworkMessenger,
            sharedAnchorManager,
            m_alignmentAnchorManager,
            overrideEventCode
        );

Using the AlignmentAnchorManager class

The AlignmentAnchorManager class plays a significant role in the overall flow because it handles player alignment to the spatial anchor. It is defined in /Packages/com.meta.xr.sdk.colocation/Anchors/AlignmentAnchorManager.cs.

The AlignmentAnchorManager class contains the AlignPlayerToAnchor function, as shown here:

public void AlignPlayerToAnchor(OVRSpatialAnchor anchor) {
  Debug.Log("AlignmentAnchorManager: Called AlignPlayerToAnchor");
  if (_alignmentCoroutine != null) {
    StopCoroutine(_alignmentCoroutine);
    _alignmentCoroutine = null;
  }


  _alignmentCoroutine = StartCoroutine(AlignmentCoroutine(anchor, 2));
}

The AlignPlayerToAnchor function is invoked with the spatial anchor selected by the Colocation package, and it aligns the player to that anchor using the Spatial Anchors API.

ColocationLauncher class

Back in the Colocation package is the ColocationLauncher class, defined in /Packages/com.meta.xr.sdk.colocation/ColocationLauncher/ColocationLauncher.cs.

ExecuteAction function

This class contains the ExecuteAction function:

private void ExecuteAction(ColocationMethod colocationMethod) {


  switch (colocationMethod) {
    case ColocationMethod.ColocateAutomatically:
      ColocateAutomaticallyInternal();
      break;
    case ColocationMethod.ColocateByPlayerWithOculusId:
      ColocateByPlayerWithOculusIdInternal(_oculusIdToColocateTo);
      break;
    case ColocationMethod.CreateColocatedSpace:
      CreateColocatedSpaceInternal();
      break;
    default:
      Debug.LogError($"ColocationLauncher: Unknown action: {colocationMethod}");
      break;
  }
}

ColocateAutomaticallyInternal function

The ExecuteAction function can, in turn, call the ColocateAutomaticallyInternal() function. This is where the shared spatial anchors come into play:

private async void ColocateAutomaticallyInternal() {


  Debug.Log("ColocationLauncher: Called Init Anchor Flow");
  var successfullyAlignedToAnchor = false;

  List<Anchor> alignmentAnchors = GetAllAlignmentAnchors();
  foreach (var anchor in alignmentAnchors)
    if (await AttemptToShareAndLocalizeToAnchor(anchor)) {
      successfullyAlignedToAnchor = true;
      Debug.Log($"ColocationLauncher: successfully aligned to anchor with id: {anchor.uuid}");
      _networkData.AddPlayer(new Player(_myOculusId, anchor.colocationGroupId));
      AlignPlayerToAnchor();
      break;
    }


  if (!successfullyAlignedToAnchor) {
    if (CreateAnchorIfColocationFailed)
    {
      CreateNewColocatedSpace().Forget();
    }
    else
    {
      OnAutoColocationFailed?.Invoke();
    }
  }
}

This gets all the alignment anchors by calling the List<Anchor> GetAllAlignmentAnchors() function, as shown here:

private List<Anchor> GetAllAlignmentAnchors() {


  var alignmentAnchors = new List<Anchor>();
  List<Anchor> allAnchors = _networkData.GetAllAnchors();
  foreach (var anchor in allAnchors)
    if (anchor.isAlignmentAnchor) {
      alignmentAnchors.Add(anchor);
    }


  return alignmentAnchors;
}

The important line here is:

List<Anchor> allAnchors = _networkData.GetAllAnchors();

This line, in turn, calls GetAllAnchors() and gets the list of anchors.

The GetAllAnchors function

The GetAllAnchors function retrieves data from the AnchorList and is defined in /Assets/Discover/Scripts/Colocation/PhotonNetworkData.cs as part of the PhotonNetworkData class:

    public List<Anchor> GetAllAnchors()


    {
        var anchors = new List<Anchor>();
        foreach (var photonAnchor in AnchorList) anchors.Add(photonAnchor.Anchor);


        return anchors;
    }

This function retrieves the data from the AnchorList, already populated by previously calling the AddAnchorRpc function:

    private void AddAnchorRpc(PhotonNetAnchor anchor)
    {
        AddNetAnchor(anchor);
    }

Which in turn calls the AddNetAnchor function:

   private void AddNetAnchor(PhotonNetAnchor anchor)
    {
        if (HasStateAuthority)
        {
            AnchorList.Add(anchor);
        }
        else
        {
            AddAnchorRpc(anchor);
        }
    }

This produces all the spatial anchors in a list.

AttemptToShareAndLocalizeToAnchor function

The AttemptToShareAndLocalizeToAnchor function is defined in /Packages/com.meta.xr.sdk.colocation/ColocationLauncher/ColocationLauncher.cs of the Colocation package and is called from within the ColocateAutomaticallyInternal() definition. The following part is critical:

  foreach (var anchor in alignmentAnchors)


    if (await AttemptToShareAndLocalizeToAnchor(anchor)) {
      successfullyAlignedToAnchor = true;
      Debug.Log($"ColocationLauncher: successfully aligned to anchor with id: {anchor.uuid}");
      _networkData.AddPlayer(new Player(_myOculusId, anchor.colocationGroupId));
      AlignPlayerToAnchor();
      break;
    }

... The invoked AttemptToShareAndLocalizeToAnchor function is defined as:

private UniTask<bool> AttemptToShareAndLocalizeToAnchor(Anchor anchor) {


  Debug.Log(
    $"ColocationLauncher: Called AttemptToShareAndLocalizeToAnchor with id: {anchor.uuid} and oculusId: {_myOculusId}"
  );
  _alignToAnchorTask = new UniTaskCompletionSource<bool>();
  var anchorOwner = anchor.ownerOculusId;
  if (anchorOwner == _myOculusId)
  {
    // In the case a player returns and wants to localize to an anchor they created
    // we simply localize to that anchor
    var sharedAnchorId = new Guid(anchor.uuid.ToString());
    LocalizeAnchor(sharedAnchorId);
    return _alignToAnchorTask.Task;
  }

Which, in turn, calls the LocalizeAnchor function defined as:

private async void LocalizeAnchor(Guid anchorToLocalize) {


  Debug.Log($"ColocationLauncher: Localize Anchor Called id: {_myOculusId}");
  IReadOnlyList<OVRSpatialAnchor> sharedAnchors = null;
  Guid[] anchorIds = {anchorToLocalize};
  sharedAnchors = await _sharedAnchorManager.RetrieveAnchors(anchorIds);


  if (sharedAnchors == null || sharedAnchors.Count == 0) {
    Debug.LogError("ColocationLauncher: Retrieving Anchors Failed");
    _alignToAnchorTask.TrySetResult(false);
  } else {
    Debug.Log("ColocationLauncher: Localizing Anchors is Successful");
    // For now we will only take the first anchor that was shared
    // This should be refactored later to be more generic or for cases where we have multiple alignment anchors if that case comes up
    _myAlignmentAnchor = sharedAnchors[0];
    _alignToAnchorTask.TrySetResult(true);
  }
}

This gets a list of all the anchors with this line:

sharedAnchors = await _sharedAnchorManager.RetrieveAnchors(anchorIds);

SharedAnchorManager class

The SharedAnchorManager class retrieves the anchors. This class is defined in /Packages/com.meta.xr.sdk.colocation/Anchors/SharedAnchorManager.cs in the Colocation package.

public async UniTask<IReadOnlyList<OVRSpatialAnchor>> RetrieveAnchors(Guid[] anchorIds) {


  Assert.IsTrue(anchorIds.Length <= OVRPlugin.SpaceFilterInfoIdsMaxSize, "SpaceFilterInfoIdsMaxSize exceeded.");


  UniTaskCompletionSource<IReadOnlyList<OVRSpatialAnchor>> utcs = new();
  Debug.Log($"{nameof(SharedAnchorManager)}: Querying anchors: {string.Join(", ", anchorIds)}");


  OVRSpatialAnchor.LoadUnboundAnchors(
    new OVRSpatialAnchor.LoadOptions {
      StorageLocation = OVRSpace.StorageLocation.Cloud,
      Timeout = 0,
      Uuids = anchorIds
    },
    async unboundAnchors => {
      if (unboundAnchors == null) {
        Debug.LogError(
          $"{nameof(SharedAnchorManager)}: Failed to query anchors - {nameof(OVRSpatialAnchor.LoadUnboundAnchors)} returned null."
        );
        utcs.TrySetResult(null);
        return;
      }

      if (unboundAnchors.Length != anchorIds.Length) {
        Debug.LogError(
          $"{nameof(SharedAnchorManager)}: {anchorIds.Length - unboundAnchors.Length}/{anchorIds.Length} anchors failed to relocalize."
        );
      }

      var createdAnchors = new List<OVRSpatialAnchor>();
      var createTasks = new List<UniTask>();
      // Bind anchors
      foreach (var unboundAnchor in unboundAnchors) {
        var anchor = InstantiateAnchor();
        try {
          unboundAnchor.BindTo(anchor);
          _sharedAnchors.Add(anchor);
          createdAnchors.Add(anchor);
          createTasks.Add(UniTask.WaitWhile(() => anchor.PendingCreation, PlayerLoopTiming.PreUpdate));
        } catch {
          Object.Destroy(anchor);
          throw;
        }
      }

      // Wait for anchors to be created
      await UniTask.WhenAll(createTasks);


      utcs.TrySetResult(createdAnchors);
    }
  );

  return await utcs.Task;
}

The call to OVRSpatialAnchor.LoadUnboundAnchors() loads any spatial anchors given by this list of IDs:

Uuids = anchorIds

The host player sends these IDs as a list, and the clients can retrieve them.

AlignmentAnchorManager class deep dive

The Colocation package picks the first spatial anchor and uses it to do the alignment. This assumes that any given anchor is visible to both players (host and client).

The AlignmentAnchorManager class is defined in /Packages/com.meta.xr.sdk.colocation/Anchors/AlignmentAnchorManager.cs of the Colocation package. Once the Colocation package retrieves the shared spatial anchors to use for alignment, it calls this alignment coroutine:

private IEnumerator AlignmentCoroutine(OVRSpatialAnchor anchor, int alignmentCount) {


  Debug.Log("AlignmentAnchorManager: called AlignmentCoroutine");


  while (alignmentCount > 0) {


    if (_anchorToAlignTo != null) {


      _cameraRigTransform.position = Vector3.zero;
      _cameraRigTransform.eulerAngles = Vector3.zero;


      yield return null;
    }


    var anchorTransform = anchor.transform;
    if (_cameraRigTransform != null) {
      Debug.Log("AlignmentAnchorManager: CameraRigTransform is valid");
      _cameraRigTransform.position = anchorTransform.InverseTransformPoint(Vector3.zero);
      _cameraRigTransform.eulerAngles = new Vector3(0, -anchorTransform.eulerAngles.y, 0);
    }


    if (_playerHandsTransform != null) {
      _playerHandsTransform.localPosition = -_cameraRigTransform.position;
      _playerHandsTransform.localEulerAngles = -_cameraRigTransform.eulerAngles;
    }


    _anchorToAlignTo = anchor;
    alignmentCount--;
    yield return new WaitForEndOfFrame();
  }


  Debug.Log("AlignmentAnchorManager: Finished Alignment!");
  OnAfterAlignment.Invoke();
}

This moves the player so that, wherever the shared spatial anchor object is, the player ends up positioned relative to that object in the same way that the player in OVR space is positioned relative to the spatial anchor.


Scene Integration in DroneRage

Unity

Quest


DroneRage has drone ships coming in from above, from behind the walls, and through a hole in the player’s ceiling. The app needs to know where these physical walls are. Both the enemies and the lasers interact with the walls and any other scene objects in the physical environment. So, if you or an enemy shoots your desk, you see bullet hole decals, and if one of the drones crashes over a desk, it falls and hits the desk.

This logic uses Discover’s integration of the Scene API. The Scene API is used to spawn Photon network objects, and then everything is handled completely in the Discover level, like tracking and synchronizing the objects.

All these objects are also already set up at the Discover level to do everything necessary for the full game experiences contained within Discover. Theoretically, you could swap these objects out depending on the app selection. For example, you don’t need the Interaction SDK’s RayInteractables in DroneRage, because you are not pointing at the drones, but Discover keeps all of these in one combined object because that’s easier than creating and de-spawning objects.

Adding scene to DroneRage

The SceneElement class can be found in /Assets/Discover/Scripts/SceneElement.cs and SceneElementsManager in /Assets/Discover/Scripts/SceneElementsManager.cs.

To enable DroneRage access to Scene elements, the objects first need to be registered with a Scene Elements Manager in the Awake function of the SceneElement class:

    private void Awake()
    {
        m_highlightObject.SetActive(false);
        SceneElementsManager.Instance.RegisterElement(this);
    }

Let’s take a look at Unity Editor to better understand how this Scene API integration works. This is the DiscoverAppController prefab.

the discoverappcontroller prefab in unity is displayed with several fields like scene manager and colocation prefab filled in

In its hierarchy, it has an OVRSceneManager which is part of the built-in Meta XR Core SDK.

the hierarchy view in unity editor has the OVRSceneManager highlighted

OVRSceneManager uses the Scene API from the Unity Editor’s perspective.

the OVRSceneManager is selected in the Unity Editor's Inspector

When a ceiling object is detected by the Scene API, it spawns the CeilingPhotonInstantiator object.

The CeilingPhotonInstantiator object selected in the Unity Editor's Inspector

Although indirect, it has an OVR Scene Anchor component, which OVRSceneManager requires so that it can be tracked. It also uses the Photon Instantiator component, which is part of Discover and instantiates this network object.

Unity Editor inspector with the Photon Instantiator script selected

The reason for this is that it instantiates network objects through Photon Fusion, rather than using Object.Instantiate. Then there is the SceneCeilingPhoton class, which is a Photon Fusion network object.

Unity Editor with the SceneCeilingPhoton object selected

This has the Scene Element component, a Discover class, which means the Scene Elements Manager tracks it automatically. Since it’s a Photon Fusion network object, the colocated player automatically spawns it as well. Nothing here is special in terms of the Scene API or colocation; it behaves like any other networked GameObject.
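For reference, a networked spawn through Photon Fusion looks roughly like the sketch below, in contrast to a local Object.Instantiate call. The runner and prefab fields are assumptions for illustration, not Discover's actual members.

    using Fusion;
    using UnityEngine;

    public class NetworkedSpawnExample : MonoBehaviour
    {
        [SerializeField] private NetworkRunner m_runner;       // assumed reference to the active Fusion runner
        [SerializeField] private NetworkObject m_scenePrefab;  // assumed NetworkObject prefab (e.g., a scene element)

        public void SpawnSceneElement(Vector3 position, Quaternion rotation)
        {
            // Spawning through the runner replicates the object to every connected client,
            // whereas Object.Instantiate would only create a local GameObject.
            m_runner.Spawn(m_scenePrefab, position, rotation);
        }
    }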

Scene Elements

In the SceneElementsManager class, in /Assets/Discover/Scripts/SceneElementsManager.cs, there is this line:

    public IEnumerable<SceneElement> GetElementsByLabel(string label) =>
        Instance.SceneElements.Where(e => e.ContainsLabel(label));

That label is a string (such as “CEILING”). For example, you can see in the EnvironmentSwapper class of DroneRage, in /Discover/DroneRage/Scripts/Scene/EnvironmentSwapper.cs, how it changes the materials of the walls:

    private async void SwapWalls(bool toAlt)
    {
        await UniTask.WaitUntil(() => SceneElementsManager.Instance.AreAllElementsSpawned());
        foreach (var wall in SceneElementsManager.Instance.GetElementsByLabel(OVRSceneManager.Classification.WallFace))
        {
            if (toAlt)
            {
                m_originalWallMaterials ??= wall.Renderer.sharedMaterials;
                wall.Renderer.sharedMaterials = m_altWallMaterials;
            }
            else
            {
                wall.Renderer.sharedMaterials = m_originalWallMaterials;
            }
        }
    }

By using SceneElementsManager.Instance.GetElementsByLabel(OVRSceneManager.Classification.WallFace), it gets the elements labeled WallFace and then swaps the materials on those walls.

Note: CEILING is a string and not an enum. This is because it’s part of the Scene API, which is frequently updated. Having that as a string helps with backwards compatibility.

A similar flow is followed for the ceiling swap:

    private async void SwapCeiling(bool toAlt)
    {
        await UniTask.WaitUntil(() => SceneElementsManager.Instance.AreAllElementsSpawned());
        var ceiling = SceneElementsManager.Instance.
            GetElementsByLabel(OVRSceneManager.Classification.Ceiling).
            FirstOrDefault()?.
            Renderer;
        if (ceiling == null)
            return;


        var block = new MaterialPropertyBlock();
        block.SetFloat(s_visibility, toAlt ? 0 : 1);
        ceiling.SetPropertyBlock(block);
    }

Enemies spawn

In the Spawner class, as defined in /Assets/Discover/DroneRage/Scripts/Enemies/Spawner.cs, enemies are initially spawned outside the room, behind the walls. To do that, the app first needs to figure out the extent of the room.

This is how the room extent is calculated:

    private void CalculateRoomExtents()


    {
        var anchors = SceneElementsManager.Instance.GetElementsByLabel(OVRSceneManager.Classification.WallFace).ToList();


        if (anchors.Any())
        {
            m_roomMinExtent = anchors[0].transform.position;
            m_roomMaxExtent = anchors[0].transform.position;
            m_roomSize = Vector3.zero;
        }


        foreach (var anchor in anchors)
        {
            var anchorTransform = anchor.transform;
            var position = anchorTransform.position;
            var scale = anchorTransform.lossyScale;
            var right = anchorTransform.right * scale.x * 0.5f;
            var up = anchorTransform.up * scale.y * 0.5f;
            var wallPoints = new Vector3[]
            {
                position - right - up,
                position - right + up,
                position + right - up,
                position + right + up,
            };


            foreach (var wp in wallPoints)
            {
                Debug.Log($"wall point {wp}");


                m_roomMinExtent = Vector3.Min(m_roomMinExtent, wp);
                m_roomMaxExtent = Vector3.Max(m_roomMaxExtent, wp);

                m_roomSize.y = Mathf.Max(m_roomSize.y, transform.lossyScale.y);
            }
        }


        m_roomSize = m_roomMaxExtent - m_roomMinExtent;
        Debug.Log("Room Size: " + m_roomSize + " Room Min Extents: " + m_roomMinExtent + " Room Max Extents: " + m_roomMaxExtent);
    }

This first gets the list of all wall faces with the following line:

        var anchors = SceneElementsManager.Instance.GetElementsByLabel(OVRSceneManager.Classification.WallFace).ToList();

Then, it creates a min and max extent.

                m_roomMinExtent = Vector3.Min(m_roomMinExtent, wp);
                m_roomMaxExtent = Vector3.Max(m_roomMaxExtent, wp);

This essentially computes and stores a bounding box that represents the user’s real-life room. The following logic then finds a point that is outside that bounding box.

    public Vector3 GetClosestSpawnPoint(Vector3 position)


    {
        var spawnOffset = position - 0.5f * (m_roomMinExtent + m_roomMaxExtent);
        spawnOffset.y = 0f;
        spawnOffset = spawnOffset.normalized;


        if (spawnOffset.sqrMagnitude == 0f)
        {
            spawnOffset = Vector3.forward;
        }


        return 2f * (RoomSize.magnitude + 1f) * spawnOffset + new Vector3(0f, m_roomMaxExtent.y - 1f, 0f);
    }

The Scene API is used to spawn all these Photon network objects, and then everything, like tracking and synchronizing these objects, is handled completely at the Discover level.

User Remember to only reply "Ok" until I tell you that we have finished with the database we are creating for this GPT. ChatGPT Ok.

User Passthrough API in DroneRage Unity

Quest

The Passthrough API provides a real-time and perceptually comfortable 3D visualization of the physical world in the Meta Quest headsets. It allows developers to integrate the passthrough visualization with their virtual experiences.

Setup

In Discover, the OVRPassthroughLayer and OVRManager scripts are attached to the OVRCameraRig GameObject. After selecting OVRCameraRig, under OVRManager, two Passthrough settings are updated.

passthrough support panel in Unity's Editor's quest features

The Passthrough Support level is set to Required because the DroneRage app requires Passthrough.

Under Insight Passthrough in the Hierarchy, Enable Passthrough is selected so that passthrough is initialized during the app setup.
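
For reference, the same behavior can also be driven from code. The sketch below is a minimal, illustrative example (not the Discover implementation), assuming the Meta XR Core SDK's OVRManager and OVRPassthroughLayer components are present on the camera rig:

    using UnityEngine;

    public class PassthroughBootstrap : MonoBehaviour
    {
        // Reference to the OVRPassthroughLayer attached to the OVRCameraRig.
        [SerializeField] private OVRPassthroughLayer m_passthroughLayer;

        private void Start()
        {
            // Mirrors the "Enable Passthrough" checkbox on OVRManager.
            if (OVRManager.instance != null)
            {
                OVRManager.instance.isInsightPassthroughEnabled = true;
            }

            // Make sure the passthrough layer itself is active.
            if (m_passthroughLayer != null)
            {
                m_passthroughLayer.enabled = true;
            }
        }
    }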

Appearance & Shaders

DroneRage uses a few shaders for rendering the scene, which can be found in the folder /Assets/Discover/DroneRage/Shaders/.

These shaders are used after the player selects an app from the menu and DroneRage is started.

Unity editor's network application manager with alt wall materials highlighted

The NetworkApplicationManager manages opening and closing applications. The EnvironmentSwapper component attached to it swaps the walls and ceilings to their alternate DroneRage appearance when that application is opened.
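
As an illustration of that kind of swap, the sketch below toggles wall and ceiling materials when an application opens or closes. The class, field, and method names here are hypothetical; this is not the actual EnvironmentSwapper implementation.

    using System.Collections.Generic;
    using UnityEngine;

    public class WallMaterialSwapper : MonoBehaviour
    {
        [SerializeField] private Material m_defaultWallMaterial;    // normal Discover look
        [SerializeField] private Material m_alternateWallMaterial;  // DroneRage look
        [SerializeField] private List<Renderer> m_wallAndCeilingRenderers = new List<Renderer>();

        // Called when the DroneRage app is opened (true) or closed (false).
        public void SetAlternateAppearance(bool useAlternate)
        {
            var material = useAlternate ? m_alternateWallMaterial : m_defaultWallMaterial;
            foreach (var wallRenderer in m_wallAndCeilingRenderers)
            {
                wallRenderer.sharedMaterial = material;
            }
        }
    }

Interaction SDK integration in DroneRage Unity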

Quest

The Interaction SDK is used to add specific interactions like pokes or grabs to Meta Quest inputs like controllers or hands.

In Discover, the AppInteractionController script is added to each user’s camera rig. This class initializes interactors for Grab, Pinch, and Release from the Interaction SDK.

In DroneRage, the Interaction SDK is used in two major areas: the laser gun trigger pull and menus.

Grabbing the gun

The guns are grabbable objects that are oriented into the hands by the Interaction SDK.

As a design consideration, for hand tracking, the guns could be automatically attached to the user’s hands. This would mean, however, that trying to pull the trigger would open a menu instead of triggering a shot.

To work around this and get the trigger working, a currentGrabInteractor is defined based on whether the user is using hands or not. This logic is defined in WeaponInputHandler class in /Assets/Discover/DroneRage/Scripts/Weapons/WeaponInputHandler.cs:

    private void UpdateInteractor(bool useHands)


    {
        CleanupInteractor();
        m_usingHands = useHands;
        m_currentGrabInteractor = m_usingHands ? m_handGrabInteractor : m_controllerGrabInteractor;
        m_currentActiveState = m_usingHands ? m_handActiveState : m_controllerActiveState;
        var interactable = m_weapon.HandGrabInteractable;
        if (interactable != null && m_currentGrabInteractor != null && m_currentActiveState.Active)
        {
            Debug.Log($"Selecting {interactable} with {m_currentGrabInteractor}", this);
            m_currentGrabInteractor.ForceSelect(interactable);
        }
        m_isCurrentInteractorActive = m_currentActiveState?.Active ?? false;
    }

The following line gets the current grab interactor, which is either the hand or the controller GrabInteractor:

m_currentGrabInteractor = m_usingHands ? m_handGrabInteractor : m_controllerGrabInteractor;

Calling the Interaction SDK's ForceSelect() function on the current grab interactor makes the player automatically grab the object:

m_currentGrabInteractor.ForceSelect(interactable);

Then, in Update(), every frame it checks whether the user is using hands:

        if (useHands != m_usingHands || activeStateChanged)
        {
            UpdateInteractor(useHands);
        }

If the weapon is not grabbed, the player automatically grabs it.

Pulling the gun’s trigger

For the user to pull the weapon trigger, the HandGrabAPI on the interactor is called in Update(). This captures the FingerPalmStrength of the index finger.

            var triggerStrength = m_currentGrabInteractor.HandGrabApi.GetFingerPalmStrength(HandFinger.Index);

If the trigger strength is above a set threshold, the trigger is considered held and the gun fires. If the user keeps the trigger held down, the gun fires continuously; if they press it and then let it go, it fires only once. This is true for both the machine gun and the pistol that the player may be holding.

            isTriggerHeldThisFrame = triggerStrength > m_triggerStrengthThreshold;

Because this uses the HandGrabAPI, it works for both hand grabs and controllers, through different interactors.

        if (m_weapon.HandGrabInteractable != null && m_currentGrabInteractor != null &&
            m_currentGrabInteractor.HasSelectedInteractable)
        {
            var triggerStrength = m_currentGrabInteractor.HandGrabApi.GetFingerPalmStrength(HandFinger.Index);
            isTriggerHeldThisFrame = triggerStrength > m_triggerStrengthThreshold;
        }

UI and Menus

The UI and Menus use direct touch and RayInteractors using the Interaction SDK.

The RayInteractor is set up on left and right controllers or hands in the AppInteractionController class, defined in /Assets/Discover/Scripts/AppInteractionController.cs. These interactors are initialized and the controller meshes are loaded.

The IconController class, defined in /Assets/Discover/Scripts/Icons/IconController.cs, manages highlighting icons and haptics when hovering over or selecting icons, based on whether the ray is colliding with those icons.

For debugging purposes, the surfaces of all Scene objects, like walls and desks, have Ray interactables so that a cursor appears when the input moves over them. This can be used to check that the app knows where the real-life wall is.

App placement flow

When selecting an app from the menu, the app must be placed in the player's space. The AppIconPlacementController class, defined in /Assets/Discover/Scripts/AppIconPlacementController.cs, controls the placement flow.

This is also where app placement is managed, using the StartPlacement function:

    public void StartPlacement(AppManifest appManifest, Handedness handedness)


    {
        m_isPlacing = true;
        m_isOnValidPlacement = false;
        m_handedness = handedness;
        m_selectedAppManifest = appManifest;


        m_iconRotation = appManifest.IconStartRotation;
        m_pinchAnchored = false;


        if (appManifest.DropIndicator != null)
        {
            m_indicator = Instantiate(appManifest.DropIndicator, m_appPlacementVisual.transform);
            m_indicator.SetActive(false);
        }


        m_appPlacementVisual.gameObject.SetActive(true);
    }

To create the indicator on the Scene elements, an Indicator is added:

            m_indicator = Instantiate(appManifest.DropIndicator, m_appPlacementVisual.transform);

This uses a RayInteractor to create a ray with a filter applied. The filter checks the Scene element to figure out what type of element the input is moving over.

For example, certain apps have to be on a desk. This is disabled in DroneRage, but it is useful when you want to query what type of object each one is.
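
As an illustration of that kind of query, the sketch below checks whether a ray hit a Scene object carrying a given semantic label. It assumes the scene anchors carry OVRSemanticClassification components (as set up by OVRSceneManager); the helper itself is hypothetical and not the actual Discover code.

    using UnityEngine;

    public static class PlacementFilters
    {
        // Returns true if the hit collider belongs to a Scene anchor with the given label,
        // for example OVRSceneManager.Classification.Table.
        public static bool HitHasLabel(RaycastHit hit, string label)
        {
            var classification = hit.collider.GetComponentInParent<OVRSemanticClassification>();
            return classification != null && classification.Contains(label);
        }
    }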

Locomotion

Locomotion for both controllers and hands has been reduced because Mixed Reality apps occur within a user’s space, unlike in larger virtual reality scenes where there is more space.

Discover uses standard controller locomotion, where the joystick allows the user to move around and teleport. Hand locomotion uses a gesture built with the Interaction SDK's built-in prefabs.

Discover and DroneRage FAQ Unity

Quest

This guide is based on an interview with the engineering team that created Discover and DroneRage. The topics above have covered how Discover and DroneRage integrated colocation, Interaction SDK, and Scene. Below are a few topics not yet covered.

Q: What are the most important Meta Quest features used in the Discover Showcase? Can you list them? A: Passthrough, Spatial Anchors, Shared Spatial Anchors, Avatars, Interaction SDK, Audio, and Scene.

Q: Was there anything specialized to note when embedding Avatars or Audio into DroneRage? A: Avatars in DroneRage don’t do anything special. And in terms of Audio, Discover uses the standard spatial audio SDK, the Meta XR Audio SDK.

Q: How was developing DroneRage in MR vs VR? Were there major differences in the development process? It seems like the game loop was the same, and it was more about getting the underlying MR SDKs in order with the colocation package. A: Yes. It wasn't a particularly different workflow. It is nice because, when you are testing, MR is comfortable: you're not taken completely out of your real space every time you put the headset on to test something, and I can, for example, chat on my computer while wearing the headset.

In terms of using Unity, it’s basically colocation, passthrough, and Scene API. Getting all these integrated was the only real difference. It is a VR app essentially; you are just using Passthrough and Scene APIs.

Q: What was the performance optimization process? Was there anything specific used during development (e.g., tools, processes, focus on computation cost)? A: A key thing is that you are not rendering an entire environment like in VR, so there is a lot less rendering to do in MR, although Passthrough has a performance cost. We did the standard things, like using the Meta Utilities package we have. For example, we extensively used the AutoSet class in /Packages/com.meta.utilities/AutoSet.cs.

We use that because Unity’s GetComponent is slow, so this avoids that when possible.

We also used the Universal Render Pipeline, which runs fine. That did involve porting some shaders, which was non-trivial to do. In the public repo, you can see those shader changes too. Check out this Shaders-related commit. It might be useful to some of our community developers.

Q: How do the drones move in DroneRage? Are there any agents, bots, or ML usage? A: They don’t use ML Agents or any Unity package. The way that enemies work is a simple state machine system. For example, you can study the EnemyBehaviour class in /Assets/Discover/DroneRage/Scripts/Enemies/EnemyBehaviour.cs.

In each state there is an EnterState method, an UpdateState method, and so on. All of this can be inherited, but if someone wants to build it from scratch, they can use an Animator so they can visualize the state machine, or use Unity's AI tools. A minimal sketch of the pattern is shown after this answer.
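
The following sketch is illustrative only; the real EnemyBehaviour class is more involved.

    using UnityEngine;

    public abstract class EnemyState
    {
        public virtual void EnterState(Enemy enemy) { }
        public virtual void UpdateState(Enemy enemy) { }
        public virtual void ExitState(Enemy enemy) { }
    }

    public class Enemy : MonoBehaviour
    {
        private EnemyState m_currentState;

        public void TransitionTo(EnemyState nextState)
        {
            // Leave the old state, then enter the new one.
            m_currentState?.ExitState(this);
            m_currentState = nextState;
            m_currentState.EnterState(this);
        }

        private void Update()
        {
            // Delegate per-frame behavior to whichever state is active.
            m_currentState?.UpdateState(this);
        }
    }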

Q: How did you test while developing? A: The primary difficulty we had is testing colocation because you need two devices in the same physical space.

ParrelSync is useful together with the XR Simulator for testing multiplayer in general. You can have two Unity Editors open at once and connect their sessions, connecting one of them as remote rather than colocated. You can do that with two XR Simulators, or with one XR Simulator and one Link. This is useful for multiplayer testing, including this specific use case.

Learn more about Mixed Reality Development Unity

Quest

Are you ready to start designing experiences for Mixed Reality? We’ve put together the following design guidelines to help you understand MR’s crucial elements, what you can do with them, and how you can design great MR experiences.

General Best Practices: How to start designing for MR, including best practices for interacting with virtual content in Mixed Reality.

Mixed Reality Experience and Use Cases: Mixed Reality is a foundational pillar of the current and future Meta Quest headsets and is a key part of delivering next-level experiences to users in the Meta Quest ecosystem. In this document you will explore some of the use cases for MR.

Scene Understanding: Use the real world as a canvas using Scene Understanding.

Passthrough: Blend virtual objects with the physical environment using Passthrough.

Spatial Anchors: Anchor virtual objects in the real-world environment, and provide shared MR experiences.

Health & Safety: Learn how to design safe MR experiences.

ChatGPT Ok.

User Mixed Reality Utility Kit Overview Unity

Quest

Unity-MRUtilityKit Banner

Overview

MR Utility Kit provides a rich set of utilities and tools on top of Scene API to perform common operations when building spatially-aware apps. This makes it easier to program against the physical world, and allows developers to focus on what makes their app unique.

The utilities we provide broadly fall into 3 categories:

Scene queries

Ray cast queries without using the built-in physics engine.
Find a valid spawn position on the floor, walls, or other surfaces.
Find the best surface position for placing content using ray casts.
Check if a position is inside a room.
Get the bounding box of a room.
Get the parent/child relationship between anchors (for example, volumes stacked on top of each other or wall art attached to walls).

Graphical helpers

Render the walls such that the textures wrap around smoothly, without any seams, and don't get stretched and deformed depending on their size. This is crucial for reskinning mixed reality worlds.
Make it easy to place virtual objects and furniture to replace their physical counterparts, with various options to match orientation, size, and aspect ratio.
Room boundary implemented in the application to keep users safe.

Development tools

Scene debugger allows you to visually inspect the anchors to get their location, orientation, labels, etc.
A selection of 30 prefab rooms for testing your application to ensure it works in a variety of environments.

Health and safety guidelines

While building mixed reality experiences, we highly recommend evaluating your content to offer your users a comfortable and safe experience. Please refer to the Mixed Reality H&S Guidelines before designing and developing your app using this sample project or any of our Presence Platform Features.

Getting started

Requirements

Unity 2021.3.30f1
A Meta Quest 2, Quest Pro, or Quest 3 with OS version 60 or newer (check this in the device settings).
PC: You can run the Unity Editor using Quest Link if you are on a PC. Keep in mind that the Passthrough image only appears in the headset. For Scene API, room data must exist before connecting the device; disconnect Quest Link, run Room Setup on your Quest, then reconnect Quest Link.
Mac: If you are on a Mac, you must build an APK and deploy it to your device. You can use our MRUK.prefab to load an artificial room prefab in the Editor.
Because MRUK is based on Scene API, first ensure you are familiar with its concepts.

Set up

Download and install the MR Utility Kit package, in one of two ways:
Either add it to your assets from the Unity Asset Store, select Packages: My Assets in the Unity Package Manager, click Meta MR Utility Kit, and install.
Or download it from Meta as a tarball, extract it to your computer, choose Add package from disk in the Unity Package Manager, and select the package.json you extracted.
Ensure Passthrough is working in your project by following these instructions.
Ensure Scene is working by following these instructions.
Capture your room in our Space Setup tool, found in the Oculus settings menu.

Creating a new scene

In addition to the OVRCameraRig, ensure your scene contains the MRUK prefab. The MRUK class is a singleton that should only exist once within the scene. When using Scene API data, you must wait until the room has loaded, or your app likely won't function properly. This loading event is conveyed by MRUK's SceneLoadedEvent. For simplicity, this is exposed in the Inspector for you to drop in your public functions.

Scene Loaded Event

To properly initialize your own game objects:

In your code, add public initialization functions.
Click the "+" to add a new field to this Scene Loaded Event list.
Drag the game object with your initialization code to the empty field, selecting your component and pointing to your function.
Keep in mind that list order matters; if you have initialization dependencies between scripts, make sure that the functions you'd like to execute first are at the top of the event list.

NOTE: You do NOT need OVRSceneManager in your scene; MRUK serves as a replacement for it. In v60 there is an issue that can cause your camera to flicker back and forth when both are used together.
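
If you prefer to subscribe from code instead of the Inspector, a minimal sketch looks like the following. It assumes the package's Meta.XR.MRUtilityKit namespace and that SceneLoadedEvent is the UnityEvent exposed on the MRUK singleton.

    using Meta.XR.MRUtilityKit;
    using UnityEngine;

    public class SceneReadyListener : MonoBehaviour
    {
        private void Start()
        {
            // Wait for the room data before touching any anchors.
            MRUK.Instance.SceneLoadedEvent.AddListener(OnSceneLoaded);
        }

        private void OnSceneLoaded()
        {
            Debug.Log("MRUK scene data loaded; it is now safe to query rooms and anchors.");
        }
    }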

Press play

A benefit of MRUK is that it can fall back to imitation anchor data during Play mode in the Editor, for quicker iteration than building an APK. For example, when pressing Play on the MRUKBase scene, you can see a list of anchors and a visualization of the room:

PlayModeRoom

(The room choice is random by default, and specified on the MRUK prefab)

For other development options in Editor, you can also use Oculus Link for your real room, or the Meta XR Simulator with simulated rooms.

Features

There are a couple of ways to use MR Utility Kit: engineering through code, or adjusting parameters of the Tools prefabs in the Unity Inspector.

Code

When engineering, you will primarily interact with the MRUK, MRUKRoom, and MRUKAnchor classes. MRUK is a singleton class; there should only be one instance of it in your scene. There is one MRUKRoom instance per room, and all of its children have an MRUKAnchor component. This structure is analogous to OVRSceneRoom and OVRSceneAnchor when using OVRSceneManager.
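
A short sketch of how those classes are typically queried is shown below; the method names are taken from the reference list that follows, but treat the exact signatures as assumptions that may differ between package versions.

    using Meta.XR.MRUtilityKit;
    using UnityEngine;

    public class CurrentRoomQuery : MonoBehaviour
    {
        // Hook this up to the Scene Loaded Event so it runs after the room data is ready.
        public void LogRoomInfo()
        {
            MRUKRoom room = MRUK.Instance.GetCurrentRoom();
            if (room == null || Camera.main == null)
            {
                return;
            }

            // Is the headset currently inside the room outline?
            bool inside = room.IsPositionInRoom(Camera.main.transform.position);
            Debug.Log($"Headset inside current room: {inside}");
        }
    }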

MRUK

LoadSceneFromDevice(): Load the scene data from the Quest device
LoadSceneFromPrefab(): Load the scene from one of the provided prefabs or your own
ClearScene(): Clear the scene data
GetRooms(): Get the list of rooms currently loaded
GetCurrentRoom(): Get the room the user is currently in

MRUKRoom

GetRoomAnchors(): Access to all MRUKAnchors in the room (not including the anchor on the room itself)
GetFloorAnchor(): Access to the floor MRUKAnchor
GetCeilingAnchor(): Access to the ceiling MRUKAnchor
GetGlobalMeshAnchor(): Access to the global mesh MRUKAnchor
GetWallAnchors(): Access to all the wall MRUKAnchors in the room
GetRoomOutline(): Get the bottom corner points of the room, in clockwise winding order (when viewed top-down)
GetKeyWall(): Get the longest wall in the room that has NO other room corners behind it (within a tolerance)
Raycast()/RaycastAll(): Collider-free raycast against only the anchors in the room; note that hit order is not guaranteed
GetBestPoseFromRaycast(): Get a suggested pose from a raycast. Useful for placing AR content with a controller.
IsPositionInRoom(): Check if a position is within the room outline
GetRoomBounds(): World-aligned bounding box for macro functionality
GetFacingDirection(): For volumes, get a "likely" Y-forward vector based on the closest wall face direction
IsPositionInSceneVolume(): Check if a position is within any volumes in the room
TryGetClosestSeatPose(): From a Ray, returns the closest seat Pose on any Couch objects
GetSeatPoses(): Returns all seat Poses in the room (0 if the room has no Couch objects)
TryGetAnchorParent(): Because there's a flat scene hierarchy (room & anchors), this tries to find a logical one; for example, if you want to know which wall a door is attached to, or if a volume is stacked on another
TryGetAnchorChildren(): Similar to the above, but for the opposite relationship
DoesRoomHave(): One-line access to see if any object in the room has a semantic label
TryGetClosestSurfacePosition(): Used mainly for the RoomGuardian prefab; it returns the closest point on any Scene surface
FindLargestSurface(): Returns the anchor with the largest available surface area
GenerateRandomPositionInRoom(): Generate a random position in a room, while avoiding volume scene objects and points that are too close to surfaces
GenerateRandomPositionOnSurface(): Generate a position on any valid surface in the room

MRUKAnchor

Raycast(): Collider-free raycast against this anchor
IsPositionInBoundary(): Determine if a position is within a plane's 2D boundary (false if the object has no plane)
GetDistanceToSurface(): Get the shortest distance from a point to the surface of this anchor
GetClosestSurfacePosition(): Get the closest position on the anchor's surface to a given point
GetAnchorCenter(): Using transform.position on Scene API volumes isn't correct, since the position is on the top of the volume. This returns the centroid.
GetAnchorSize(): Get a Vector3 scale of an anchor, preferring the volume. Note that some anchors have both 2D and 3D bounds, so if you want a specific scale:
PlaneRect.Value.size: Returns a Vector2 (first check if HasPlane is true)
VolumeBounds.Value.size: Returns a Vector3 (first check if HasVolume is true)
AnchorLabels: Query this to understand which semantic labels this anchor has
GetBoundsFaceCenters(): Get the center points of each surface (1 for planes, 6 for volumes)
IsPositionInVolume(): Test if a position is inside this volume
HasLabel(): Check if an anchor has a specific Semantic label
GetLabelsAsEnum(): Get all Semantic labels of this object

World locking

World locking is a new feature that makes it easier for developers to keep the virtual world in sync with the real world. Virtual content can be placed arbitrarily in the scene without needing to explicitly attach it to anchors; MRUK takes care of keeping the camera in sync with the real world using scene anchors under the hood. This is achieved by making small, imperceptible adjustments to the camera rig's tracking space transform, optimizing it such that nearby scene anchors appear in the right place.

Previously, the recommended method to keep the real and virtual worlds in sync was to ensure that every piece of virtual content is attached to an anchor. This meant that nothing could be considered static, and everything would need to cope with being moved by small amounts every frame. This can lead to a number of issues with networking, physics, rendering, etc. When world locking is enabled, virtual content can remain static. The space close to the player will remain in sync; however, the trade-off is that space further away may diverge from the physical world by a small amount.

World locking is enabled by default but can be disabled by setting EnableWorldLock to false on the MRUK instance.
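
For example, to opt out from code (a minimal sketch, assuming the EnableWorldLock field mentioned above):

    // Somewhere in your initialization code, after the MRUK instance exists:
    MRUK.Instance.EnableWorldLock = false;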

Tools

These prefabs in the Core/Tools folder are designed to be dropped into your scene and optionally modified in the Inspector. Note that some of these prefabs have an MRUKStart component on them; this is for drag-and-drop ease of use. However, if your code has initialization dependencies, you will need to manually organize your initialization functions into one ordered list (preferably on the MRUK prefab). For example, if you would like to access the EffectMesh's mesh from your own game code, you must ensure the EffectMesh.CreateMesh() function executes first by placing it above your code's function in a single Scene Loaded Event list. In steps, this would be:

Drag the MRUK and EffectMesh prefabs into the scene.
Remove the MRUKStart component on the EffectMesh prefab.
On the MRUK prefab, add the EffectMesh.CreateMesh() function to the Scene Loaded Event list.
Add your own initialization code to the list below that.

MRUK

Required to use MR Utility Kit.

MRUK

Scene Loaded Event: Subscribe to this event to initialize your game code (see "Creating a New Scene").
Scene Settings Data Source: The transforms that are used to create the Scene Anchors:
Device: Uses the data created on the Quest device through Space Setup (use for Link as well)
Prefab: Uses an artificial room, specified by Room Index and one of the Room Prefabs
DeviceWithPrefabFallback: If no device data exists, use the Prefab room
Room Index: The index (0-based) into the Room Prefabs array; -1 means random
Room Prefabs: The artificial rooms for simulating Scene data. You can create your own simulated rooms by referencing the structure and game object naming of the existing room prefabs.
Load Scene on Startup: When enabled, the Scene is automatically loaded if it exists, and the SceneLoadedEvent is fired with no other action. When false, you can manually control Scene initialization behavior, such as for using MRUK on non-device rooms (for example, networked).
Seat Width: The width/depth of a person's seat size, when calculating seat poses on Scene objects with the COUCH label.

EffectMesh

This creates special-effect meshes out of the Scene objects you specify. Each edge has vertex coloring applied, for optional shader usage. A variety of mapping options are available when making the mesh.

EffectMesh

Mesh Material: The material to apply to the constructed mesh. If you'd like a multi-material room, you can use another EffectMesh object with a different Mesh Material.
Border Size: When creating the vertices for edge bounds, this is the "thickness" of each edge. Edge vertices are colored black, and fade to white over the Border Size distance. You can see how this is exploited in a shader by examining the SceneMeshOutline.shader used in RoomBoxEffects.mat.
Frames Offset: For quads on walls (for example, doors, windows, wall art), because they exist on the same lateral plane, this can lead to Z-flickering. Frames Offset extends them from the wall by this many meters.
Add Colliders: When true, collision is created for the EffectMesh primitives.
Cast Shadows: Whether the effect mesh objects will cast a shadow.
Hide Mesh: Hide the effect mesh.
Texture Coordinate Modes: How the texture coordinates are applied to the created mesh UVs, generally only used on contiguous WALL_FACE objects. U refers to the horizontal dimension of the room border, and V refers to the vertical.
METRIC: Coordinates start at 0 and increase by 1 unit every meter.
METRIC_SEAMLESS: Coordinates start at 0 and increase by 1 unit every meter, but are adjusted to end on a whole number to avoid seams.
MAINTAIN_ASPECT_RATIO: Coordinates are adjusted to the other dimensions to ensure the aspect ratio is maintained.
MAINTAIN_ASPECT_RATIO_SEAMLESS: Coordinates are adjusted to the other dimensions to ensure the aspect ratio is maintained, but are adjusted to end on a whole number to avoid seams.
STRETCH: Coordinates range from 0 to 1.
Labels: Which Semantic objects to include in the EffectMesh.

AnchorPrefabSpawner

If you're coming from the OVRSceneManager workflow, you can use this instead to instantiate objects at anchor locations.

AnchorPrefabSpawner

Prefabs To Spawn: For each anchor found, a prefab is instantiated using these settings.
Labels: The transform of the Semantic objects to be referenced when instantiating a prefab.
Prefabs: A random prefab from this list is chosen for instantiation.
Match Aspect Ratio: When enabled, the prefab will be rotated to try and match the aspect ratio of the volume as closely as possible. This is most useful for long and thin volumes; keep this disabled for objects with an aspect ratio close to 1:1. Only applies to volumes.
Calculate Facing Direction: When enabled, the prefab will be rotated to face away from the closest wall. If Match Aspect Ratio is also enabled, that takes precedence and the rotation is constrained to a choice between 2 directions only. Only applies to volumes.
Scaling: Set what scaling mode to apply to the prefab. By default the prefab will be stretched to fit the size of the plane/volume, but in some cases this may not be desirable and can be customized here.
Stretch: Stretch each axis to exactly match the size of the plane/volume.
Uniform Scaling: Scale each axis by the same amount to maintain the correct aspect ratio.
Uniform XZ Scale: Scale the X and Z axes uniformly, but the Y scale can be different.
No Scaling: Don't perform any scaling.
Alignment: Spawn the new object at the center, top, or bottom of the anchor.
Automatic: For volumes, align to the base; for planes, align to the center.
Bottom: Align the bottom of the prefab with the bottom of the volume or plane.
Center: Align the center of the prefab with the center of the volume or plane.
No Alignment: Don't add any local offset to the prefab.
Ignore Prefab Size: Don't analyze the prefab; just assume a default scale of 1.
On Prefab Spawned: An event that fires once all prefabs have been instantiated. Subscribe your game code to this event if you have a dependency on the instantiated prefabs.

FindSpawnPositions

This is a utility to help with instantiating content in unknown, user-defined environments.

FindSpawnPositions

Spawn Object: Prefab to be placed into the scene, or an object in the scene to be moved around.
Spawn Amount: Number of SpawnObject(s) to place into the scene; only applies to prefabs.
Max Iterations: Maximum number of times to attempt spawning/moving an object before giving up.
Spawn Locations: Attach content to scene surfaces.
Labels: When using surface spawning, use this to filter which anchor labels should be included (e.g., spawn only on TABLE or OTHER).
Check Overlaps: If enabled, the spawn position will check colliders to make sure there is no overlap.
Override Bounds: Required free space for the object (set negative to auto-detect using GetPrefabBounds).
Layer Mask: Set the layer(s) for the physics bounding box checks; collisions will be avoided with these layers.

SceneDebugger

This isn't used for creation, but for debugging Scene data. Clicking on an anchor (with the mouse or right controller) will display its label(s), scale, collision point, and a suggested Pose for placing AR content. In general, the fields here don't need to be modified.

SceneDebugger

Debug Projectile: When pressing the trigger button, this is the object "shot" out. Note that it is recycled, and should point to an existing object (not a prefab).
Visual Helper Material: The material used to render objects used for debugging.
Logs: The Text object displaying the debug output.
Selection Entry Point: The button in the debug canvas to highlight when the Start button is pressed.
Surface Type Dropdown: The UI dropdown to select a Semantic classification.
Positioning Method Dropdown: When interacting specifically with tops of volumes, this can be used to specify where the return position should be aligned on the surface.
Input Selection Delay: The delay between the fired selection events.
Show Debug Anchors: Visualize anchors with a checker pattern.
Shoot Ball: When disabled, pressing the trigger button does not shoot the Debug Projectile.

RoomGuardian

Creates a Guardian-like protective mesh that renders as a player approaches it, eventually showing Passthrough directly. This is helpful for safety if your scene is intended to be fully virtual instead of using Passthrough. Note that this prefab also has an EffectMesh component on it, which is used to create the mesh. If you'd like to create your own shader, reference the properties in FakeGuardian.shader (_WallScale and _GuardianFade), as this component passes them as parameters.

RoomGuardian

Guardian Distance: This is how far, in meters, the player must be from a surface for the Guardian to become visible (in other words, it blends _GuardianFade from 0 to 1). The "position" of the player is calculated as a point 0.2m above the ground. This is to catch tripping hazards, as well as walls.

SpaceMap

Generates a top-down 2D texture of the room(s). Can be used for point sampling or as a texture in materials.

GetColorAtPosition(): Get the color value of the map from a position in world space

SpaceMap

Texture Map: The texture generated. This asset must have read/write enabled in its import settings.
Map Gradient: The gradient applied to the ramp. The left side is mapped to "Inner Border" and the right side is mapped to "Outer Border".
Inner/Outer Border: Where the Map Gradient is applied, in relation to the surfaces; negative numbers are "inside" the room, and positive are outside.
Map Border: How large the grid/texture should extend beyond the physical boundaries of the room. This should generally match Outer Border, but may need a small buffer if the Texture Map has a wrap mode set to clamp. This ensures the outer pixel perimeter is consistent.

Samples

In addition to the core MR Utility Kit code, you can find sample scenes that showcase how mixed reality environments can be constructed. These samples can be found in the Samples~ folder. You must import these into your project before building or modifying them: open your Package Manager, go to MR Utility Kit, expand the Samples section and click on Import next to Example Scenes.

MRUKBase

This is a simple scene consisting of a few prefabs: MRUK (required), EffectMesh, RoomGuardian, and SceneDebugger (under the LeftHandAnchor transform of OVRCameraRig).

FloorZone

Generate a number of radial free space zones in a room.

MultiSpawn

See how to find valid positions on specific surfaces, for example to instantiate content.

NavMesh

Observe two NavMeshAgents traversing a NavMesh, defined by the room bounds and furniture or Scene Mesh.

VirtualHome

Reconstruction of an interior scene using different pieces of furniture for each anchor label. FancyResizable.cs allows you to define a minimum/maximum object size, and swap to a different prefab if the spawned object is too large or small (used in the OTHER prefab). You may specify multiple prefabs, and one will be randomly selected from the list, maintaining the same transform as the original object (used in PLANT and WALL_ART).

PassthroughRelighting

See the Passthrough Relighting page for more details.

Passthrough Relighting Unity

Quest

Overview

This documentation walks you through the features and structure of the passthrough relighting (PTRL) sample. It shows you how your Unity project can be set up so that virtual lights and shadows in your Unity scene can interact with scene/space anchor objects such as the floor, walls, desks, and so on. This helps your virtual content blend with the real-world environment displayed in passthrough.

Requirements

MRUK package installed in your project.
Manual scene capture or completed Space Setup.

Run the sample

From the MRUtilityKit zip, open the Sample folder.
Open the MRUtilityKitSample project and then load the PTRL map.
Build and run the Passthrough relighting scene on your device.

Sample app description

Once the application starts, you see lights and shadows interacting with scene objects. You control the movements of the character, Oppy, which projects shadows and highlights.

These lighting effects are hidden by Environment Depth so that they are occluded by real-world objects, similarly to the rest of the virtual scene.

You can control Oppy’s movement using the left or right controller thumbstick. Moving the thumbstick up moves Oppy in a forward direction relative to the headset.

To make Oppy jump, press the A, X, or the grip buttons. Jumping while moving is also possible, and encouraged in order to jump on scene objects like desks. Holding the jump button makes Oppy jump higher.

A control panel is attached to the left controller and is interactable with a ray shooting from the right controller, using the right trigger to select. Through the panel, you can adjust the render parameters for highlights and shadows, toggle between scene objects and the scene mesh, and change passthrough brightness and depth check.

Showing the control panel and its adjustable options.

Scene overview

The experience is implemented in the sample as a single scene named PassthroughRelighting.

The main elements of the scene are:

The character, "Oppy".
A directional light that casts shadows.
The Flame character floating over Oppy. It has a point light attached to it, projecting highlights onto the scene planes and volumes.
The OVRCameraRig to move the cameras and controllers.
OVRPassthrough, with an OVRPassthroughLayer component, to show the passthrough.
The MRUK prefab, with an MRUK component, to get information on user-defined scene anchors and overload the prefabs with the PTRL material.
The EffectMeshGlobalMesh game object, with an EffectMesh component, to apply the material with the PTRL shader to the global mesh.
The EffectMesh game object, with an EffectMesh component, to apply the PTRL effect materials to the other scene objects.
The OVRInteraction prefab, which includes a RayInteractor to allow the user to interact with the UI panel.

Project structure

Everything that is required to create highlights and shadows in passthrough is located in the MRUK package in the core folder:

Shader: HighlightsAndShadows, with subshaders for both BiRP and URP, implementing passthrough effects to receive shadows and render highlights from point lights.
Materials: The TransparentSceneAnchor material uses the HighlightsAndShadows shader mentioned above, to be applied on scene geometry.
Textures: The textures needed to create a blob shadow.
Prefabs: Contains prefabs for scene planes, scene volumes, and the scene mesh, already using the above-mentioned material. They are ready to be inserted into the respective sections of the OVRSceneManager component in your Unity scene.

The resources for the Oppy and Flame characters are separated into their own respective folders in the sample.

Adding Passthrough relighting to an MR project

With OVRSceneManager

Import the MRUK package in your project and link the PTRLScenePlane and PTRLSceneVolume prefabs from the folder onto OVRSceneManager.

Open the OVRSceneManager component in your project and set the OVRSceneAnchors prefabs for the plane, volume, and global mesh types; these prefabs should have a material assigned to them that makes use of the PTRLHighlightsAndShadows shader from the Core/Shaders folder.

The sample only features integration with Scene API. This means a user has to perform a manual scene capture or complete Assisted Scene Capture (ASC) to capture a scene mesh. However, the material can be applied on arbitrary objects, for example, a floor-level plane.

Using Mixed Reality Utility Kit

Import the MRUK package into your project and add the MRUK prefab to your scene.

Add an EffectMesh component and link the TransparentSceneAnchor material to its Mesh Material field. Add an MRUKStart component and link the CreateMesh method of the newly created EffectMesh to the MRUKStart scene loaded event list.

Multiple point light support

In order to show highlights from more than one point light source, make sure that the desired number of per-pixel light sources is specified in your project settings. For BiRP, the setting is called Pixel Light Count and is found in Project Settings > Quality.

For URP, in the Universal Render Pipeline Asset, under the Lighting section, set Additional Lights to Per Pixel and set the Per Object Limit to the desired number.

Shadow quality

The shadow appearance depends on the quality settings of your project. To set the correct shadow quality for your project, go to Project Settings > Quality.

Enable Shadow, set the Shadow Resolution, and set the Shadow Distance. The smaller the distance, the higher the shadow quality will be.

Highlights and shadows shader

Illustrating how the highlights and shadows shader renders an image.

The PTRLHighlightsAndShadows shader computes highlight intensity from point light sources and marks shadow areas by adjusting transparency. The result is processed in the OpenXR compositor and blended with the Passthrough layer.

When a virtual object rendered with this shader is overlapping with its real physical counterpart, it creates the effect of relighting, lit with virtual light and shadowed by virtual objects.

The BiRP subshader in PTRLHighlightsAndShadows contains multiple passes:

The first ForwardAdd pass is for point lights, highlights, and additional directional light highlights. This pass runs for each light that is not considered the main light. It accumulates diffuse contribution, this time without multiplying any albedo component that a regular diffuse shader would have.

After that, ForwardBase pass handles the shadows of the main light. It outputs a black color multiplied by the shadow intensity set in the material.

Finally, a second ForwardAdd pass adds shadow contributions from additional directional lights.

URP version

There is also a URP subshader, which implements the PTRL effect in a single pass. The UniversalForward pass computes light contribution and shadows from all light sources.

Blob shadow

A common alternative to real-time shadows is the blob shadow. These are simple blobs of color that do not take into account the geometry of the object and are more performant. The Oppy entity contains a BlobShadow game object that enables the technique. BlobShadow objects are designed using the Projector component, along with a specific material and shader that can be found in the corresponding folder.

ChatGPT Ok.

User Interaction SDK Overview Unity

All-In-One VR

Quest

The Meta XR Interaction SDK for Unity makes it easy for VR users to immersively interact with their virtual environment. With Interaction SDK, you can grab and scale objects, push buttons, teleport, navigate user interfaces, and more while using controllers or just your physical hands.

To try out Interaction SDK on a supported device without any required setup, see the Try Interaction SDK section.

Features

Interaction SDK offers many features to create an immersive XR experience.

Multiple ways to grab an object, including grabbing it up close or from far away, or grabbing it dynamically based on colliders.
Pressing buttons using raycasting.
Pressing buttons using poking, and scrolling on a UI canvas.
Teleporting to spots or surfaces in the environment, and turning so you can rotate in place.
The ability to customize how your hands grab an object on both Mac and PC.
Manipulating objects, including scaling and moving objects freely or along fixed axes.
Snapping objects to a location (ex. items to an in-game inventory).
Creating and detecting custom gestures, including stationary gestures, like a thumbs-up, or moving gestures, like a wave.
Support for custom hand models.
Customizable User Interface (UI) prefabs.
Throwing objects.

Here's a video showing the features in action.

Try Interaction SDK

To try Interaction SDK interactions without any setup, you can download one of the following apps.

Interaction SDK Samples, the official reference for Interaction SDK features.
First Hand, an official hand tracking demo built by Meta.
Move Fast, a short showcase of Interaction SDK being used in fast, fitness-type apps.

Links to the source code for First Hand and Move Fast are in the Related Topics section.

Supported devices

Quest 1
Quest 2
Quest 3
Quest Pro

Supported Unity versions

Unity 2021 LTS (2021.3)

Unity 2022 LTS (2022.3)

OpenXR compatibility

Interaction SDK supports OpenXR via the Oculus OpenXR backend. Unity’s OpenXR plugin is not supported at this time. For more information, see XR Plugin.

Directory layout

Interaction SDK follows the standard Unity UPM layout and contains three root folders, each with their own Assembly Definition (.asmdef):

Editor (Oculus.Interaction.Editor): Contains all Editor code for Interaction SDK.
Runtime (Oculus.Interaction): Contains the core runtime components of Interaction SDK.
Samples (Oculus.Interaction.Samples): Contains sample scenes, prefabs, and scripts. This directory is optional to import.

Dependencies

Interaction SDK depends on the following package:

[SDK] Oculus Core (com.oculus.integration.vr)

The Samples directory additionally depends on the following package:

TextMeshPro (com.unity.textmeshpro)

Related topics

For a video overview of inputs, hand tracking, and Interaction SDK, see Connect 2022.
For a video tutorial of how to get started with Interaction SDK, see Building Intuitive Interactions in VR.
For First Hand's source code, see the First Hand GitHub repo.
For Move Fast's source code, see the Move Fast GitHub repo.

Next steps

To install and set up the Interaction SDK, see Getting Started with Interaction SDK.

Getting Started with Interaction SDK Unity

All-In-One VR

Quest

The Meta XR Interaction SDK for Unity is available on the Unity Asset Store as either a set of three standalone packages or as part of an All-in-One SDK. The All-in-One SDK bundles several of Meta’s Virtual Reality SDKs, including the Meta XR Core SDK, Meta XR Interaction SDK, Meta XR Voice SDK, and Meta XR Platform SDK. This tutorial explains how to install and set up the Interaction SDK in your Unity project.

Note Interaction SDK users can ignore the Set up Hand Tracking tutorial. It's obsolete if you're using the SDK.

Before you begin

Note Open the links below in a new tab since they don't link back to this page.

Complete Set Up Development Environment and Headset.
Complete Use Meta Quest Developer Hub and Meta Quest Link.

Add packages to your Unity assets

There are two ways to install Interaction SDK from the Unity Asset Store. The first way, Option A, uses a set of standalone packages. The second way, Option B, uses the Meta XR All-in-One package. The standalone packages are smaller since they only contain Interaction SDK, whereas the larger All-in-One package includes additional Meta XR SDKs that you may want to use in your project.

Choose either Option A or Option B.

Option A: Adding Standalone packages

To install the SDK via standalone packages, in the Unity Asset Store, add these packages to your Unity assets.

Meta XR Interaction SDK, which provides the core implementations of all the interaction models along with necessary shaders, materials, and prefabs.
Meta XR Interaction SDK OVR Integration, which allows Interaction SDK to interface with OVRPlugin.
Meta XR Interaction SDK OVR Samples, which contains sample scenes, prefabs, and art assets for Interaction SDK using OVR variants of the player rig.

Option B: Adding All-in-One package

To install the SDK via the All-in-One package, add these packages to your Unity assets.

Meta XR All-in-One SDK, a wrapper package that depends on the latest version of all main Meta XR SDKs, making it easy to get started with VR and MR development.
Meta XR Interaction SDK OVR Samples, which contains sample scenes, prefabs, and art assets for Interaction SDK using OVR variants of the player rig.

Import packages

Once you’ve added the SDK packages to your Unity assets, you need to import the packages into your Unity project so they’re available locally.

Open the Unity project where you want to use Interaction SDK.

Select Window > Package Manager to view your available packages.

The Package Manager window appears.

In the Package Manager window, click Install next to each SDK package you downloaded in the previous section. This adds them to your project.

If you chose the standalone packages, the packages you need to install are:

Meta XR Interaction SDK
Meta XR Interaction SDK OVR Integration
Meta XR Interaction SDK OVR Samples

If you chose the All-In-One SDK, the packages you need to install are:

Meta XR All-in-One SDK
Meta XR Interaction SDK OVR Samples

Installing the packages

Each package’s folder appears in the Project window under Packages once it’s installed.

Packages listed in the Project window

For the Meta XR Interaction SDK OVR Samples package, on the right side of the window, select the Samples tab.

Click the three Import buttons to import the sample scenes into your project.

Import samples

Run the Unity Project Setup Tool

The Unity Project Setup Tool optimizes Android project settings for Meta Quest Unity apps, including texture and graphics settings. The tool applies the required settings for creating Meta Quest XR apps, including setting the minimum API version, using ARM64, and installing the Oculus XR Plug-in and XR Plug-in Management package.

Note If you've run the Unity project setup tool before, skip to the next section.

Navigate to Oculus > Tools > Project Setup Tool.

In the checklist under the Android icon tab of the Project Setup Tool, select Fix All.

Project Setup Tool Fix All

If you still see Recommended Items in the list, select Apply All.

Project Setup Tool Apply All

Add the rig

In Interaction SDK, the rig is a predefined collection of GameObjects that enable you to see your virtual environment and initiate actions, like a grab, a teleport, or a poke. The rig is contained in a prefab called OVRCameraRigInteraction, which automatically adds a ready-to-use camera, hands, and controllers to your scene. However, using controller driven hands for interactions instead of controllers still requires some manual setup.

Interaction SDK does provide a second rig prefab that doesn’t include a camera but instead uses a camera you provide. For more information, see Use Cameraless Rig Prefab. However, we strongly recommend using the rig described in this section instead.

In your Unity scene, delete the default Main Camera GameObject from the Hierarchy, since the SDK uses its own camera rig.

The Hierarchy should now be empty except for the Directional Light GameObject.

In the Project window search bar, enter OVRCameraRigInteraction. Ensure the search filter is set to either All or In Packages, since the default setting only searches your assets. The prefab’s file path is Packages/com.meta.xr.sdk.interaction.ovr/Runtime/Prefabs/OVRCameraRigInteraction.prefab.

Toggling the filterToggling the filter to search the installed packages for the OVRCameraRigInteraction prefab.

Drag the prefab from the search results into the Hierarchy.

Under Hierarchy, select OVRCameraRigInteraction > OVRCameraRig.

Under Inspector, in the OVR Manager component, if you want your app to support hand tracking, set Hand Tracking Support to either Controllers and Hands or Hands Only.

Enabling hand tracking

Prepare to launch your scene by going to File > Build Settings and clicking the Add Open Scenes button.

Your scene is now ready to build.

Select File > Build And Run, or if you have an Oculus Link connected, click Play.

The scene loads. The world is completely empty, but if you raise your hands or controllers, they should appear in front of you.

(Optional) Set up controller driven hands

This section is optional unless you want to replace controllers in your app with controller driven hands. Controller driven hands is an input method that uses your physical controllers for input but renders them as hands. This combines the increased immersion of hands with the benefits of controllers, like vibrations and physical feedback.

Unless otherwise noted, interactions that work for hands also work for controller driven hands.

Under Project, search for OVRControllerDrivenHands. Ensure the search filter is set to either All or In Packages, since the default setting only searches your assets.

Drag the OVRControllerDrivenHands prefab from the search results onto OVRInteraction in the Hierarchy.

Related topics

To learn about the key concepts in Interaction SDK, see the Architecture Overview.

Next steps

Add some GameObjects and make them interactable with Quick Actions.

ChatGPT Ok.

User Add an Interaction with QuickActions Unity

All-In-One VR

Quest

QuickActions is a feature available in Interaction SDK v62 or higher that automates adding interactions to your scene. It takes the form of a set of right-click wizards that can apply changes to existing scene objects and the camera rig.

QuickActions is the recommended way to add interactions to your project if you're using hands and controllers. QuickActions doesn't support controller driven hands. Currently, QuickActions supports only grab, distance grab, ray, and poke interactions. QuickActions can only add ray and poke interactions to a canvas GameObject, not to other GameObject types.

To add interactions not supported by QuickActions, see the Tutorials section in the sidebar. If you’re using a pre-v62 release of the SDK, it doesn’t support QuickActions, so you have to manually set up all interactions using the SDK’s Legacy tutorials.

Before you begin

Complete Getting Started with Interaction SDK.

Add an interaction

Once you complete Getting Started with Interaction SDK, your scene contains a working camera, hands, and controllers, but you need to add an object to interact with.

Open the Unity scene where you completed Getting Started with Interaction SDK.

Under Hierarchy, right-click and select 3D Object > the type of object you want to create (ex. Cube).

The object appears in the Hierarchy.

Note Ray and poke interactions can only be added to canvas GameObjects, not other GameObject types.

Under Hierarchy, select the GameObject that should be interactable.

Right-click on the GameObject and select Interaction SDK > Add … Interaction (ex. Add Grab Interaction).

The QuickActions menu.

The Quick Actions wizard appears.

In the Quick Actions wizard, select Fix All to fix any errors. This will add missing components or fields if they’re required.

The fix all option

If you want to further customize the interaction, adjust the interaction’s settings in the wizard. Some interactions don’t have additional settings for you to customize.

Select Create.

The wizard automatically adds the required components for the interaction to the GameObject. It also adds components to the camera rig if those components weren’t already there.

The automatically added GameObjects for a hand grab.

An example of the wizard adding components to the GameObject and camera rig for a hand grab interaction. Your hierarchy may look different depending on the interaction you chose and on whether your camera rig was missing components, like the rig in this image.

Build a Custom Hand Pose Unity

All-In-One VR

Quest

In this tutorial, you learn how to build a custom hand pose from scratch by constructing a thumbs up pose for both hands. Since custom hand poses rely on information about your physical hand, like finger position and wrist orientation, custom hand poses aren’t supported for controllers and controller driven hands. For a detailed explanation of how pose recognition works, see Hand Pose Detection.

To try pose recognition in a pre-built scene, see the PoseExamples or DebugGesture scenes.

Note Hand poses can also rely on velocity and rotation data, which aren't used in this tutorial but are described in Hand Pose Detection.

Thumbs up pose detection image

An outline of the parts that define a thumbs up pose. The pose is a combination of two different shapes (thumb curl open and all fingers closed) and a transform orientation (wrist up).
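
Conceptually, the components you add below combine into a single boolean "is the pose active" signal. As a minimal sketch of consuming that signal from code (assuming the Oculus.Interaction ActiveStateGroup you will create later is referenced here; the tutorial itself wires this up through SelectorUnityEventWrapper instead):

    using Oculus.Interaction;
    using UnityEngine;

    public class ThumbsUpLogger : MonoBehaviour
    {
        // The ActiveStateGroup on the ThumbsUpPoseLeft or ThumbsUpPoseRight GameObject.
        [SerializeField] private ActiveStateGroup m_thumbsUpState;

        private bool m_wasActive;

        private void Update()
        {
            bool isActive = m_thumbsUpState.Active;
            if (isActive && !m_wasActive)
            {
                Debug.Log("Thumbs up detected");
            }
            m_wasActive = isActive;
        }
    }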

Before you begin

Complete Getting Started with Interaction SDK.

Get hand transform data

Hand tracking provides transform data about your hands, like which direction your wrist is facing. To access that data, each hand needs a TransformFeatureStateProvider component.

Open the Unity scene where you completed Getting Started with Interaction SDK.

Under Hierarchy, select OVRCameraRig > OVRInteraction > OVRHands > LeftHand > HandFeaturesLeft.

Under Inspector, add a TransformFeatureStateProvider component.

Repeat these steps for OVRCameraRig > OVRInteraction > OVRHands > RightHand > HandFeaturesRight.

Set finger thresholds

Finger thresholds define when a finger is in a specific state (ex. curled or open). For the thumbs-up pose, you’ll use the default thresholds included with Interaction SDK. To save time, you’ll copy them from a sample scene.

In a separate Unity window, open the PoseExamples sample scene found under Samples/Scenes/Examples/PoseExamples.unity so you can copy the finger thresholds from it.

Under Hierarchy, select OVRCameraRig > OVRInteraction > OVRHands > LeftHand > HandFeaturesLeft.

Under Inspector, select the three vertical dots at the top of the FingerFeatureStateProvider component.

Copying component

A list appears.

In the list, select Copy Component.

Return to the Unity scene you’re using for this tutorial.

In your Unity scene, under Hierarchy, select LeftHand > HandFeaturesLeft.

Under Inspector, select the three vertical dots at the top of the Joints Radius Feature component.

A list of options appears.

In the list, select Paste Component as New.

The FingerFeatureStateProvider component you copied is added to HandFeaturesLeft as a new component.

In the FingerFeatureStateProvider component, set the Hand property to the hand you want to track, LeftHand.

Repeat these steps for RightHand > HandFeaturesRight.

Create an empty GameObject

A pose is contained in an empty GameObject that has a list of unique components, so you need to make an empty GameObject.

Under Hierarchy, add an empty GameObject by right-clicking and selecting Create Empty.

Name the GameObject ThumbsUpPoseLeft.

Under Inspector, add the following components:

ShapeRecognizerActiveState
TransformRecognizerActiveState
ActiveStateGroup
ActiveStateSelector
SelectorUnityEventWrapper

Repeat these steps for the right hand, naming that GameObject ThumbsUpPoseRight.

Define the shapes of the pose

All of the shapes required for a pose are stored in the hand’s ShapeRecognizerActiveState component. The thumbs up pose includes two shapes. The first shape checks that all the fingers are fully closed, and the second shape checks that the thumb is fully extended. Both must be true for the SDK to consider the thumbs up shape as true.

However, a shape doesn’t define a pose on its own because shapes don’t track velocity or the hand’s orientation in space.

Under Hierarchy, select the ThumbsUpPoseLeft GameObject you created in the previous section.

Under Inspector, in the ShapeRecognizerActiveState component, set the Hand property to LeftHand to track the shape of the left hand.

Set Finger Feature State Provider to the HandFeaturesLeft GameObject. This allows the ShapeRecognizerActiveState component to access the state of all five fingers on the left hand so it can compare them to the shapes you define.

Properties

In the Shapes property, click the + twice to add two elements to the list.

Set Element 0 to the FingersAllClosedShapeRecognizer asset by clicking the circle in the property’s textbox and searching for FingersAllClosed. This is the first shape of the pose.

Set Element 1 to the ThumbUpShapeRecognizer asset. This is the second shape of the pose.

Repeat these steps for the ThumbsUpPoseRight GameObject.

Define the pose orientation

Since pose shapes don’t track the orientation of the hand, currently your pose is active whenever your thumb is extended and the other fingers are closed, regardless of which way your hand is rotated. To define the correct orientation, a pose uses the TransformRecognizerActiveState component.

To ensure your pose is a thumbs up and not something similar but incorrect, like a thumbs down, define the pose’s orientation to be the wrist facing upwards in world space.

Under Hierarchy, select the ThumbsUpPoseLeft GameObject.

Under Inspector, in the TransformRecognizerActiveState component, set the Hand property to LeftHand.

Set the Transform Feature State Provider property to HandFeaturesLeft to get the left hand’s transform data.

In the Transform Feature Configs dropdown, select Wrist Up to define the correct orientation as the wrist facing up.

Under Transform Config, in the Up Vector Type dropdown, select World to define the wrist orientation as relative to world space.

Set the Feature Thresholds property to DefaultTransformFeatureStateThresholds by clicking the circle in the property’s textbox and searching for DefaultTransformFeatureStateThresholds. This is a set of default threshold definitions created by the SDK team.

Repeat these steps for the ThumbsUpPoseRight GameObject.

Combine the shapes and orientation

So far you’ve defined the shapes and the orientation that together make a thumbs up. Now it’s time to check the shapes and orientation simultaneously. To check both simultaneously, you’ll combine the shapes and orientation in an Active State Group. This group becomes active when the tracked hand matches all of the required shapes and orientation.

Under Hierarchy, select the ThumbsUpPoseLeft GameObject.

In the Active State Group component, add two elements to the list.

Set Element 0 to the ThumbsUpPoseLeft GameObject.

A list of options appears.

Select ShapeRecognizerActiveState from the list.

Set Element 1 to the ThumbsUpPoseLeft GameObject.

A list of options appears.

Select TransformRecognizerActiveState from the list.

Repeat these steps for the ThumbsUpPoseRight GameObject.

Track the state of the pose

To track if the pose is happening, you use the ActiveStateSelector component, which tracks an active state and fires its WhenSelected and WhenUnselected events based on the tracked state.

Under Hierarchy, select the ThumbsUpPoseLeft GameObject.

In the Active State Selector component you added earlier, set the Active State property to the ThumbsUpPoseLeft GameObject.

A list of options appears.

Select ActiveStateGroup from the list since you want to track the combined state of the shapes and orientation, not the individual states.

Repeat these steps for the ThumbsUpPoseRight GameObject.

Choose events to fire

The ActiveStateSelector’s events aren’t exposed in the Unity Inspector, so to access them there, you pass the ActiveStateSelector to a SelectorUnityEventWrapper component. SelectorUnityEventWrapper lets you run custom logic based on the event that’s occurring.
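As a minimal, hedged sketch of such custom logic, the script below exposes two public methods you could wire into the wrapper’s When Selected() and When Unselected() events in the steps that follow; the class and method names are illustrative and not part of the SDK.

using UnityEngine;

// Illustrative example: attach to the ThumbsUpPose GameObject and add its methods
// to the SelectorUnityEventWrapper's When Selected() / When Unselected() lists.
public class ThumbsUpLogger : MonoBehaviour
{
    // Called when the thumbs up pose starts (When Selected()).
    public void OnThumbsUpStarted()
    {
        Debug.Log("Thumbs up detected");
    }

    // Called when the thumbs up pose stops (When Unselected()).
    public void OnThumbsUpStopped()
    {
        Debug.Log("Thumbs up released");
    }
}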

Under Hierarchy, select the ThumbsUpPoseLeft GameObject.

In the Selector Unity Event Wrapper component, set the Selector property to the ThumbsUpPoseLeft GameObject (its ActiveStateSelector component).

Add your own custom functions to the When Selected() list, the When Unselected() list, or both (for example, the methods from the sketch above). These functions trigger when a thumbs up starts or stops.

Repeat these steps for the ThumbsUpPoseRight GameObject.

A canvas displaying the status of the Thumbs Up pose

The finished Thumbs Up pose using an Active State Debug Tree UI component to display the current state of the left hand.

Related topics

To understand the concepts and components used to detect a pose, see Hand Pose Detection.

To learn about the concept of Active State, see Active State Overview.

To practice using the basics of Active State, see Use Active State.

To learn how to detect custom body poses, see Compare Body Poses.

Create a Hand Grab Pose (PC)

Unity

All-In-One VR

Quest


In this tutorial, you learn how to record a custom hand grab pose with Interaction SDK to control how your hands conform to a grabbed object. Once you record a pose, you can adjust its fingers, scale it, mirror it, and enable it to work along different surfaces.

This version of the tutorial requires a Windows PC because it uses Oculus Link, which isn’t supported on Mac. For the Mac version of this tutorial, see Create a Hand Grab Pose (Mac). To try pose recognition in a pre-built scene, see the PoseExamples scene.

Note This tutorial requires Oculus Link, which doesn't support Mac, so you can't use a Mac for this tutorial.

Before you begin

Connect Oculus Link to your Unity Editor. Oculus Link runs on Windows only and isn’t supported on Mac.

Complete Getting Started with Interaction SDK.

Add a hand grab interaction using QuickActions.

Configure the settings

Before you can record poses for an object, there are a few settings you need to configure.

Open the Unity scene where you want to record hand grab poses.

Open the Hand Grab Pose Recorder window by selecting Oculus > Interaction > Hand Grab Pose Recorder.

The window appears.

Hand Grab Pose Recorder image

The Hand Grab Pose Recorder window.

In the window, set the required properties listed below.

Hand used for recording poses - The hand to record. This is either the left or right hand under OVRInteraction > OVRHands in the hierarchy.

GameObject to record the hand grab poses for - The GameObject to record poses for.

Prefabs provider for the hands to visualize the recorded poses - A GhostProvider that will instantly visualize the generated poses. This field auto-populates with the first matching asset found in the project (a default asset is included in the package).

HandGrab Interactable Data Collection (optional) - An asset that can store or load poses so they can survive the Play/Edit mode switch. If no asset is provided, a new one is automatically generated when saving.

Record poses

Now it’s time to record a custom grab pose for an object.

Go into Play Mode.

Using the hand you chose in the previous section, wrap it around the GameObject you selected as a target.

Once your hand is in the desired pose, either press the Record key (Space by default; requires focus on the HandGrabPoseRecorder window) or click the RecordHandGrabPose button in the editor with your free hand.

In the scene, a HandGhost GameObject appears as a hand performing the recorded pose.

Repeat the previous step as many times as needed. There is no need to record left and right HandPoses since they can be mirrored later.

Before exiting Play Mode, click the Save To Collection button.

This stores the HandPoses in an asset that can be retrieved in Edit Mode. If you didn’t provide a collection asset, one is automatically generated.

Tweak poses

Once you’ve recorded a pose, you can tweak, mirror, and duplicate it in Edit Mode for a more polished result.

In Edit Mode, click the Load From Collection button.

The recorded HandPoses are restored. Each restored pose has a GameObject that contains a HandGrabInteractable and HandGrabPose.

Tweak the generated HandPoses as needed using the following sections.

Adjust fingers
Specify grab surfaces
Scale poses
Mirror a pose

Adjust fingers

When you select a HandGrabPose, you should see a GhostHand representing that pose in the Scene window. If Gizmos are enabled in the editor, there are several circular handles around the joints of the GhostHand. These handles adjust the angle of flexion and abduction of each joint (abduction is only possible at the root of the fingers). Handles will not be shown for fingers set to Free in the HandPose.FingersFreedom field.

Select a HandGrabPose.

If Gizmos are enabled in the editor, several circular handles appear around the joints of the GhostHand.

Basic grabbable object image

The circular handles used to adjust each finger joint.

Using the handles, adjust the joints until the fingers are positioned correctly.

Back to top

Specify grab surfaces

A HandGrabPose on its own specifies just the pose of the wrist in relation to the object. But you can also indicate that this pose can be used along a surface, for example, grabbing a book at any point around its edge or a driving wheel around its circumference.

To specify the surface in which the HandGrabPose is valid, you can use one of these components that implement IGrabSurface.

CylinderGrabSurface: A cylinder with adjustable length, direction, and maximum angle. Use it to grab circular or cylindrical objects, such as the edge of a cup or the torch in the HandGrabExamples scene.

SphereGrabSurface: A sphere. Place it in the center of a spherical object so you can grab it using the HandPose at any rotation.

BoxGrabSurface: A rectangle with adjustable edges. Use it for grabbing rectangular objects at their edge, like books, phones, or a table.

To add an IGrabSurface, follow these steps.

Add one of the components listed above to the HandGrabPose GameObject.

In Inspector, set the [Optional] Surface field of the HandGrabPose to the component.

In the Scene window, move your mouse around the surface gizmo to visualize how the hand wrist will snap to the object.

Use either the provided handles or the fields in the GrabSurface inspector to adjust the surface shape so the hand stays within the desired bounds.

Snap surface image

A HandGrabPose using the CylinderGrabSurface implementation of IGrabSurface.

Back to top

Scale poses

If your application allows users to have custom scaled hands (the default behavior), you should provide some modified copies of the default (1x scale) HandGrabPose to the HandGrabInteractable. These modified copies should use the custom hand size so the system can interpolate between them and the default size to ensure the hand grab always looks well aligned.

To create a scaled copy of the hands, do one of the following.

Option A: Duplicate the GameObject
Option B: Use the scale slider

Option A: Duplicate the GameObject

Duplicate the HandGrabPose GameObject (alongside its GrabSurface if it has one).

Adjust the GameObject’s local scale.

(Optional) Tweak the finger rotations and surface limits.

Assign the new GameObject to the HandGrabInteractable.

Option B: Use the scale slider

In the HandGrabInteractable component, move the ScaledHandGrabPosesKeys slider to the 0.8x and 1.2x positions.

Click the Add HandGrabPose Key at X scale button to create the keys.

(Optional) Tweak the fingers and surfaces in the newly created HandGrabPoint GameObjects under the HandGrabInteractable.

In the Inspector, move the ScaledHandGrabPosesKeys slider while watching the scene view to ensure that the transitions between the different hand sizes are smooth.

Back to top

Mirror a pose

You can mirror a HandGrabPose either manually or automatically so the pose exists for both hands.

To create a mirrored pose, do one of the following.

Option A: Mirror manually
Option B: Mirror automatically

Option A: Mirror manually

Duplicate the HandGrabInteractable.

In Inspector, change the Handedness property of each HandGrabPose GameObject.

Manually reposition each HandGrabPose so it aligns well.

Option B: Mirror automatically

Under Inspector, in the HandGrabInteractable component, ensure the HandGrabPoses you created are listed in that hand’s HandGrabInteractable.

In the component, click the Create Mirrored HandGrabInteractable button.

The HandGrabInteractable is duplicated while mirroring all the HandGrabPoses into a new HandGrabInteractable.

(Optional) If the mirroring for the generated poses happened on the wrong axis, move the transform of the HandGrabPose to the desired position.

Back to top

Related topics

To learn more about HandGrabInteractor, HandGrabInteractable, and HandGrabPoses, see Hand Grab Interactions.

For the Mac version of this tutorial, see Create a Hand Grab Pose (Mac).

Use Active State

Unity

All-In-One VR

Quest


In this tutorial, you learn how to use Interaction SDK’s Active State. Active State checks for a specified condition and returns a boolean. For example, Active State can tell you if your hands are active, if your controllers are grabbing, or if your hands are matching a specific pose. You can check elements like hands, controllers, GameObjects, or interactors for their Active State.

This tutorial includes two scenarios that you should complete in order.

Check state of one element
Check state of multiple elements

Before you begin

Complete Getting Started with Interaction SDK.

Check state of one element

To check the state of an element, you use the corresponding ...Active State component. For example, hands use HandActiveState, GameObjects use GameObjectActiveState, and interactors use InteractorActiveState. In this scenario, you’ll check the state of the grab interactor for the left hand and cause a cube to glow green whenever the state is true.

In Unity, in the Project window, search for HandGrabExamples. Ensure the search filter is set to either All or In Packages, since the default setting only searches your assets.

Open the HandGrabExamples scene. This scene already includes grabbable objects, so you’ll use this scene instead of a new scene to save you time.

Under Hierarchy, create a Cube GameObject by right-clicking in the Hierarchy and selecting 3D Object > Cube. This cube will visually display the active state by glowing green whenever the active state is true.

Under Hierarchy, select Cube.

Under Inspector, in the Transform component, set the Position values as follows:

X: 0.002
Y: 1
Z: 0.68

Set the Scale values to 0.1 for the X, Y, and Z axes.

The cube is now positioned just in front of the Dialog GameObject so you can see it while you’re interacting with the scene’s grabbable objects.

Add an Interactor Active State component by clicking the Add Component button at the bottom of the Inspector and searching for Interactor Active State. This component will track the state of the grab interactor.

In the Interactor Active State component, set Interactor to the HandGrabInteractor GameObject, which is located under OVRCameraRig > OVRInteraction > OVRHands > LeftHand > HandInteractorsLeft > HandGrabInteractor.

Set the Property field to Is Selecting, since for this tutorial you only want to detect when the hand is grabbing.

Interactor and Property properties

The Interactor and Property fields of the Interactor Active State component.

Add an Active State Debug Visual component, which changes the color of its GameObject when the assigned active state changes.

In the Active State Debug Visual component, set the Active State field to the Cube GameObject.

Set the Target field to the Cube GameObject so the cube is what changes color when the state changes.

Debug visual properties

The Active State and Target fields of the Active State Debug Visual component.

Prepare to launch your scene by going to File > Build Settings and clicking the Add Open Scenes button.

Your scene is now ready to build.

Select File > Build And Run, or if you have an Oculus Link connected, click Play.

Grabbing any item in the scene with your left hand activates the active state, causing the cube to glow green.

Left hand grabbing object, activating active state

Check state of multiple elements

In this scenario, you’ll add to the previous scenario by checking the active state of both hands, not just the left hand. To check the state of a group of elements, you use an Active State Group component. In an Active State Group, each element is evaluated, and then the group is evaluated using its boolean operator to return a final value of either true or false.
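If you later want to read an Active State from your own script rather than a debug visual, the hedged sketch below polls any component that implements the SDK’s IActiveState interface (such as an ActiveStateGroup) and logs changes; the class name is illustrative, and the field is assumed to be assigned in the Inspector with a component that implements IActiveState.

using UnityEngine;
using Oculus.Interaction;

// Illustrative example: logs whenever the assigned Active State flips between true and false.
public class ActiveStateLogger : MonoBehaviour
{
    // Assign a component that implements IActiveState (for example, an ActiveStateGroup).
    [SerializeField]
    private MonoBehaviour _activeStateComponent;

    private IActiveState ActiveState => _activeStateComponent as IActiveState;
    private bool _wasActive;

    private void Update()
    {
        bool isActive = ActiveState != null && ActiveState.Active;
        if (isActive != _wasActive)
        {
            Debug.Log($"Active State changed to {isActive}");
            _wasActive = isActive;
        }
    }
}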

Open the modified HandGrabExamples scene from the previous scenario.

Under Hierarchy, select Cube.

Under Inspector, add a second Interactor Active State component.

In that component, set the Interactor property to the HandGrabInteractor for the right hand, which is located under OVRCameraRig > OVRInteraction > OVRHands > RightHand > HandInteractorsRight > HandGrabInteractor.

Set the Property property to Is Selecting to detect when the hand is grabbing.

Interactor and Property properties

Add an Active State Group component, which checks the value of its active states and returns a final boolean value.

In the Active State Group component, add two empty elements to the Active States list.

Set the elements to the two Interactor Active State components, which are listed in the Inspector above the Active State Group component.

Set the Logic Operator field to OR so that either hand can activate the Active State.

Active State Group component

In the Active State Debug Visual component, set the Active State field to the Active State Group component. This causes the cube to monitor and display the final state of the Active State Group.

Active State property

Select File > Build And Run, or if you have an Oculus Link connected, click Play.

Grabbing any item in the scene with either hand or both hands activates the active state, causing the cube to glow green.

Both hands toggling Active State

Related topics

To learn about the concept of active state, see Active State Overview.

To understand how poses are detected, see Hand Pose Detection.

To learn how to make a custom hand pose, see Build a Custom Hand Pose.

Use the Data Property

Unity

All-In-One VR

Quest


In Interaction SDK, interactors and interactables both have an optional Data property that takes an object you can read from and write to. That object lets you share additional information with an interactor or interactable, like which hand grabbed an object. In this tutorial, you learn how to use Data to store and read data so a controller will vibrate when it grabs an object.

Example use case

Here’s one example of how you could use Data. Suppose you’re making a ping-pong paddle that should vibrate using haptics when it collides with the ball. The haptic code will live in the interactable (not the interactor), so it can have different parameters depending on the paddle material and collision strength. However, there’s one problem: you don’t know which controller should vibrate! To solve that, you can pass an object with information about the active controller to the interactor’s Data field, and then read that on the interactable side to know which controller should vibrate.

Write data to GameObject

In order to use Data, you need a GameObject to store information. To avoid having to set up controllers, you’ll add to the existing HandGrabExamples scene. The additions you make to the scene can be easily reverted once you complete the tutorial.

In Unity, open the HandGrabExamples scene.

Under Hierarchy, select the HandGrabInteractor located at OVRInteraction > OVRControllerDrivenHands > LeftControllerHand > ControllerHandInteractors > HandGrabInteractor.

View in Hierarchy

Under Inspector, in the Hand Grab Interactor component, set Data to the Hand Ref component. You do this because Hand Ref contains a public variable defining which hand is active, so linking that component to the Data field lets you access that information later.

Setting Data property

Repeat these steps for the HandGrabInteractor under RightControllerHand.

Read data from GameObject

Now that Data contains Hand Ref, the grabbable object can read Data to find out which controller is active and cause that controller to vibrate.

Under Hierarchy, create a copy of the SimpleGrab0NoPose GameObject (the blue cube).

Rename it to CopyCube so you don’t confuse the new cube with the original.

Using the Transform gizmos, reposition CopyCube so it’s separate from the original cube.

Under Project, open the Scripts folder.

Scripts folder

In the Scripts folder, create a new script called ReadData by right-clicking and selecting Create > C# Script.

Creating a new script

Open the ReadData script in your code editor.

Replace the script’s existing code with this code, which will cause whichever controller you grab with to vibrate.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Oculus.Interaction;
using Oculus.Interaction.Input;

public class ReadData : MonoBehaviour {
    // Controller handles used to trigger haptics on the matching hand.
    OVRInput.Controller leftController = OVRInput.Controller.LTouch;
    OVRInput.Controller rightController = OVRInput.Controller.RTouch;

    // Vibrates the controller that matches the grabbing hand.
    // Passing 0, 0 to OVRInput.SetControllerVibration stops the vibration.
    private void StartHaptics(Handedness handedness) {
     if (handedness == Handedness.Left && OVRInput.IsControllerConnected(leftController)) {
         OVRInput.SetControllerVibration(0.9f, 0.5f, leftController);
     }
     if (handedness == Handedness.Right && OVRInput.IsControllerConnected(rightController)) {
         OVRInput.SetControllerVibration(0.9f, 0.5f, rightController);
     }
 }

    // Hooked up to the Pointable Unity Event Wrapper's When Select event.
    // The event's Data field carries the HandRef assigned to the interactor's Data property.
    public void HandlePointerEvent(PointerEvent pointerEvent) {
     HandRef handData = (HandRef)pointerEvent.Data;
     Handedness handedness = handData.Handedness;
     StartHaptics(handedness);
 }

}

Under Hierarchy, select CopyCube > HandGrabInteractable. This is the object that will read Data.

Under Inspector, add the ReadData script to HandGrabInteractable.

In the Pointable Unity Event Wrapper component, in the When Select section, add ReadData’s HandlePointerEvent function.

Adding function to Select event

Prepare to launch your scene by going to File > Build Settings and clicking the Add Open Scenes button.

Your scene is now ready to build.

Select File > Build And Run.

When the scene loads, grab the cube using one or both of your controllers. When you grab the cube, the controller that grabbed it will vibrate, and the hand visual will shake slightly.

Related topics

To learn more about Pointer Events, which you used in this tutorial, see Pointer Events.

Throw an Object

Unity

All-In-One VR

Quest


In this tutorial, you learn how to throw a cube with Interaction SDK using physics, velocity, and hand grab interactions.

Before you begin

Complete Getting Started with Interaction SDK.

Add a grab interaction via Add an Interaction with Quick Actions (if you’re using v62+) or Create Hand Grab Interactions (if you’re using a legacy version).

Add components to cube

In order to make the cube throwable, it has to respond to physics and an interaction (which in this tutorial is hand grab).

Open the Unity scene where you completed Getting Started with Interaction SDK.

Under Hierarchy, add a cube GameObject by right-clicking and selecting 3D Object > Cube.

Under Inspector, add these components:

Physics Grabbable
RigidBody
Grabbable
Hand Grab Interactable

In the Physics Grabbable component, set the Grabbable field to Cube.

Set the RigidBody property to Cube.

Physics Grabbable component properties

In the Hand Grab Interactable component, set the Physics Grabbable property to Cube.

Physics grabbable property

Add velocity components to hands

To determine the velocity of your hand and apply that velocity to the cube once you throw it, you’ll use the HandVelocityCalculator prefab.

Under Hierarchy, select LeftHand.

In the Project window’s search bar, search for HandVelocityCalculator. Ensure the search filter is set to either All or In Packages, since the default setting only searches your assets.

Drag the HandVelocityCalculator prefab from the search results onto the LeftHand GameObject in the Hierarchy.

Under Inspector, in the Hand Pose Input Device component, set the Hand property to LeftHand.

Under Hierarchy, select the left hand’s HandGrabInteractor, which is located at LeftHand > HandInteractorsLeft > HandGrabInteractor.

Under Inspector, in the Hand Grab Interactor component, set the Velocity Calculator property to the Hand Velocity Calculator prefab that’s located at Left Hand > Hand Velocity Calculator.

Velocity calculator property

Repeat these steps for RightHand.

(Optional) Put the cube on top of a platform, like a table, so it doesn’t fall to the ground when you start the scene.

Prepare to launch your scene by going to File > Build Settings and clicking the Add Open Scenes button.

Your scene is now ready to build.

Select File > Build And Run, or if you have an Oculus Link connected, click Play.

You can now grab and throw the cube.

Map Controllers

Note To get started with controller tracking, see the Interaction SDK.

OVRInput exposes a unified input API for multiple controller types.

It is used to query virtual or raw controller state, such as buttons, thumbsticks, triggers, and capacitive touch data. It supports the Meta Quest Touch controllers.

For keyboard and mouse control, we recommend using the UnityEngine.Input scripting API (see Unity’s Input scripting reference for more information).

Mobile input bindings are automatically added to InputManager.asset if they do not already exist.

For more information, see OVRInput in the Unity Scripting Reference guide. For more information on Unity’s input system and Input Manager, see here: http://docs.unity3d.com/Manual/Input.html and http://docs.unity3d.com/ScriptReference/Input.html.

Requirements

Include an instance of OVRManager anywhere in your scene.

Call OVRInput.Update() and OVRInput.FixedUpdate() once per frame at the beginning of any component’s Update and FixedUpdate methods, respectively.
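As a minimal sketch of the second requirement, the hypothetical component below simply forwards the per-frame calls to OVRInput. OVRManager normally makes these calls for you, so a component like this is only needed if OVRInput is queried in a scene where OVRManager isn’t driving input.

using UnityEngine;

// Illustrative example: forwards the required per-frame calls to OVRInput.
// OVRManager normally does this automatically when it is present in the scene.
public class OVRInputUpdater : MonoBehaviour
{
    void Update()
    {
        OVRInput.Update();
    }

    void FixedUpdate()
    {
        OVRInput.FixedUpdate();
    }
}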

Touch Tracking

OVRInput provides touch position and orientation data through GetLocalControllerPosition() and GetLocalControllerRotation(), which return a Vector3 and Quaternion, respectively.

Controller poses are returned by the tracking system and are predicted simultaneously with the headset. These poses are reported in the same coordinate frame as the headset, relative to the initial center eye pose, and can be used for rendering hands or objects in the 3D world. They are also reset by OVRManager.display.RecenterPose(), similar to the head and eye poses.
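For example, a script that drives a GameObject with the left controller’s tracked pose might look like the hedged sketch below; the class name is illustrative, and the GameObject is assumed to be parented under the camera rig’s TrackingSpace so the local-space values line up.

using UnityEngine;

// Illustrative example: drives this Transform with the left Touch controller's tracked pose.
// Parent the GameObject under OVRCameraRig > TrackingSpace so the local-space pose
// returned by OVRInput matches the headset's tracking space.
public class LeftControllerFollower : MonoBehaviour
{
    void Update()
    {
        transform.localPosition = OVRInput.GetLocalControllerPosition(OVRInput.Controller.LTouch);
        transform.localRotation = OVRInput.GetLocalControllerRotation(OVRInput.Controller.LTouch);
    }
}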

Note: Meta Quest Touch controllers are differentiated with Primary and Secondary in OVRInput: Primary always refers to the left controller and Secondary always refers to the right controller.

OVRInput Usage

The primary usage of OVRInput is to access controller input state through Get(), GetDown(), and GetUp().

Get() queries the current state of a control.

GetDown() queries if a control was pressed this frame.

GetUp() queries if a control was released this frame.

Control Input Enumerations

There are multiple variations of Get() that provide access to different sets of controls. These sets of controls are exposed through enumerations defined by OVRInput as follows:

OVRInput.Button - Traditional buttons found on gamepads, controllers, and the back button.

OVRInput.Touch - Capacitive-sensitive control surfaces found on the controller.

OVRInput.NearTouch - Proximity-sensitive control surfaces found on the controller.

OVRInput.Axis1D - One-dimensional controls, such as triggers, that report a floating point state.

OVRInput.Axis2D - Two-dimensional controls, such as thumbsticks, that report a Vector2 state.

A secondary set of enumerations mirrors the first, defined as follows:

OVRInput.RawButton
OVRInput.RawTouch
OVRInput.RawNearTouch
OVRInput.RawAxis1D
OVRInput.RawAxis2D

The first set of enumerations provides a virtualized input mapping that is intended to assist developers with creating control schemes that work across different types of controllers. The second set of enumerations provides raw, unmodified access to the underlying state of the controllers. We recommend using the first set of enumerations, since the virtual mapping provides useful functionality, as demonstrated below.

Button, Touch, and NearTouch

In addition to traditional gamepad buttons, the controllers feature capacitive-sensitive control surfaces which detect when the user’s fingers or thumbs make physical contact (Touch), as well as when they are in close proximity (NearTouch). This allows for detecting several distinct states of a user’s interaction with a specific control surface. For example, if a user’s index finger is fully removed from a control surface, the NearTouch for that control will report false. As the user’s finger approaches the control and gets within close proximity to it, the NearTouch will report true prior to the user making physical contact. When the user makes physical contact, the Touch for that control will report true. When the user pushes the index trigger down, the Button for that control will report true. These distinct states can be used to accurately detect the user’s interaction with the controller and enable a variety of control schemes.
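For instance, a hedged sketch of reading the three distinct states for the right index trigger (the variable names are illustrative):

// Illustrative sketch: the three distinct states of the right index trigger.
bool nearTouching = OVRInput.Get(OVRInput.NearTouch.SecondaryIndexTrigger); // finger hovering near the trigger
bool touching = OVRInput.Get(OVRInput.Touch.SecondaryIndexTrigger);         // finger resting on the trigger
bool pressed = OVRInput.Get(OVRInput.Button.SecondaryIndexTrigger);         // trigger pressed as a button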

Example Usage

// returns true if the primary button (typically "A") is currently pressed.
OVRInput.Get(OVRInput.Button.One);

// returns true if the primary button (typically "A") was pressed this frame.
OVRInput.GetDown(OVRInput.Button.One);

// returns true if the "X" button was released this frame.
OVRInput.GetUp(OVRInput.RawButton.X);

// returns a Vector2 of the primary (typically the left) thumbstick's current state.
// (X/Y range of -1.0f to 1.0f)
OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick);

// returns true if the primary thumbstick is currently pressed (clicked as a button).
OVRInput.Get(OVRInput.Button.PrimaryThumbstick);

// returns true if the primary thumbstick has been moved upwards more than halfway.
// (Up/Down/Left/Right - interpret the thumbstick as a D-pad.)
OVRInput.Get(OVRInput.Button.PrimaryThumbstickUp);

// returns a float of the secondary (typically the right) index finger trigger's current state.
// (range of 0.0f to 1.0f)
OVRInput.Get(OVRInput.Axis1D.SecondaryIndexTrigger);

// returns a float of the left index finger trigger's current state.
// (range of 0.0f to 1.0f)
OVRInput.Get(OVRInput.RawAxis1D.LIndexTrigger);

// returns true if the left index finger trigger has been pressed more than halfway.
// (Interpret the trigger as a button.)
OVRInput.Get(OVRInput.RawButton.LIndexTrigger);

// returns true if the secondary gamepad button, typically "B", is currently touched by the user.
OVRInput.Get(OVRInput.Touch.Two);

In addition to specifying a control, Get() also takes an optional controller parameter. The list of supported controllers is defined by the OVRInput.Controller enumeration (for details, refer to OVRInput in the Unity Scripting Reference guide).

Specifying a controller can be used if a particular control scheme is intended only for a certain controller type. If no controller parameter is provided to Get(), the default is to use the Active controller, which corresponds to the controller that most recently reported user input. For example, a user may use a pair of controllers, set them down, and pick up an Xbox controller, in which case the Active controller will switch to the Xbox controller once the user provides input with it. The current Active controller can be queried with OVRInput.GetActiveController() and a bitmask of all the connected Controllers can be queried with OVRInput.GetConnectedControllers().

Example Usage:

// returns a float of the hand trigger's current state on the left controller.
OVRInput.Get(OVRInput.Axis1D.PrimaryHandTrigger, OVRInput.Controller.Touch);

// returns a float of the hand trigger's current state on the right controller.
OVRInput.Get(OVRInput.Axis1D.SecondaryHandTrigger, OVRInput.Controller.Touch);

Note: Oculus Touch controllers can be specified either as the combined pair (with OVRInput.Controller.Touch), or individually (with OVRInput.Controller.LTouch and RTouch). This is significant because specifying LTouch or RTouch uses a different set of virtual input mappings that allow more convenient development of hand-agnostic input code.

Example Usage:

// returns a float of the hand trigger's current state on the left controller.
OVRInput.Get(OVRInput.Axis1D.PrimaryHandTrigger, OVRInput.Controller.LTouch);

// returns a float of the hand trigger's current state on the right controller.
OVRInput.Get(OVRInput.Axis1D.PrimaryHandTrigger, OVRInput.Controller.RTouch);

This can be taken a step further to allow the same code to be used for either hand by specifying the controller in a variable that is set externally, such as on a public variable in the Unity Editor.

Example Usage:

// public variable that can be set to LTouch or RTouch in the Unity Inspector
public OVRInput.Controller controller;

// returns a float of the hand trigger's current state on the controller
// specified by the controller variable.
OVRInput.Get(OVRInput.Axis1D.PrimaryHandTrigger, controller);

// returns true if the primary button ("A" or "X") is pressed on the controller
// specified by the controller variable.
OVRInput.Get(OVRInput.Button.One, controller);

This is convenient since it avoids the common pattern of if/else checks for left- or right-hand input mappings.

Touch Input Mapping

The following diagrams illustrate common input mappings for the controllers. For more information on additional mappings that are available, refer to OVRInput in the Unity Scripting Reference guide.

Virtual Mapping (Accessed as a Combined Controller)

When accessing controllers as a combined pair with OVRInput.Controller.Touch, the virtual mapping closely matches the layout of a typical gamepad split across the left and right hands.

Virtual Mapping (Accessed as Individual Controllers)

When accessing the left or right controller individually with OVRInput.Controller.LTouch or OVRInput.Controller.RTouch, the virtual mapping changes to allow for hand-agnostic input bindings. For example, the same script can dynamically query the left or right controller depending on which hand it is attached to, and Button.One is mapped appropriately to either the A or X button.

Raw Mapping

The raw mapping directly exposes the controllers. The layout of the controllers closely matches the layout of a typical gamepad split across the left and right hands.

Add Controller Animations

Controller Animations

Meta XR Core SDK comes with controller animations for Meta headset controllers. Meta XR Core SDK is available individually or as part of the Meta XR All-in-One SDK with Unity Package Manager. The controller animations can be used whenever you don’t have custom models to represent controllers. They can also be used in tutorials to teach users how to interact with your app using their Meta controllers.

Controller Animations In A Game Tutorial

Let’s say you’re developing a racing game for Meta headsets. You can create a tutorial that teaches users how to open the door to their car, get inside the car, and grip the steering wheel before they begin driving around a track.

Open the Car Door

The player needs to hold the grip trigger and pull near the car door handle to open it. The tutorial displays an overlay that says they need to press the trigger button to open the door. The trigger button animation plays. The player presses the trigger button and pulls the car door open.

Get Into the Car

The player needs to get into the car. The tutorial displays an overlay that says they need to press the B button to get in the car. The B button controller animation plays. The player presses the B button and gets into their vehicle.

Grip the Steering Wheel

The player needs to hold both controllers and close their hands on the grip triggers to grip the steering wheel. The tutorial displays an overlay that says they need to press both grip trigger buttons to steer. Both grip trigger animations play. The player presses both grip trigger buttons.

Controller animations could also be used to remind players of the correct buttons for actions, like opening a car door, even after the tutorial.

View the Controller Animations

The animation clips can be used to animate the buttons, triggers, and thumbsticks on the controller models.

To view the controller animations:

In the Project view, expand the Meta > VR > Meshes folder.

Expand the controller folder.

Click on the controller animation.

In the Inspector view, select the Animation tab.

Watch the animation in the Preview view.

List of available controller animations

For each of the Meta headset devices, the following buttons have controller animations. To see more about mapping, see Touch Input Mapping in Map Controllers.

For each of the controller types, the following buttons are available:

button01
stickSE
trigger
button02
button03
button04
stickSW
grip
sticks
stickNW
stickNE
stickW
stickS
stickN
button01_neutral
button02_neutral
trigger_neutral
grip_neutral

Use the Animation Controllers

The animation clip controllers layer and blend the button, trigger, and thumbstick animations based on controller input.

In the Project view, expand the Meta > VR > Meshes folder.

Expand the controller folder.

Click on the Animation folder.

Example Implementation

OVRControllerPrefab uses these animation controllers. This prefab is a good example of how these animation controllers can be used with custom scripts.

To view the OVRControllerPrefab, go to the Project view and expand the Meta > VR > Prefabs folder.

Below is a simple example implementation that binds the animations to the controller buttons.

if (m_animator != null) {
    m_animator.SetFloat("Button 1", OVRInput.Get(OVRInput.Button.One, m_controller) ? 1.0f : 0.0f);
    m_animator.SetFloat("Button 2", OVRInput.Get(OVRInput.Button.Two, m_controller) ? 1.0f : 0.0f);
    m_animator.SetFloat("Button 3", OVRInput.Get(OVRInput.Button.Start, m_controller) ? 1.0f : 0.0f);

m_animator.SetFloat("Joy X", OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick, m_controller).x);
m_animator.SetFloat("Joy Y", OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick, m_controller).y);

m_animator.SetFloat("Trigger", OVRInput.Get(OVRInput.Axis1D.PrimaryIndexTrigger, m_controller));
m_animator.SetFloat("Grip", OVRInput.Get(OVRInput.Axis1D.PrimaryHandTrigger, m_controller));

See OVRControllerHelper.cs for a more detailed implementation of controller animations.

Runtime Controllers

Unity

All-In-One VR

Quest


The runtime controllers feature loads controller models dynamically from the Meta Quest system software to match the controllers in use. This feature is available only for Meta Quest apps built with the OpenXR backend. To switch from the legacy VRAPI backend to the OpenXR backend, see the Switch between OpenXR and Legacy VRAPI documentation.

Set Up Runtime Controller Prefab

The OVRRuntimeControllerPrefab renders the controller models dynamically and adds a default shader. The prefab uses the OVRRuntimeController.cs script that queries and renders controller models. Do the following to set up the runtime controller prefab:

Create a new scene or open an existing one from your project.

From the Project tab, search for OVRCameraRig and drag it in the scene. Skip this step if OVRCameraRig already exists in the scene.

From the Hierarchy tab, expand OVRCameraRig > TrackingSpace > LeftControllerAnchor and RightControllerAnchor to add the runtime controller prefabs under each hand anchor.

From the Project tab, in the search box, type OVRRuntimeControllerPrefab, and drag the prefab under LeftControllerAnchor and RightControllerAnchor.

Under LeftControllerAnchor, select OVRRuntimeControllerPrefab, and then from the Inspector tab, from the Controller list, select L Touch to map the controller. Repeat this step to map the right hand runtime controller to R Touch.

From the Shader list, select the shader, if you prefer a different shader than the default.

From the Hierarchy tab, select OVRCameraRig to open OVR Manager settings in the Inspector tab.

Under OVR Manager > Quest Features > General tab, from the Render Model Support list, select Enabled.

If your project contains OVRControllerPrefab, from the Hierarchy tab, expand OVRCameraRig > TrackingSpace > LeftControllerAnchor, select OVRControllerPrefab, and then from the Inspector tab, clear the checkbox to disable the prefab. Repeat this step to disable OVRControllerPrefab under the RightControllerAnchor.

Use APIs in Customized Scripts

To attach a customized script instead of the default OVRRuntimeController.cs script, use the following APIs to query and render the controller models. For more information about implementing the controller model APIs, refer to the OVRRuntimeController.cs script located in the Oculus/VR/Scripts/Util folder.

OVRPlugin.GetRenderModelPaths() - Returns a list of model paths that are supported by the runtime. These paths are defined in the OpenXR spec as: /model_fb/controller/left, /model_fb/controller/right, /model_fb/keyboard/remote, and /model_fb/keyboard/local. If a model is not supported by the runtime, its path is not included in the returned list.

OVRPlugin.GetRenderModelProperties(modelPath, ref modelProperties) - Returns properties for the given model when passed a model path. The properties will include the model key, which is used to load the model, as well as the model name, vendor id, and version.

OVRPlugin.LoadRenderModel(modelKey) - Loads the model using the model key. The model returned is a glTF binary (GLB) that includes a KTX2 texture, which can be loaded using the provided OVRGLTFLoader.

Set Up Hand Tracking

Unity

All-In-One VR

Quest


Note The recommended way to integrate hand tracking for Unity developers is to use the Interaction SDK, which provides standardized interactions and gestures. Building custom interactions without the SDK can be a significant challenge and makes it difficult to get approved in the store.

Data Usage Disclaimer: Enabling support for Hand tracking grants your app access to certain user data, such as the user’s estimated hand size and hand pose data. This data is only permitted to be used for enabling hand tracking within your app and is expressly forbidden for any other purpose.

Note: There is a known issue with the thumb trapezium bone (Thumb0) in the OpenXR backend.

Hand tracking enables the use of hands as an input method for the Meta Quest headsets. Using hands as an input method delivers a new sense of presence, enhances social engagement, and delivers more natural interactions. Hand tracking complements controllers and is not intended to replace controllers in all scenarios, especially with games or creative tools that require a high degree of precision.

We support the use of hand tracking on Windows through the Unity Editor when using a Meta Quest headset and Meta Quest Link. This functionality is only supported in the Unity Editor to help improve iteration time for Meta Quest developers. Check out the Hand Tracking Design resources that detail guidelines for using hands in virtual reality.

Get Started with Hands Setup

Note The recommended way to integrate hand tracking for Unity developers is to use the Interaction SDK, which provides standardized interactions and gestures. Building custom interactions without the SDK can be a significant challenge and makes it difficult to get approved in the store.

Apps render hands in the same manner as any other input device. Start by setting up the camera, select hands as the input device, and add the hand prefab to render hands in their default form. The following sections describe the basic setup:

Set Up Camera

Create a new scene or open an existing one from your project.

On the Project tab, search for OVRCameraRig, and then drag it in the scene. Skip this step if OVRCameraRig already exists in the scene.

On the Hierarchy tab, select OVRCameraRig to open the Inspector tab.

On the Inspector tab, go to OVR Manager > Tracking, and then in the Tracking Origin Type list, select Floor Level.

Select Hands as Input

On the Hierarchy tab, select OVRCameraRig to open the Inspector tab. Skip this step if you are continuing from the Set Up Camera section mentioned above.

On the Inspector tab, go to OVR Manager > Quest Features, and then in the Hand Tracking Support list, select Controllers and Hands. Select the Hands Only option to use hands as the input modality without any controllers.

When you select the Controllers and Hands or Hands Only option, Meta Quest automatically adds the corresponding hand tracking <uses-feature> and <uses-permission> elements to the AndroidManifest.xml file. When the app supports controllers and hands, android:required is set to false, which means that the app prefers to use hands if present, but continues to function with controllers in the absence of hands. When the app supports hands only, android:required is set to true. Oculus adds both of these tags automatically, and no manual update is required in the Android Manifest file.

In the Hand Tracking Version list, leave the selection as Default to use the latest version of hand tracking. (Hands 1.0 is now deprecated. Selecting 1.0 or 2.0 will force your app to Hands 2.0, potentially preventing automatic upgrades to future major version releases.)

Add Hand Prefab

On the Hierarchy tab, expand OVRCameraRig > TrackingSpace to add hand prefabs under the left and right hand anchors.

On the Project tab, search for OVRHandPrefab, and then drag it under each hand anchor on the Hierarchy tab.

On the Hierarchy tab, under RightHandAnchor, select OVRHandPrefab, and then on the Inspector tab, under OVR Hand, OVR Skeleton, and OVR Mesh, change the hand type to right hand. There’s no action needed for the left-hand prefab, as the hand type is set to the left hand automatically.

On the Hierarchy tab, select both OVRHandPrefab prefabs, and then on the Inspector tab, make sure the OVR Skeleton, OVR Mesh, and OVR Mesh Renderer checkboxes are selected to render hands in the app.

At this point, the app is able to render hands as an input device. To test hands, put on the headset, go to Settings > Device > Hands & Controllers, and turn on Hand Tracking. Leave Auto Switch Between Hands And Controllers selected so you can use hands when you put the controllers down. From Unity, build and run the app in the headset. After the app launches in the headset, put the controllers down and bring your hands forward to use them as input devices in the app.

Update Root Pose and Root Scale

To generate and render the animated 3D model of hands, OVR Mesh Renderer combines data returned by OVR Skeleton and OVR Mesh. OVR Skeleton returns bind pose, bone hierarchy, and capsule collider data. OVR Mesh loads a specified 3D asset from the Oculus runtime and exposes it as a Unity Engine mesh. We have preconfigured the recommended settings and explain them in detail below.

On the Hierarchy tab, expand OVRCameraRig > TrackingSpace, and then under LeftHandAnchor and RightHandAnchor, select both OVRHandPrefab prefabs.

When the OVRHandPrefab is parented to the left- and right-hand anchors under OVRCameraRig, leave the Update Root Pose checkbox unchecked so that the hand anchors can correctly position the hands in the tracking space. If it is placed independently of OVRCameraRig, select the checkbox to ensure that not only the fingers and bones, but also the actual root of the hand, is correctly updated.

To get an estimation of the user’s hand size via uniform scale against the reference hand model, make sure the Update Root Scale checkbox is selected. By default, the reference hand model is scaled to 100% (1.0). By enabling scaling, the hand model size is scaled either up or down based on the user’s actual hand size. The hand scale may change at any time, and we recommend that you scale the hand for rendering and interaction at runtime. If you prefer to use the default reference hand size, clear the checkbox.

Add Physics Capsules

The physics capsules represent the volume of the bones in the hand. They are used to trigger interactions with physical objects and generate collision events with other rigid bodies in the physics system.

On the Hierarchy tab, expand OVRCameraRig > TrackingSpace, and then select the OVR Hand prefab that you want to use for physics interaction.

On the Inspector tab, under OVR Skeleton, select the Enable Physics Capsules checkbox.

Repeat steps 1 and 2 for the other hand prefab.

Customize Display

Default hand models are skinned. A skinned mesh renderer surfaces properties that define how the model is rendered in the scene. Make sure the Skinned Mesh Renderer checkbox is selected. There are three broad categories that you can define to customize the hand model:

Materials define how hands appear in the app. Depending on the shader, configure the material that suits your content. For example, select either the metallic or specular workflow, set the rendering mode, define the base color, or adjust the smoothness. For more information about materials, go to the Creating and Using Materials guide in the Unity documentation.

Lighting specifies if and how the mesh renderer will cast and receive shadows.

Probes contains properties that set how the renderer receives light from the Light Probe system.

To define the skinned mesh renderer properties, do the following:

On the Hierarchy tab, expand OVRCameraRig > TrackingSpace, and then select the OVR Hand prefab from any one hand anchor.

On the Inspector tab, do the following:

Make sure the Skinned Mesh Renderer checkbox is selected.

Under Materials, enter the number of materials you want to use, and drag the material into the list of materials. By default, the size is set to one and the first element is always Element 0.

Under Lighting, in the Cast Shadows list, select how the renderer should cast shadows when a suitable light shines on it, and then select the Receive Shadows checkbox to let the mesh display any shadows that are cast upon it.

Under Probes, in the Light Probes list, select how the renderer should use interpolated Light Probes. By default, the renderer uses one interpolated light probe.

Repeat steps 1 and 2 for the other OVR Hand prefab.

To use a customized mesh, map your custom skeleton so that it is driven by our skeleton. For more information and sample usage, refer to the HandTest_Custom scene, which uses the OVRCustomHandPrefab_L and OVRCustomHandPrefab_R prefabs, as well as the OVRCustomSkeleton.cs script.

To enable wireframe skeleton rendering that renders bones with wireframe lines and assists with visual debugging, select the OVR Skeleton Renderer checkbox.

Get Started with Interactions

Note The recommended way to integrate hand tracking for Unity developers is to use the Interaction SDK, which provides standardized interactions and gestures. Building custom interactions without the SDK can be a significant challenge and makes it difficult to get approved in the store.

To build a rich experience when using hands as the input modality, you need to incorporate multiple interactions based on how your objects are placed. Near-field objects are within arm’s reach; direct interaction, such as poking or pinching, works well with these objects. Far-field objects are beyond arm’s reach and require raycasting, which directs a raycast at objects at a distance. It is very similar to Touch controller interaction.

Poking and pinching are real-time gestures and are intuitive for any user performing basic tasks such as setting focus, selecting, or manipulating an object in space. Poking requires you to extend and move your finger towards an object until the finger collides with the object in space. Pinching can be used with both direct and raycasting interaction methods. Move your hand towards the object to direct a raycast, and then pinch to select or grasp the object.

The OVR Skeleton and OVR Hand APIs provide information required to render a fully articulated representation of the user’s real-life hands in VR without the use of controllers, including:

Bone information
Hand and finger position and orientation
Pinch strength
Pointer pose for UI raycasts
Tracking confidence
Hand size
System gesture for opening the universal menu

The following sections describe implementation of several features to integrate hands as input:

Get Bone ID

OVR Skeleton contains a full list of bone IDs and methods to implement interactions, such as detecting gestures, calculating gesture confidence, targeting a particular bone, or triggering a collision event in the physics system.

Call the GetCurrentStartBoneId() and GetCurrentEndBoneId() methods to return the start and end bone IDs, which are mainly used to iterate over the subset of bone IDs present in the currently configured skeleton type.

Call the GetCurrentNumBones() and GetCurrentNumSkinnableBones() methods to return the total number of bones in the skeleton and the total number of bones that are skinnable. The difference between bones and skinnable bones is that bones also include anchors for the fingertips. However, those tip anchors are not actually part of the hand skeleton in terms of the mesh or animation, whereas the skinnable bones have the tips filtered out.
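As a hedged sketch of how this bone data might be consumed at runtime, the script below (the class name is illustrative) looks up the index fingertip from an OVRSkeleton component on the same GameObject, such as an OVRHandPrefab, and logs its position once the skeleton data is available.

using UnityEngine;

// Illustrative example: reads bone transforms from an OVRSkeleton component.
// Attach to the same GameObject as the OVRSkeleton (for example, OVRHandPrefab).
public class IndexTipReader : MonoBehaviour
{
    private OVRSkeleton _skeleton;

    void Start()
    {
        _skeleton = GetComponent<OVRSkeleton>();
    }

    void Update()
    {
        // The Bones list is populated once the skeleton initializes at runtime.
        if (_skeleton == null || _skeleton.Bones == null || _skeleton.Bones.Count == 0)
        {
            return;
        }

        foreach (var bone in _skeleton.Bones)
        {
            if (bone.Id == OVRSkeleton.BoneId.Hand_IndexTip)
            {
                Debug.Log($"Index fingertip position: {bone.Transform.position}");
            }
        }
    }
}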

Invalid = -1
Hand_Start = 0
Hand_WristRoot = Hand_Start + 0 // root frame of the hand, where the wrist is located
Hand_ForearmStub = Hand_Start + 1 // frame for user's forearm
Hand_Thumb0 = Hand_Start + 2 // thumb trapezium bone
Hand_Thumb1 = Hand_Start + 3 // thumb metacarpal bone
Hand_Thumb2 = Hand_Start + 4 // thumb proximal phalange bone
Hand_Thumb3 = Hand_Start + 5 // thumb distal phalange bone
Hand_Index1 = Hand_Start + 6 // index proximal phalange bone
Hand_Index2 = Hand_Start + 7 // index intermediate phalange bone
Hand_Index3 = Hand_Start + 8 // index distal phalange bone
Hand_Middle1 = Hand_Start + 9 // middle proximal phalange bone
Hand_Middle2 = Hand_Start + 10 // middle intermediate phalange bone
Hand_Middle3 = Hand_Start + 11 // middle distal phalange bone
Hand_Ring1 = Hand_Start + 12 // ring proximal phalange bone
Hand_Ring2 = Hand_Start + 13 // ring intermediate phalange bone
Hand_Ring3 = Hand_Start + 14 // ring distal phalange bone
Hand_Pinky0 = Hand_Start + 15 // pinky metacarpal bone
Hand_Pinky1 = Hand_Start + 16 // pinky proximal phalange bone
Hand_Pinky2 = Hand_Start + 17 // pinky intermediate phalange bone
Hand_Pinky3 = Hand_Start + 18 // pinky distal phalange bone
Hand_MaxSkinnable = Hand_Start + 19
// Bone tips are position only. They are not used for skinning but are useful for hit-testing.
// NOTE: Hand_ThumbTip == Hand_MaxSkinnable since the extended tips need to be contiguous
Hand_ThumbTip = Hand_Start + Hand_MaxSkinnable + 0 // tip of the thumb
Hand_IndexTip = Hand_Start + Hand_MaxSkinnable + 1 // tip of the index finger
Hand_MiddleTip = Hand_Start + Hand_MaxSkinnable + 2 // tip of the middle finger
Hand_RingTip = Hand_Start + Hand_MaxSkinnable + 3 // tip of the ring finger
Hand_PinkyTip = Hand_Start + Hand_MaxSkinnable + 4 // tip of the pinky
Hand_End = Hand_Start + Hand_MaxSkinnable + 5
Max = Hand_End + 0

Add Interactions

To standardize interactions across apps, OVR Hand provides access to the filtered pointer pose and detection for pinch gestures to ensure your app conforms to the same interaction models as Oculus system apps. Simple apps that only require point-and-click interactions can use the pointer pose to treat hands as a simple pointing device, with the pinch gesture acting as the click action.

Selection

Pinch

Pinch is the basic interaction primitive for UI interactions using hands. A successful pinch of the index finger can be considered the same as a normal select or trigger action for a controller, i.e., the action that activates a button or other control on a UI.

OVR Hand provides methods to detect whether a finger is currently pinching, the pinch's strength, and the confidence level of the finger pose. Based on the values returned, you can provide feedback to the user by changing the color of the fingertip, adding an audible pop when fingers have fully pinched, or integrating physics interactions based on the pinch status.

Call the GetFingerIsPinching() method and pass the finger constant to check whether it is currently pinching. The method returns a boolean that indicates the current pinch state. The finger constants are: Thumb, Index, Middle, Ring, and Pinky.
Call the GetFingerPinchStrength() method and pass the finger constant to check the progression of a pinch gesture and its strength. The value it returns ranges from 0 to 1, where 0 indicates no pinch and 1 is a full pinch with the finger touching the thumb.
Call the GetFingerConfidence() method and pass the finger constant to measure the confidence level of the finger pose. It returns the value as Low or High, indicating how much confidence the tracking system has in the finger pose.

var hand = GetComponent<OVRHand>();
bool isIndexFingerPinching = hand.GetFingerIsPinching(OVRHand.HandFinger.Index);
float ringFingerPinchStrength = hand.GetFingerPinchStrength(OVRHand.HandFinger.Ring);
OVRHand.TrackingConfidence confidence = hand.GetFingerConfidence(OVRHand.HandFinger.Index);

Pointer Pose

Deriving a stable pointing direction from a tracked hand is a non-trivial task involving filtering, gesture detection, and other factors. OVR Hand provides a pointer pose so that pointing interactions can be consistent across Meta Quest apps. It indicates the starting point and position of the pointing ray in the tracking space. We highly recommend that you use PointerPose to determine the direction the user is pointing for UI interactions.

Pointer 1

The pointer pose may or may not be valid, depending on the user's hand position, tracking status, and other factors. Check the IsPointerPoseValid property to verify whether the pointer pose is valid. If it is valid, use the ray for UI hit testing; otherwise, avoid using it to render the ray.
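As an illustration, the following is a minimal sketch (not from the original article) that reads PointerPose each frame and performs a raycast only when the pose is valid; the field names and the ray length are assumptions.

```
using UnityEngine;

// Minimal sketch: uses OVRHand.PointerPose for far-field pointing.
// Assumes "hand" is assigned to an OVRHandPrefab instance in the Inspector.
public class HandPointer : MonoBehaviour
{
    public OVRHand hand;
    public LineRenderer rayVisual;   // optional visual for the pointing ray
    public float maxRayLength = 3f;  // illustrative value

    void Update()
    {
        if (hand == null || !hand.IsTracked || !hand.IsPointerPoseValid)
        {
            if (rayVisual != null) rayVisual.enabled = false;
            return;
        }

        Transform pose = hand.PointerPose;
        var ray = new Ray(pose.position, pose.forward);

        // Hit-test against colliders (e.g., world-space UI with colliders attached).
        if (Physics.Raycast(ray, out RaycastHit hit, maxRayLength))
        {
            Debug.Log($"Pointing at {hit.collider.name}");
        }

        if (rayVisual != null)
        {
            rayVisual.enabled = true;
            rayVisual.positionCount = 2;
            rayVisual.SetPosition(0, pose.position);
            rayVisual.SetPosition(1, pose.position + pose.forward * maxRayLength);
        }
    }
}
```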

Track Hand Confidence

At any point, you may want to check whether your app detects hands. Check the IsTracked property to verify whether hands are currently visible and not occluded from being tracked by the headset. To check the level of confidence the tracking system has in the overall hand pose, read the HandConfidence property, which returns the confidence level as either Low or High. We recommend using hand pose data for rendering and interactions only when the hands are visible and the confidence level is High.
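For illustration, here is a minimal sketch (not from the original article) that gates rendering on tracking state and confidence; the renderer reference and the hide-the-mesh behavior are assumptions.

```
using UnityEngine;

// Minimal sketch: hides the hand mesh when tracking is lost or confidence is low.
// Assumes "hand" and "handRenderer" are wired to an OVRHandPrefab instance.
public class HandVisibilityGate : MonoBehaviour
{
    public OVRHand hand;
    public SkinnedMeshRenderer handRenderer;

    void Update()
    {
        bool reliable = hand != null
            && hand.IsTracked
            && hand.HandConfidence == OVRHand.TrackingConfidence.High;

        // Only render (and drive interactions from) high-confidence hand data.
        if (handRenderer != null)
        {
            handRenderer.enabled = reliable;
        }
    }
}
```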

Get Hand Scale

Read the HandScale property to get the scale of the user's hand relative to the default hand model scale of 1.0. The property returns a float value; for example, a value of 1.05 indicates the user's hand is 5% larger than the default hand model. The value may change at any time, so use it to scale the hand for rendering and interaction simulation at runtime.
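A minimal sketch (not from the original article) applying HandScale to a hand visual each frame; the target transform is an assumption.

```
using UnityEngine;

// Minimal sketch: keeps a hand visual scaled to the user's real hand size.
// Assumes "hand" points at an OVRHandPrefab instance and "handVisual" is the
// transform of the mesh rendered for that hand.
public class HandScaleFollower : MonoBehaviour
{
    public OVRHand hand;
    public Transform handVisual;

    void Update()
    {
        if (hand == null || handVisual == null || !hand.IsTracked)
            return;

        // HandScale is relative to the default model scale of 1.0.
        handVisual.localScale = Vector3.one * hand.HandScale;
    }
}
```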

Check System Gestures

The system gesture is a reserved gesture that allows users to transition to the Meta Quest universal menu. This behavior occurs when users hold their dominant hand up with the palm facing them and pinch with their index finger. The pinching fingers turn light blue as the pinch starts. When the user performs the gesture with the non-dominant hand, it triggers the Button.Start event. You can poll Button.Start to integrate any action for the button press event in your app logic.

To detect the dominant hand, check the IsDominantHand property. If it is true, check whether the user is performing a system gesture by reading the IsSystemGestureInProgress property. While a system gesture is in progress, we recommend providing visual feedback to the user, such as rendering the hand material with a different color or highlight, and suspending any custom gesture processing. This lets apps avoid triggering a gesture-based event when the user intends to transition to the Meta Quest universal menu.
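The following minimal sketch (not from the original article) combines these properties with a Button.Start poll; the material-swap feedback is an assumption.

```
using UnityEngine;

// Minimal sketch: reacts to the reserved system gesture and to Button.Start.
// Assumes "hand" is an OVRHandPrefab instance and the two materials are set
// in the Inspector.
public class SystemGestureFeedback : MonoBehaviour
{
    public OVRHand hand;
    public SkinnedMeshRenderer handRenderer;
    public Material normalMaterial;
    public Material highlightMaterial;

    void Update()
    {
        if (hand == null || handRenderer == null)
            return;

        bool gestureInProgress = hand.IsDominantHand && hand.IsSystemGestureInProgress;

        // Give visual feedback (and skip custom gesture logic) while the user is
        // performing the system gesture.
        handRenderer.sharedMaterial = gestureInProgress ? highlightMaterial : normalMaterial;

        // On the non-dominant hand, the system gesture raises Button.Start.
        if (OVRInput.GetDown(OVRInput.Button.Start))
        {
            Debug.Log("Start button event received (e.g., open the in-app menu).");
        }
    }
}
```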

Troubleshooting

The following questions help you troubleshoot issues you may encounter when rendering and integrating hands in your app:

Why don’t I see hands in my app?

There can be many reasons why hands are not rendering in your app. To begin with, verify that hand tracking is enabled on the device and that hands are working correctly in the system menus. Ensure that you have used OVRHandPrefab to add hands in the scene.

Why do I see blurry/faded hands?

Your hands may not be properly tracked because the cameras on the Meta Quest headset have a limited field of view. Keep your hands toward the front of the Meta Quest headset for better tracking.

Can I use another finger besides the index finger for the pinch gesture?

Yes. Use the OVRHand.GetFingerIsPinching() method from OVRHand.cs with the finger that you want to track instead. For more information about tracking fingers, go to the Add Interactions section.

Understanding Hand Tracking Limitations

Hand tracking for Meta Quest is currently an experimental feature with some limitations. While these limitations may be reduced or even eliminated over time, they are currently part of the expected behavior. For more specific issues, go to the Troubleshooting section.

Occlusion

Tracking may be lost, or hand confidence may become low when one hand occludes another. In general, an app should respond to this by fading the hands away.

Noise

Hand tracking can exhibit some noise. It may be affected by lighting and environmental conditions. You should take these conditions into consideration when developing algorithms for gesture detection.

Controllers + Hands

Controllers and hands are not currently tracked at the same time. Apps should support either hands or controllers, but not at the same time.

Lighting

Hand tracking has different lighting requirements than inside-out (head) tracking. In some situations, this could result in functional differences between head tracking and hand tracking, where one may work while the other has stopped functioning.

Tutorial - Receive Basic Input from Hand Tracking

Unity

All-In-One VR

PC VR

Quest


This tutorial describes the essential steps to:

Add OVRCameraRig to a Unity project.
Use the OVRHand and OVRSkeleton prefabs in a project.
Check whether the user is performing the pinch hand gesture.
Receive position and rotation data for a bone in the user's hand (the left hand's index tip).
Draw a curve (a quadratic Bézier curve) as a LineRenderer.
Attach the curve to the user's index tip to enable interaction with objects.
Apply physics capsule functionality to a hand to enable collision detection with a GameObject.

App running

App running on a Meta Quest 2

This tutorial is a primary reference for working with hand tracking quickly by using the Meta XR All-in-One SDK. For complete documentation on hand tracking functionality, see Set Up Hand Tracking. For a complete library that adds controller and hand interactions to your apps, see Interaction SDK Overview.

Prerequisites

Before reading further, ensure you have followed all the steps in the Create Your First VR App on Meta Quest Headset tutorial. That tutorial helps you set up a Unity project that can build and run on a Meta Quest headset. You are going to build a small hand tracking app on top of that.

Hand tracking basics

Hand tracking analyzes discrete hand poses and tracks the position and rotation of key points on the user's hands, such as the wrist, knuckles, and fingertips. Integrated hands can perform object interactions by using simple hand gestures such as point, pinch, un-pinch, scroll, and palm pinch. To enable collision detection, hand tracking also provides functionality such as physics capsules.

These are all available under the OVRHand, OVRSkeleton, and OVRBone classes which provide a unified input system for hand tracking under the umbrella of Meta XR Core SDK. For details, see OVRHand, OVRSkeleton, and OVRBone Class References.

Intuition and design considerations on pinching

Pinch is a simple but unique hand gesture because it provides a direct sense of tangibility. The user touches their thumb with their index finger in the real world, so they actually feel something while pinching. With the right set of interactions, this unique characteristic of pinch may help you target the plausibility illusion, which is especially significant for MR experiences where the virtual world must blend into the physical world. This is the illusion that the scenario depicted by your app is actually occurring.

In this app, a blue ribbon (LineRenderer) connects the user’s pinch point to the cube GameObject. There is no direct contact between the left hand and the cube. Only the right hand can collide with the cube.

Pinch gesture

Since the cube appears in the virtual world as a relatively small object compared with the hands, the user may more easily perceive it as “lightweight” and as something that does not cause much “drag” when being moved.

The thin ribbon detaches the cube from the user's direct pinch and offers a method of moving it from a distance. This purposefully pivots the interaction from a hand carrying the cube to a hand carrying a ribbon. Because a ribbon is perceived as a very lightweight object in the physical world, one whose weight you can barely feel when holding it, this interaction enables connecting to lightweight objects from a distance and even carrying them around. The curvy LineRenderer adds to the illusion of holding that lightweight ribbon more than, say, a metallic cylindrical pipe would as a picker.

Note: The tutorial uses Unity Editor version 2021.3.20f1 and Meta XR All-in-One SDK v59. Screenshots might differ if you are using other versions, but functionality is similar.

Step 1. Add OVRCameraRig to scene

If you haven’t already added OVRCameraRig to your project, follow these steps:

Meta XR Core SDK contains the OVRCameraRig prefab, which is a replacement for Unity's main camera. This means you can safely delete Unity's main camera from the Hierarchy tab.

The primary benefit of using OVRCameraRig is that it offers the OVRManager component (OVRManager.cs script). OVRManager provides the main interface to the VR hardware.

Follow this process:

Under the Hierarchy tab, right-click Main Camera, and select Delete.
In the Project tab, search for Camera Rig, and drag the OVRCameraRig prefab into your scene. Alternatively, drag it into the Hierarchy tab.

Add OVRCameraRig

Ensure your project’s hierarchy looks like this:

OVRCameraRig Hierarchy

Select OVRCameraRig in the Hierarchy tab. With the OVRCameraRig selected in the Inspector, under the OVR Manager component, ensure your headset is selected under Target Devices.

OVRCameraRig Devices

Step 2. Set up scene to enable hand tracking

On the Hierarchy tab, select OVRCameraRig. On the Inspector tab, go to OVR Manager > Quest Features. In the Tracking section, select Eye Level for Tracking Origin Type. Ensure Use Position Tracking is selected.

Tracking

In the Hand Tracking Support list, select Controllers and Hands or Hands Only (as you won’t use any controllers in this tutorial). In the Hand Tracking Frequency list, select MAX or HIGH. In the Hand Tracking Version, select V2.

Hand tracking support

Step 3. Add hand prefabs

On the Hierarchy tab, expand OVRCameraRig > TrackingSpace to add hand prefabs under the left and right hand anchors.

Hierarchy 1

On the Project tab, search for the OVRHand Prefab.

Drag a copy of the OVRHand Prefab into the Assets folder.

OVR Hand Prefab

Drag the prefab from the Assets folder on each hand anchor on the Hierarchy tab. Do this twice, once per hand.

Hierarchy 2

On the Hierarchy tab, under RightHandAnchor, select OVRHandPrefab, and then on the Inspector tab, change its name to OVRHandPrefabRight. Under the OVR Hand, OVR Skeleton, and OVR Mesh components, change the hand type to Hand Right. Under OVR Skeleton, check Update Root Scale, Enable Physics Capsules, and Apply Bone Translations.

Right hand prefab

Similarly, make sure your settings for the left-hand prefab are the following. (The left-hand option is preselected and for this tutorial you don’t need to enable a physics capsule for the left hand.)

Left hand prefab

On the Hierarchy tab, first select the left-hand OVRHand prefab, and then on the Inspector tab, make sure the OVR Skeleton, OVR Mesh, and OVR Mesh Renderer checkboxes are selected. Do the same for the right-hand OVRHand prefab.

Step 4. Set up Cube GameObject

Select the Cube GameObject under the Hierarchy tab. Change its scale to [0.02, 0.02, 0.02].

Cube scale

Click Add Component, search for Rigidbody, and select it. This enables collision detection against the right hand. Disable the Use Gravity checkbox in the Rigidbody component.
Click Add Component, search for LineRenderer, and select it. This creates the line that represents the ribbon connecting the cube to the left hand's index tip.
In the Line Renderer component, apply the following settings: update the width of the ribbon (anything from about 0.03 m down to 0.003 m will do; the smaller the better), avoid casting shadows and generating light data, and add a material that already exists in your project.

Line Renderer

Step 5. Add new script to manage hand tracking

Under the Project tab, navigate to the Assets folder. Right-click, select Create > Folder, name it Scripts, and open this new folder.
Right-click, select Create > C# Script, and name it HandTrackingScript.
Drag the new script onto the Cube GameObject under the Hierarchy tab.
Select the Cube GameObject under the Hierarchy tab. In the Inspector, double-click HandTrackingScript.cs to open it in your IDE of preference.

Step 6. Implement HandTrackingScript.cs

This script manages the hand interaction.

Add variables and objects

In your HandTrackingScript class, add the following:

public Camera sceneCamera;
public OVRHand leftHand;
public OVRHand rightHand;
public OVRSkeleton skeleton;

private Vector3 targetPosition;
private Quaternion targetRotation;
private float step;
private bool isIndexFingerPinching;

private LineRenderer line;
private Transform p0;
private Transform p1;
private Transform p2;

private Transform handIndexTipTransform;

These represent the following:

These represent the following:

sceneCamera: The camera that the scene uses.
leftHand and rightHand: The left and right hand prefabs.
skeleton: The skeleton (used for retrieving the bones' position and rotation data).
targetPosition and targetRotation: The position and rotation used for animating the cube while it is attached to the ribbon.
step: Helps with the animation (Time.deltaTime).
line: The LineRenderer that represents the ribbon.
p0, p1, and p2: The starting, bend, and end points' transforms, used to draw the ribbon LineRenderer.
handIndexTipTransform: The transform of the left hand's index fingertip.

Set initial cube's position in front of user at Start()

In your Start() function, define the cube's initial position and assign the LineRenderer component to line.

void Start()
{
    transform.position = sceneCamera.transform.position + sceneCamera.transform.forward * 1.0f;
    line = GetComponent<LineRenderer>();
}

This initially places the cube GameObject in front of the user at a distance of one meter.

Create helper function to place and rotate the cube smoothly

This helper function animates the cube's repositioning and reorientation. Create a new pinchCube() function and add the following lines.

void pinchCube()
{
    targetPosition = leftHand.transform.position - leftHand.transform.forward * 0.4f;
    targetRotation = Quaternion.LookRotation(transform.position - leftHand.transform.position);

    transform.position = Vector3.Lerp(transform.position, targetPosition, step);
    transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, step);
}

This code smoothly places and rotates the cube next to the user's left hand at a distance of around 0.4 meters. The actual position of the hand is at its wrist. For more information, see Unity's documentation on Quaternion.LookRotation, Vector3.Lerp, and Quaternion.Slerp.

Create helper function to draw the ribbon

This function draws the LineRenderer ribbon as a quadratic Bézier curve that consists of 200 segments. A quadratic Bézier curve draws a path as a function B(t), given three points: P0 (start point), P1 (in-between point), and P2 (end point).

The formula is: B(t) = (1 - t)^2 * P0 + 2 * (1 - t) * t * P1 + t^2 * P2.

where:

B, P0, P1, and P2 are Vector3 values and represent positions. t represents the portion of the line drawn so far, so it is between 0 and 1 (inclusive). For example, if t = 0.5, then B(t) is the midpoint of the curve from P0 to P2 (pulled toward P1), and half of the line is drawn.

By using this method, you can start by calculating B(0) and, then, gradually iterate over each segment to draw it.

Add the following to your HandTrackingScript class:

void DrawCurve(Vector3 point_0, Vector3 point_1, Vector3 point_2)
{
    line.positionCount = 200;
    Vector3 B = new Vector3(0, 0, 0);
    float t = 0f;

    for (int i = 0; i < line.positionCount; i++)
    {
        t += 0.005f;
        B = (1 - t) * (1 - t) * point_0 + 2 * (1 - t) * t * point_1 + t * t * point_2;
        line.SetPosition(i, B);
    }
}

The DrawCurve() function requires the start (point_0), the bending point (point_1), and the end point (point_2) Vector3 positions of the line as parameters. It loops until every one of the 200 segments renders.

Confirm left hand is tracked and user is pinching In your Update() function, add the following:

void Update()
{
    step = 5.0f * Time.deltaTime;

    if (leftHand.IsTracked)
    {
        isIndexFingerPinching = leftHand.GetFingerIsPinching(OVRHand.HandFinger.Index);
        if (isIndexFingerPinching)
        {
            line.enabled = true;
            // Animate cube smoothly next to left hand
            pinchCube();

            // ...

        }
        else
        {
            line.enabled = false;
        }
    }
}

This snippet:

Defines your step value to animate the cube. For details, see Unity's documentation on Time.deltaTime.
Checks whether the leftHand OVRHand object is tracked by reading the IsTracked property, which returns true when the hand is tracked.
Calls the GetFingerIsPinching(OVRHand.HandFinger.Index) function to confirm that the user is pinching with their index finger, and assigns the returned value to the boolean variable isIndexFingerPinching. This function is defined as bool OVRHand.GetFingerIsPinching(HandFinger finger). For details on the rest of the fingers defined in the HandFinger enum, see OVRHand.
Enables the line LineRenderer if the user is pinching, making it visible to the user. Otherwise, it disables it.
Calls the pinchCube() helper function to place and rotate the cube (if the user is pinching).

Retrieve transform data regarding left hand skeleton's bones

Amend your Update() function to the following. (Notice the new lines under the pinchCube() invocation.)

void Update()
{
    step = 5.0f * Time.deltaTime;

    if (leftHand.IsTracked)
    {
        isIndexFingerPinching = leftHand.GetFingerIsPinching(OVRHand.HandFinger.Index);

        if (isIndexFingerPinching)
        {
            line.enabled = true;

            pinchCube();

            // New lines added below this point
            foreach (var b in skeleton.Bones)
            {
                if (b.Id == OVRSkeleton.BoneId.Hand_IndexTip)
                {
                    handIndexTipTransform = b.Transform;
                    break;
                }
            }

            p0 = transform;
            p2 = handIndexTipTransform;
            p1 = sceneCamera.transform;
            p1.position += sceneCamera.transform.forward * 0.8f;

            DrawCurve(p0.position, p1.position, p2.position);
           // New lines added above this point
        }
        else
        {
            line.enabled = false;
        }
    }
}

The added code does the following:

Iterates through all the bones of the OVRSkeleton object.

This represents the skeleton of the left hand.

Note: skeleton.Bones is an OVRBone list that stores all bones by their Id. For definitions of all the bone IDs, see enum OVRSkeleton.BoneId in OVRSkeleton.

Checks whether the OVRSkeleton.BoneId.Hand_IndexTip bone ID is found. This represents the left hand's index tip. When found, stores that bone's transform data in the variable handIndexTipTransform.
Stores the cube's transform in p0 and the transform of the left hand's index tip in p2.
Assigns a transform relating to a position in front of the user's head pose to p1. This serves as the point that bends the LineRenderer.
Calls the DrawCurve() helper function to draw the ribbon LineRenderer.

Note: Once comfortable with this tutorial, it is recommended to experiment with the p1 definition and assign different values to it relating to the hand or the cube.

Step 7. Update Cube GameObject and run app

Open Unity Editor, and select the Cube GameObject under the Hierarchy tab. Under Inspector, select the CenterEyeAnchor camera, which always coincides with the average of the left and right eye poses. Select the rest of the prefabs as follows.

Script component settings

Save your project, and click File > Build And Run. Put on your headset and ensure that under Settings > Device > Hands & Controllers, the Hand Tracking option is on. When the app starts, move your head, then extend and move your hands so hand tracking engages. When you see the hands rendering, pinch with the left hand and carry the cube by the ribbon. Gently touch the cube with your right palm and give it a light push.

Reference script

For future quick reference, here is the complete code in HandTrackingScript.cs.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class HandTrackingScript : MonoBehaviour
{
public Camera sceneCamera;
public OVRHand leftHand;
public OVRHand rightHand;
public OVRSkeleton skeleton;

private Vector3 targetPosition;
private Quaternion targetRotation;
private float step;
private bool isIndexFingerPinching;

private LineRenderer line;
private Transform p0;
private Transform p1;
private Transform p2;

private Transform handIndexTipTransform;

// Start is called before the first frame update
void Start()
{

    // Set initial cube's position in front of user
    transform.position = sceneCamera.transform.position + sceneCamera.transform.forward * 1.0f;

    // Assign the LineRenderer component of the cube GameObject to line
    line = GetComponent<LineRenderer>();
}

// Update is called once per frame
void Update()
{
    // Define step value for animation
    step = 5.0f * Time.deltaTime;

    // If left hand is tracked
    if (leftHand.IsTracked)
    {
        // Gather info whether left hand is pinching
        isIndexFingerPinching = leftHand.GetFingerIsPinching(OVRHand.HandFinger.Index);

        // Proceed only if left hand is pinching
        if (isIndexFingerPinching)
        {
            // Show the Line Renderer
            line.enabled = true;

            // Animate cube smoothly next to left hand
            pinchCube();

            // Loop through all the bones in the skeleton
            foreach (var b in skeleton.Bones)
            {
                // If the bone is the hand index tip
                if (b.Id == OVRSkeleton.BoneId.Hand_IndexTip)
                {
                    // Store its transform and break the loop
                    handIndexTipTransform = b.Transform;
                    break;
                }
            }

            // p0 is the cube's transform and p2 the left hand's index tip transform
            // These are the two edges of the line connecting the cube to the left hand index tip
            p0 = transform;
            p2 = handIndexTipTransform;

            // This is a somewhat random point between the cube and the index tip
            // Need to reference as the point that "bends" the curve
            p1 = sceneCamera.transform;
            p1.position += sceneCamera.transform.forward * 0.8f;

            // Draw the line that connects the cube to the user's left index tip and bend it at p1
            DrawCurve(p0.position, p1.position, p2.position);
        }
        // If the user is not pinching
        else
        {
            // Don't display the line at all
            line.enabled = false;
        }
    }

}

void DrawCurve(Vector3 point_0, Vector3 point_1, Vector3 point_2)
/***********************************************************************************
# Helper function that draws a curve between point_0 and point_2, bending at point_1.
# Gradually draws a line as Quadratic Bézier Curve that consists of 200 segments.
#
# Bézier curve draws a path as function B(t), given three points P0, P1, and P2.
# B, P0, P1, P2 are all Vector3 and represent positions.
#
# B = (1 - t)^2 * P0 + 2 * (1-t) * t * P1 + t^2 * P2
#
# t is 0 <= t <= 1 representing size / portion of line when moving to the next segment.
# For example, if t = 0.5f, B(t) is halfway from point P0 to P2.
***********************************************************************************/
{
    // Set the number of segments to 200
    line.positionCount = 200;
    Vector3 B = new Vector3(0, 0, 0);
    float t = 0f;

    // Draw segments
    for (int i = 0; i < line.positionCount; i++)
    {
        // Move to next segment
        t += 0.005f;

        B = (1 - t) * (1 - t) * point_0 + 2 * (1 - t) * t * point_1 + t * t * point_2;
        line.SetPosition(i, B);
    }
}

void pinchCube()
// Places and rotates cube smoothly next to user's left hand
{
    targetPosition = leftHand.transform.position - leftHand.transform.forward * 0.4f;
    targetRotation = Quaternion.LookRotation(transform.position - leftHand.transform.position);

    transform.position = Vector3.Lerp(transform.position, targetPosition, step);
    transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, step);
}

}

Use Wide Motion Mode

Unity

All-In-One VR

Quest


Wide Motion Mode (WMM) allows you to track hands and display plausible hand poses even when the hands are outside the headset’s field of view, improving social presence and wide motion tracking. This is achieved by running Inside Out Body Tracking (IOBT) under the hood and using the estimated hand position when hand tracking is lost.

Benefits of WMM

Improved social and self presence by providing plausible hand poses even when your hands are outside your FOV.
More reliable wide-motion interactions (arm swings, backpacks, throws, and so on).
Reduced gesture and interaction failures due to tracking loss when the user turns their head.
Arm-swing-based locomotion, where users use their arms to move around the game, as in Echo VR.

Known limitations

While the system always provides a hand pose, it will not be accurate when hand tracking is lost: the system uses body tracking data for the approximate position and the last tracked hand pose as the pose itself.
You may also see a slight regression in position accuracy due to smoothing when transitioning from a body-tracking-based hand pose to a hand-tracking-based hand pose as hand tracking is reestablished.
WMM can be less robust in extremely low light. In low-light conditions, the system reverts to standard hand tracking.

Compatibility

Hardware compatibility: Supported devices are Quest 3 and all future devices.
Software compatibility: Unity 2021 LTS and above; Integration SDK version 62 and above.
Feature compatibility:
IOBT/WMM will not run when Passthrough and Full Body Synthesis (FBS) are running along with FMM or Multimodal.
When using Inside-Out Body Tracking, the hand pose exposed via MSDK already includes WMM-like fused hands. If you plan to use MSDK and consume hands from ISDK/the hands API, turn on WMM to reduce hand pose mismatch between MSDK and the hands API.

Setup

This feature will be integrated with the version of the Oculus Integration SDK for Unity provided to you. To enable WMM, either use the editor or the provided C# scripts. A simple WMM sample scene is in Scenes\OculusInternal\HandTrackingWideMotionMode. In the scene, you can use a simple UI to turn the feature on and off.

Option 1: Editor

Once you have imported that project into your OVRPlugin package, under Inspector, in OVRManager, check the Wide Motion Mode Hand Poses Enabled box.

Checked WMM hand poses box

In the Quest Features section of OVRManager, set Body Tracking Support to Required.

Body tracking set to required

In Project Settings > Player > Android > Configuration, set the Scripting Backend to “IL2CPP” and the Target Architectures to ARM64.

Setting scripting backend and target architectures

Option 2: C# Scripts

To turn on WMM programmatically, use this function on OVRPlugin:

void SetWideMotionModeHandPoses(bool enabled);

To query whether WMM is running, use this function on OVRPlugin:

bool IsWideMotionModeHandPosesEnabled();
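For illustration, a minimal sketch (not from the article) calling these OVRPlugin functions; toggling at startup is an assumption, and the functions require an SDK version that includes this feature.

```
using UnityEngine;

// Minimal sketch: enables Wide Motion Mode hand poses at startup and logs the result.
// Assumes the two OVRPlugin functions named above are available in your SDK version.
public class WideMotionModeToggle : MonoBehaviour
{
    void Start()
    {
        OVRPlugin.SetWideMotionModeHandPoses(true);
        Debug.Log($"WMM hand poses enabled: {OVRPlugin.IsWideMotionModeHandPosesEnabled()}");
    }
}
```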

Test WMM via adb

You can also evaluate WMM on your app without the API calls in code, through adb commands.

Before you begin

In PowerShell or a terminal, run adb devices to ensure adb is installed and the device is discoverable via adb. You should see a response like this.

```
List of devices attached
230YC56D0S002V  device
```

Steps

To enable WMM, before starting the app, run these commands:

adb shell setprop debug.oculus.forceEnablePerformanceFeatures BODY_TRACKING
adb shell setprop debug.bodyapi.allowhandoverride 1
adb shell setprop debug.bodyapi.handoverridetype 3

To toggle WMM fusion logic before starting the app or while you're in the app:

Off: adb shell setprop debug.bodyapi.handoverridetype 0
On: adb shell setprop debug.bodyapi.handoverridetype 3

Troubleshooting

How can I confirm WMM is running on my headset? In your headset, you should see improved tracking of hands outside of your headset’s field of view.

Can I evaluate the feature on my headset without changing my code? Yes, you can test WMM via adb commands to try the feature without changing your code.

Use Capsense

Unity

All-In-One VR

Quest


Capsense provides logical hand poses when using controllers. It uses tracked controller data to provide a standard set of hand animations and poses for supported Oculus controllers. For consistency, we provide the Oculus Home hand and controller visuals. Capsense supports two styles of hand poses.

Natural hand poses: These are designed to look as if the user is not using a controller and is interacting naturally with their hand.
Controller hand poses: These are designed to be rendered while also rendering the controller. We provide different shapes depending on the controller type. Currently, Capsense supports the Quest 2, Quest 3, and Quest Pro controllers.

Benefits of Capsense

Benefit from a best-in-class logical hand implementation and future improvements instead of investing in a custom implementation.

Known limitations

Due to limitations in the current plugin implementation in Unity, if hand tracking is enabled and the hand is not on the controller, the hand poses will use the hand tracking data.
When using Link on PC, pose data for controllers is unavailable when you're not actively using them (such as when they're lying on a table).
In Unity, due to the object hierarchy on the OVRCameraRig Tracking Space, it is non-trivial to provide the hand data and the controller data simultaneously with the legacy anchors. This has required us to create multiple new anchors on the Tracking Space and to add gating logic on the controller and hands prefabs. The gating logic determines whether the prefabs should render.

Compatibility

Hardware compatibility: Supported devices are Quest 2, Quest Pro, Quest 3, and all future devices.
Software compatibility: Unity 2021 LTS and above; Integration SDK version 62 and above.
Feature compatibility: Fully compatible with Wide Motion Mode (WMM). Using Capsense for hands together with body tracking through MSDK will work simultaneously, but they have different implementations of converting controller data to hands, so the position and orientation of joints will be slightly different.

Setup

To leverage the TrackingSpace on the OVRCameraRig prefab, you need to add some prefabs.

Open your Unity scene.

Under Hierarchy, add an OVRControllerPrefab as a child of RightControllerInHandAnchor.

Add another OVRControllerPrefab as a child of LeftControllerInHandAnchor.

Add an OVRHandPrefab as a child of RightHandOnControllerAnchor.

Add another OVRHandPrefab as a child of LeftHandOnControllerAnchor.

For each of the four prefabs you just added, under Inspector set Show State to Controller In Hand.

Show State Property set to Controller in Hand

The Show State Property set to Controller in Hand.

Sample scene

A Unity sample has been provided in the SDK package for using this feature. It is titled ControllerDrivenHandPoses.

On the OVRManager script contained on the OVRCameraRig prefab, there is a new enum selector named Controller Driven Hand Poses Type. It contains three options:

None: Hand poses will only ever be populated with data from the tracked cameras, if hand tracking is active.
Conforming To Controller: Hand poses generated from controller data will be located around the controller model.
Natural: Poses generated with this option will be positioned in a more natural state, as if the user were not holding a controller.

Controller Driven Hand Poses Type

The Controller Driven Hand Poses Type property.

Scripts

There are four new functions to control Capsense via the C# scripts provided in the integration:

void SetControllerDrivenHandPoses(bool): Sets whether the system can provide hand poses using controller data.
void SetControllerDrivenHandPosesAreNatural(bool): Sets the application's request to provide the controller-driven hand poses as natural instead of wrapped around the controller.
bool IsControllerDrivenHandPosesEnabled(): Queries whether the system can use controller data for hand poses.
bool AreControllerDrivenHandPosesNatural(): Queries whether the poses supplied from controller data are in a natural form instead of wrapped around a controller.
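As an illustration, a minimal sketch (not from the article) that requests natural controller-driven hand poses at startup. The article does not name the class that exposes these functions, so calling them on OVRPlugin is an assumption.

```
using UnityEngine;

// Minimal sketch: requests natural controller-driven hand poses and logs the result.
// The containing class for these functions is assumed to be OVRPlugin here.
public class CapsenseSetup : MonoBehaviour
{
    void Start()
    {
        OVRPlugin.SetControllerDrivenHandPoses(true);
        OVRPlugin.SetControllerDrivenHandPosesAreNatural(true);

        Debug.Log("Controller-driven hand poses enabled: " +
                  $"{OVRPlugin.IsControllerDrivenHandPosesEnabled()}, " +
                  $"natural: {OVRPlugin.AreControllerDrivenHandPosesNatural()}");
    }
}
```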

Prefab changes

OVRControllerHelper and OVRHand both now contain a ShowState enum. The available options are:

Always: The object will not be automatically disabled based on controller and hand state.
Controller in Hand or No Hand: The object will be disabled if the controller is not in the user's hand, or if hand tracking is disabled entirely.
Controller in Hand: The object will be disabled if the controller is not currently in a user's hand.
Controller Not in Hand: The object will be disabled if the controller is in a user's hand. This is used for the detached controller situation, for example, a controller sitting on a desk.
No Hand: This disables the rendering of the object if hand tracking is enabled and there is a hand.

OVRControllerPrefab: OVRControllerHelper

Show State
Show When Hands Are Powered By Natural Controller Poses: A checkbox that controls whether the controllers can be rendered even if the hand poses are in the natural state. This is used for the detached controller state.

OVRHandPrefab: OVRHand

Show State

New anchors for the Capsense feature

LeftControllerInHandAnchor and RightControllerInHandAnchor: These anchors are under their respective LeftHandAnchor and RightHandAnchor parents. They let you render the hand and controller at the same time while the controller is in the user's hand. To do that, add an OVRControllerPrefab to each anchor, make sure Controller matches the hand, and set Show State to Controller In Hand.

Anchor setup for controllers

LeftHandOnControllerAnchor and RightHandOnControllerAnchor: These anchors are under their respective LeftControllerInHandAnchor and RightControllerInHandAnchor parents. They let you render the hand and controller at the same time while the controller is in the user's hand. To do that, add an OVRHandPrefab to each anchor, make sure Hand Type matches the hand, and set Show State to Controller In Hand.

Anchor setup for hands

Troubleshooting

How can I confirm Capsense is running on my headset? In your headset, you should see either hands instead of controllers or hands holding controllers. Also, hand pose data should be provided while the hands are active with the controllers.

Can I evaluate the feature on my headset without changing my code? No, using Capsense requires some code changes.

Use Simultaneous Hands and Controllers (Multimodal)

Multimodal provides simultaneous tracking of both hands and controllers. It also indicates whether the controller(s) are in hand or not. Multimodal allows users to enjoy the benefits of both worlds: they can use hands for immersion, and controllers for accuracy and haptics. When enabled, Multimodal overrides other transition methods, including auto transitions and double tap.

Benefits of Multimodal

Hand + controller gameplay: Use a controller in one hand as a tool or weapon (high accuracy, haptics, wide FoV), while keeping your other hand free to interact with objects or cast spells (high immersion).
Single-controller gameplay: If your experience only requires one controller for most interactions, allow users to pick up only one controller and use their other hand for immersion and casual interactions when needed (e.g., use a controller as a racket, and use your free hand to pick up the ball and interact with menus).
Use hands and controllers interchangeably for comfort: Use one hand for direct interactions with controls, while the other hand holds a controller as a “remote control” for low-effort indirect interactions.
Instant transition between hands and controllers: With hand tracking active, we can instantly identify when the controllers are no longer in the hand. This minimizes the need for manual transitions and solves the problems of slow or failed auto transitions.
Find my controllers: With controllers now tracked while using hands, the app can show the user where the controllers are when they want to pick them up. This allows a smoother transition back to controllers without having to break immersion by turning on passthrough or taking off the headset.

Known limitations

The ‘in hand’ signal is based on various imperfect signals, including hand and controller pose and controller signals. As a result, the system may indicate that a controller is not in hand in certain scenarios where tracking is lost or wrong, or when controllers are held very still for some time. It is recommended to design with that limitation in mind, e.g., avoid dropping objects from the hand due to false short transitions from controllers to hands.
When the pause function is called, the application switches back into the “non-simultaneous” mode that traditional hands + controllers apps run in, where the user can use either hands or controllers at a given time. The tracking system may take a few seconds to recognize and decide on the correct input to enable, depending on whether the user is holding controllers when this happens.
When using Link on PC, pose data for controllers is unavailable when you're not actively using them (such as when they're lying on a table).

Compatibility

Hardware compatibility: Quest 2 with Quest Pro controllers; Quest Pro, Quest 3, and all future devices.
Software compatibility: Unity 2021 LTS and above; Integration SDK version 62 and above.
Feature compatibility:
Multimodal is incompatible with Inside-Out Body Tracking (IOBT) and Full Body Synthesis (FBS). You shouldn't enable them together.
Multimodal cannot be enabled together with Fast Motion Mode (FMM). If both are enabled together, Multimodal takes precedence. As FMM is defined in the manifest, you may enable FMM at the app level and then turn on Multimodal only in specific experiences where FMM is less important.
Multimodal cannot be used with Tracked Keyboards. If a user has a tracked keyboard active, the tracked keyboard takes precedence.
Passthrough, Multimodal, and Wide Motion Mode (WMM) cannot be enabled together. If they are all turned on together, the system disables WMM.
Full compatibility with Capsense hands.
Full compatibility with haptics.

Setup

Install the Meta XR Core SDK package v59 or higher.
Follow the instructions in the Hello VR tutorial to get set up with a camera rig and controllers.
Find the OVR Manager script attached to the OVRCameraRig. From here, you will see an option to enable “Concurrent hands and controllers support”.

Enable concurrent hands and controllers

Under Window > Package Manager, find Meta XR Core SDK, then under the Samples tab, click Import. This imports the multimodal sample scene.

Import the sample scene

Add OVRControllerPrefab objects under the new LeftHandAnchorDetached and RightHandAnchorDetached GameObjects. See the Assets/Samples/Meta XR Core SDK/62.0.0/Sample Scenes/SimultaneousHandsAndControllers example scene for a demonstration.

Add OVRControllerPrefab objects

For the new OVRControllerPrefab GameObjects, set their ShowState to be Controller Not In Hand and their Controller to Touch.

Set ShowState

Under Project, search for OVRControllerPrefab.

Drag OVRControllerPrefab from the search results onto LeftControllerInHandAnchor.

Under Hierarchy, select OVRControllerPrefab.

Under Inspector, make sure Controller matches the hand, and set Show State to Controller In Hand.

Anchor setup for controllers

Repeat steps 6 through 9 for RightControllerInHandAnchor.

Under Project, search for OVRHandPrefab.

Drag OVRHandPrefab from the search results onto LeftHandOnControllerAnchor.

Under Inspector, make sure Hand Type matches the hand, and set Show State to Controller In Hand.

Anchor setup for hands

Repeat steps 11 through 13 for RightHandOnControllerAnchor.

Prefab changes

OVRControllerHelper and OVRHand both now contain a ShowState enum. The available options are:

Always: The object will not be automatically disabled based on controller and hand state.
Controller in Hand or No Hand: The object will be disabled if the controller is not in the user's hand, or if hand tracking is disabled entirely.
Controller in Hand: The object will be disabled if the controller is not currently in a user's hand.
Controller Not in Hand: The object will be disabled if the controller is in a user's hand. This is used for the detached controller situation, for example, a controller sitting on a desk.
No Hand: This disables the rendering of the object if hand tracking is enabled and there is a hand.

OVRControllerPrefab: OVRControllerHelper

Show State
Show When Hands Are Powered By Natural Controller Poses: A checkbox that controls whether the controllers can be rendered even if the hand poses are in the natural state. This is used for the detached controller state.

OVRHandPrefab: OVRHand

Show State

New anchors for the Multimodal feature

LeftHandAnchorDetached and RightHandAnchorDetached: These anchors are for controllers that are not in the user's hand when the Simultaneous Hands and Controllers API is enabled. The default CameraRig prefab does not contain controller prefabs under these anchors, so they need to be added to be used. The show state of these prefabs should be set to “Controller Not In Hand”.

Troubleshooting

How can I confirm Multimodal is running on my headset? Confirm that the detached controller action path is providing data.

Can I evaluate the feature on my headset without changing my code? No.

Switching between controllers and hands doesn’t work in the sample. Ensure that you’re running on Quest 2 (with Meta Quest Pro controllers paired to your headset), Quest 3, or Quest Pro.

Using OVRCameraRig > TrackingSpace to get positions of hands and controllers simultaneously: Due to the object hierarchy on the OVRCameraRig Tracking Space, it is non-trivial to provide the hand data and the controller data simultaneously with the legacy anchors. This has required us to create multiple new anchors on the Tracking Space and to add gating logic on the controller and hands prefabs themselves that determines whether they render.

Use Fast Motion Mode

Unity

All-In-One VR

Quest


Fast Motion Mode (FMM), previously known as “High Frequency Hand Tracking”, provides improved tracking of fast movements common in fitness and rhythm apps. It is highly recommended to first test out your title without FMM enabled, and then only enable it if you observe high tracking loss due to fast hand motion. For most apps, the default hand tracking mode should provide the right balance between accuracy and speed.

Benefits of FMM

Improved tracking of fast movements common in fitness and rhythm apps.

Known limitations

FMM optimizations may result in increased jitter, which can impact high-accuracy interactions such as direct touch or typing.
As fast motion relies on short exposure times, tracking quality is expected to regress in low-lighting conditions. It is recommended to remind users to play in a well-lit environment.

Compatibility

Hardware compatibility: FMM is supported on Quest 2, Quest Pro, Quest 3, and all future devices. On Quest 1, enabling FMM will increase hand tracking frequency but will not result in any further optimizations. On Quest 2, enabling FMM will result in app downclocking (GPU level reduced from 3 to 2).
Software compatibility: Unity 2021 LTS and above; Integration SDK version 55 and above.
Feature compatibility: FMM cannot be enabled together with Multimodal. IOBT/WMM will not run when Passthrough and Full Body Synthesis (FBS) are running along with FMM. On Quest Pro, FMM cannot be enabled together with face tracking, eye tracking, lip sync, or foveated rendering.

Setup

To enable FMM in Unity, follow these steps.

Open your Unity scene.
Under Hierarchy, select OVRCameraRig.
Under Inspector, under Quest Features, in the Hand Tracking Frequency list, select High. There is no difference between the High and Max values, as both track hands at high frequency.

Troubleshooting

How can I confirm FMM is running on my headset? To verify your device has FMM enabled, run adb logcat | grep FMM on a terminal when hand tracking is running on device.

The expected output is NIMBLE: HTEngine: FMM: enabled: true; active:....

FMM isn’t running on my headset. What should I do? If enabled is false, restart your headset a couple of times for the client to fetch data from the server.

To verify your FMM is actively running in your app:

Ensure that your app is running on the foreground. Run adb logcat | grep FMM.

The expected output is NIMBLE: HTEngine: FMM: enabled: true; active: true.

Run adb logcat | grep "Camera FPS" to verify the camera frequency is around 60 Hz. "Around 60 Hz" means plus or minus about 3 Hz. However, the acceptable frequency also varies based on the mains voltage and frequency in your country. While North America and some of South America are at 120 V / 60 Hz, most other countries are at 240 V / 50 Hz, so they will see 50 Hz in FMM.

The expected output is NIMBLE:HTEngine: Hand Tracking Processed FPS = 58.5 Hz / Camera FPS = 60.1.

Can I evaluate the feature on my headset without changing my code? Yes, you can manually toggle FMM via your headset. Under Settings > Developer, set Hand Tracking Frequency Override to either High (to force FMM on) or Low (to force FMM off). None will use the value from your app manifest.

ChatGPT

To map controllers in Unity, you will primarily work with the OVRInput module from the Meta XR Core SDK. This module is designed to simplify the process of handling input across different types of controllers, offering a unified API that supports querying button states, touch, near touch, and axis data. Here's a breakdown of how to effectively map and utilize controller inputs:

1. Set Up OVRManager: Ensure that an instance of OVRManager is present in your scene. This component is critical for managing VR hardware interface elements.

2. Regular Updates: Invoke OVRInput.Update() and OVRInput.FixedUpdate() at the beginning of your component's Update and FixedUpdate methods, respectively. This is necessary to process the input data correctly each frame.

3. Accessing Input Data: Use the OVRInput methods to check the state of the controller inputs:

OVRInput.Get: Check the current state of a button, touch, or axis.
OVRInput.GetDown: Check if a button or touch was pressed during the current frame.
OVRInput.GetUp: Check if a button or touch was released during the current frame.

4. Use Enumerations for Button Mapping: OVRInput provides multiple enumerations to access different controls:

Buttons and Touches: Use OVRInput.Button and OVRInput.Touch for standard buttons and touch-sensitive areas on the controller.
Axes: Use OVRInput.Axis1D for triggers and OVRInput.Axis2D for thumbsticks.

5. Handle Touch and Near Touch: Capacitive sensors on controllers allow detection of touch and near touch, which can be used to create more nuanced interactions by detecting when fingers are close to or touching the controls without pressing them.

6. Controller Specific Mappings: Define controller-specific actions by specifying which controller to check (left or right). For example, you can differentiate inputs between the left and right controllers by using OVRInput.Controller.LTouch or OVRInput.Controller.RTouch.

7. Example Usage:

// Check if the A button on the right controller is pressed
bool isAPressed = OVRInput.Get(OVRInput.Button.One, OVRInput.Controller.RTouch);

// Check if the left trigger was just activated
bool leftTriggerDown = OVRInput.GetDown(OVRInput.Button.PrimaryIndexTrigger, OVRInput.Controller.LTouch);

// Get the position of the left thumbstick
Vector2 leftThumbstick = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick, OVRInput.Controller.LTouch);

8. Advanced Use: For advanced applications like custom hand poses or gestures, delve deeper into the SDK's capabilities to integrate such features seamlessly into your interaction system.

9. Testing and Debugging: Regularly test your input mappings on the actual hardware to ensure all interactions are intuitive and responsive. Use Unity's input debugger or log output to troubleshoot and refine your control schemes.

10. Documentation and Resources: Refer to the official Unity documentation and Meta's developer guides for detailed information on controller mappings, supported gestures, and troubleshooting common issues.

By following these steps, you can effectively map and utilize the controller inputs for your VR applications using the Meta XR SDKs in Unity. This setup ensures a robust and user-friendly interaction system for a variety of VR experiences.

User

Remember to reply only "ok" until I say we have finished.

ChatGPT

Ok.

User

Passthrough API Overview

Unity

All-In-One VR

Quest


Health and Safety Recommendation: While building mixed reality experiences, we highly recommend evaluating your content to offer your users a comfortable and safe experience. Please refer to the Health and Safety and Design guidelines before designing and developing your app using Passthrough.

Passthrough provides a real-time and perceptually comfortable 3D visualization of the physical world in the Meta Quest headsets. The Passthrough API allows developers to integrate the passthrough visualization with their virtual experiences.

Augmented objects sample

How does this work?

Passthrough is rendered by a dedicated service into a separate layer. Apps cannot access images or videos of the user's physical environment; instead, they create a special passthrough layer, and the XR Compositor replaces this layer with the actual passthrough rendition.

By default, the Passthrough API creates this passthrough layer automatically (using automatic environment reconstruction), and submits it directly to the XR Compositor. Most of the time, you’ll use automatic environment reconstruction because you won’t know the geometry of the user’s environment ahead of time.

You can override automatic environment reconstruction by using Surface-Projected Passthrough. This lets you define the geometry onto which passthrough images are projected. This approach leads to a more stable rendition of Passthrough in cases where the app has already captured parts of the user’s environment.
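For illustration, a minimal sketch (not from the article) that switches an OVRPassthroughLayer to a user-defined projection surface from script; the projectionSurfaceType and AddSurfaceGeometry names come from the Meta XR Core SDK's OVRPassthroughLayer component and should be treated as assumptions for your SDK version (this is usually configured in the Inspector instead).

```
using UnityEngine;

// Minimal sketch: projects passthrough onto user-defined geometry instead of the
// automatic environment reconstruction.
public class SurfaceProjectedPassthrough : MonoBehaviour
{
    public OVRPassthroughLayer passthroughLayer; // an OVRPassthroughLayer on the OVRCameraRig
    public GameObject projectionSurface;         // e.g., a quad matching a real wall or desk

    void Start()
    {
        // Switch the layer from automatic reconstruction to a user-defined surface...
        passthroughLayer.projectionSurfaceType = OVRPassthroughLayer.ProjectionSurfaceType.UserDefined;

        // ...and register the mesh the passthrough images should be projected onto.
        passthroughLayer.AddSurfaceGeometry(projectionSurface, true);
    }
}
```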

Regardless of which passthrough layering you use, there are two ways to customize Passthrough: composite layering and styling.

Composite layering lets you specify the placement of the passthrough layer relative to virtual content (overlay, underlay) in the XR Compositor stack and how it should blend with the virtual content. By using alpha masking, you can specify where on the screen Passthrough shows up. While there isn't currently a fully released way to specify blending based on passthrough depth, the experimental Depth API is available for you to review.
Styling lets you colorize the passthrough feed, highlight edges, and perform image processing effects such as contrast adjustment and posterization of the image by stair-stepping the grayscale values.
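For illustration, a minimal sketch (not from the article) of applying a style from script; the OVRPassthroughLayer member names (edgeRenderingEnabled, edgeColor, SetColorMapControls) and the value ranges are assumptions based on the Meta XR Core SDK and may differ across versions.

```
using UnityEngine;

// Minimal sketch: applies a simple passthrough style at startup.
// Assumes "passthroughLayer" references the OVRPassthroughLayer on your OVRCameraRig.
public class PassthroughStyler : MonoBehaviour
{
    public OVRPassthroughLayer passthroughLayer;

    void Start()
    {
        // Highlight edges in the passthrough feed.
        passthroughLayer.edgeRenderingEnabled = true;
        passthroughLayer.edgeColor = new Color(0f, 1f, 1f, 1f);

        // Adjust contrast/brightness and posterize the grayscale values
        // (arguments: contrast, brightness, posterize; ranges are assumptions).
        passthroughLayer.SetColorMapControls(0.2f, 0.0f, 0.3f);
    }
}
```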

All-In-One VR

Quest


This topic describes how to get started with passthrough.

Prerequisites

Before you begin with passthrough:

Set up your Unity development environment and configure your Quest headset.
Verify your headset is running the latest version. In the headset, go to Settings > System > Software Update and update it if necessary.
Complete the First Unity VR App Tutorial to make sure your basic Unity configuration is correct.
Perform the basic configuration settings to set up the Android platform and the XR Plugin Management package. This makes sure that your Unity setup is ready for Quest development.

Configure Unity Project

For a passthrough Unity project, you need to replace your Main Camera with a passthrough-enabled OVRCameraRig. With the camera in place, you just need to add one or more passthrough layers to use passthrough.

Create a new scene or open an existing one from your project.
If you have not already deleted the Main Camera, select it in the Hierarchy tab and delete it.
If you have not already added the OVRCameraRig, on the Project tab, search for OVRCameraRig, and then drag it into the scene.
On the Hierarchy tab, select OVRCameraRig.
On the Inspector tab, under OVRManager, do the following:

Under Quest Features > General tab, in the Passthrough Support list, select either Required or Supported to enable the build-time components for using passthrough.
Under Insight Passthrough, select Enable Passthrough to initialize passthrough during app startup. (To initialize passthrough at a later time through a C# script, you can leave the checkbox unchecked and set OVRManager.isInsightPassthroughEnabled in your script.)
Click Add Component, and then in the list, select the OVRPassthroughLayer script.
Select the Projection Surface setting to configure whether the passthrough rendering uses an automatic environment depth reconstruction or a user-defined surface.
What you do next depends on the type of passthrough you want. To draw passthrough on top of your virtual content, set Placement to Overlay. To draw passthrough underneath your virtual content, set Placement to Underlay.
When you have multiple layers, use the Composition Depth setting to define the ordering between the layers. Layers with smaller Composition Depth values have a higher priority and appear in front of layers with larger Composition Depth values. (A script-based sketch of these settings follows this list.)
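For reference, the same settings can also be adjusted from a script. This is a minimal sketch, assuming the overlayType, compositionDepth, and textureOpacity fields exposed by OVRPassthroughLayer in recent Meta XR Core SDK versions; check the component in your SDK for the exact names:

using UnityEngine;

public class PassthroughLayerSetup : MonoBehaviour
{
    public OVRPassthroughLayer passthroughLayer;

    void Start()
    {
        // Draw passthrough underneath virtual content (equivalent to Placement = Underlay).
        passthroughLayer.overlayType = OVROverlay.OverlayType.Underlay;

        // Smaller values appear in front when several layers are present.
        passthroughLayer.compositionDepth = 0;

        // Fully opaque passthrough feed.
        passthroughLayer.textureOpacity = 1.0f;
    }
}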

To experiment more with passthrough, first do either the Basic Passthrough Tutorial or the Basic Passthrough Tutorial with Building Blocks. Then you can check out the Passthrough Samples in this documentation. For more advanced showcase projects, see our GitHub repo. The Unity-Discover and Unity-TheWorldBeyond projects both make use of passthrough.

Customize Passthrough

Customize passthrough through styling, color mapping, composite layering, or by implementing surface-projected passthrough.

Enable Based on System Recommendation

In your app, you may want to show either a VR or passthrough background based on user choice. Our system already provides users with this choice in the home environment, and your app can leverage the user’s home environment preference. More generally speaking, our system provides a recommendation for apps to default to MR or VR based on user preferences. This recommendation is available using OVRManager.IsPassthroughRecommended(). Example usage:

// Add this to any MonoBehaviour
void Start()
{
    if (OVRManager.IsPassthroughRecommended())
    {
        passthroughLayer.enabled = true;

        // Set camera background to transparent
        OVRCameraRig ovrCameraRig = GameObject.Find("OVRCameraRig").GetComponent<OVRCameraRig>();
        var centerCamera = ovrCameraRig.centerEyeAnchor.GetComponent<Camera>();
        centerCamera.clearFlags = CameraClearFlags.SolidColor;
        centerCamera.backgroundColor = Color.clear;

        // Ensure your VR background elements are disabled
    }
    else
    {
        passthroughLayer.enabled = false;

        // Ensure your VR background elements are enabled
    }
}

Wait until Passthrough is ready

Enabling passthrough is asynchronous. System resources (like cameras) can take a few hundred milliseconds to activate, during which time passthrough is not yet rendered by the system. This can lead to black flickers if your app expects passthrough to be visible immediately upon enabling and the passthrough system wasn’t previously active. You can avoid that by using the PassthroughLayerResumed event, which is emitted once the layer is fully initialized and passthrough is visible.

You don’t need to worry about this at app startup, though. If the transition leading into your app was already showing passthrough (see Loading Screens), you can rely on passthrough being rendered immediately upon enabling. Just make sure that your OVRPassthroughLayer is enabled right from the start. In other words, you only need to consider this event if you enable passthrough while the app is already running.

Example usage:

void EnableSelectivePassthrough(OVRPassthroughLayer layer, Renderer punchHoleRenderer)
{
    layer.enabled = true;
    layer.hidden = false;

    // punchHoleRenderer punches holes in VR by writing non-one values into alpha.
    // We keep that renderer disabled until passthrough is ready.
    punchHoleRenderer.enabled = false;
    layer.PassthroughLayerResumed += onPassthroughLayerResumed;

    void onPassthroughLayerResumed()
    {
        punchHoleRenderer.enabled = true;
        layer.PassthroughLayerResumed -= onPassthroughLayerResumed;
    }
}

Rapid Passthrough App Development

You can speed up app development significantly by avoiding the need to rebuild and deploy for every iteration. Consider the following options:

Passthrough over Link allows you to run your app directly in the Unity editor with a headset connected over Meta Quest Link.
Meta XR Simulator allows you to run your app in the Unity editor without the need for a headset.

Use Passthrough Over Link

Unity

All-In-One VR

Quest


This topic describes how to get started with Passthrough over Meta Quest Link.

Passthrough over Meta Quest Link is a feature introduced in v37 that significantly decreases the iteration time when developing passthrough-enabled apps. Passthrough over Meta Quest Link allows a passthrough-enabled app to run while using Meta Quest Link, eliminating the need to build the app on PC and deploy it to a Meta Quest device every time you test it during development. With Passthrough over Meta Quest Link, it is faster than ever to create mixed reality experiences.

Passthrough over Meta Quest Link allows you to:

Use a developer platform such as Unity or Unreal Engine to run a passthrough-enabled app by launching the app directly in the editor.
Quickly iterate during development when using the Passthrough API. (We support passthrough, keyboard, and hands functionalities; however, we do not currently support OpenXR’s XR_FB_passthrough_keyboard_hands extension.)

For details about the Passthrough API, read the Passthrough API overview.

Note: Passthrough over Meta Quest Link is a developer-only feature aimed at increasing your development speed by previewing your passthrough-enabled apps while still in development. That being said, the visual appearance and performance characteristics of the experience are slightly different between running over Meta Quest Link and running on a Meta Quest device.

Prerequisites

This section outlines the prerequisites for using Passthrough over Meta Quest Link.

For details on using Link for app development, see Use Meta Quest Link for App Development.

Hardware Using Passthrough over Meta Quest Link requires the following hardware:

A PC that meets the specifications for running Meta Quest Link
A Quest line device (for example, a Quest 2)
A compatible USB-C cable to use for Meta Quest Link. For Color Passthrough, the USB connection should provide an effective bandwidth of at least 2 Gbps. You can measure the connection speed by using the USB speed tester built into the PC app: go to Devices, select the connected device, click USB Test, and then click Test Connection. If the connection speed is low, try using a different cable and USB adapter.

Software The essential software for using Passthrough over Meta Quest Link is:

Meta Quest build v37.0 or later
Oculus PC app version v37.0 or later, which you can download from the Meta Quest website

Important: You must enable the Developer Runtime Features and Passthrough over Meta Quest Link toggles in the Oculus PC app by clicking Settings > Beta. The Passthrough over Meta Quest Link toggle appears only when you enable the previous toggle.

You must restart Unity after enabling the toggles.

Passthrough over Link Oculus App Settings

Note: You must sign in to your developer account in both the Oculus PC app and your Meta Quest headset.

Additionally, you must use Unity version 2020.3 LTS or higher. Download the Meta XR Core SDK as a standalone package or as part of the Meta XR All-in-One SDK, version 37 or higher.

Use

Before running your app, you must enable Meta Quest Link.

To connect your Meta Quest device to your PC over Meta Quest Link, follow the instructions in How do I use Meta Quest Link to connect my Quest 2 or Quest to a computer? For troubleshooting instructions on the Meta Quest Link connection, read I’m experiencing performance issues with Meta Quest Link after adjusting my graphics preferences.

Follow these steps to test your app through Passthrough over Meta Quest Link.

Follow the Unity Passthrough API Overview to set up passthrough in your Unity project.
Rather than building and deploying the app to your headset, launch it by hitting Play in the editor while your device is connected via Meta Quest Link.
Use your device to see passthrough in your passthrough-enabled app.

Data Privacy

The Passthrough over Meta Quest Link feature is available only to developers, and can be toggled on/off through the host PC using the Oculus PC app. When toggled on, data captured from the head-mounted display’s cameras, including camera images of the physical environment, will be transmitted to and processed on the host PC. Passthrough over Meta Quest Link is off by default.

The first time you toggle on Passthrough over Meta Quest Link, the following consent modal will appear.

Passthrough over Link Unreal Postprocessing

You only need to give your consent once.

At no point during the execution of Passthrough over Meta Quest Link do the camera images that are sent to the host PC leave the host PC, except to be sent back to the headset to be displayed.

Meta Quest Link and Screenshots

Be aware that due to privacy constraints, screenshots you take of your Passthrough application while Meta Quest Link is active may not include background passthrough content and may instead show a dark background. If you have issues with showing Passthrough content in your screenshots, disconnect the Link cable.

Passthrough Overlay Sample

Unity

All-In-One VR

Quest


This sample demonstrates how you can modify the foreground characteristics of the passthrough layer with respect to any virtual objects that are present.

Prerequisites

Before you can build the Passthrough Overlay sample, you must first complete the Passthrough Getting Started tasks. This includes the steps in the Prerequisites and Configure Unity Project sections. You should also do the basic Passthrough Tutorial to make sure you can work with Passthrough in Unity.

Step by Step

Download the Starter Samples project from GitHub.
If you prefer, set up the Meta Quest Developer Hub so you can control your on-headset application.
In the Unity Project explorer, search for the OverlayPassthrough scene. Drag it to your Hierarchy window. Remove any other scene from the Hierarchy window.
Save your project and your scene.
From the File menu, choose Build Settings... to open the Build Settings window. Make sure your headset is listed in the Run Device dropdown.
Click Add Open Scenes to add your scene to the build. Deselect and remove any other scenes from the selection window.
Click the Build and Run button to launch the program onto your headset.

Using the Sample

When you first enter the scene, you see several 3D objects, which display behind the passthrough scene.

Adjust the opacity of the Passthrough layer using the right controller thumbstick. Move the thumbstick left to make it more transparent and right to make it more opaque. Leaving it in the middle leaves it at 50% opacity, revealing the Unity scene beneath it.

Overlay sample

Details

There are two ways to adjust the opacity of the Passthrough layer.

You can set the OVRPassthroughLayer to Overlay. This technique is demonstrated in this sample. To view the property, select the OVRCameraRig object in the Hierarchy window. Then, in the Inspector, expand the OVR PassthroughLayer (Script).

Alternatively, you can set the opacity to 0 and turn on edge rendering. This gives the Sobel edge-filtered version of Passthrough, rendered above the Unity scene. This technique is not covered in this sample. Find both Opacity and Edge Rendering in OVR PassthroughLayer (Script) as well. (A script-based sketch of both approaches follows.)
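The following is a minimal sketch of adjusting these properties from a script, assuming the textureOpacity, edgeRenderingEnabled, and edgeColor fields available on OVRPassthroughLayer in recent Meta XR Core SDK versions (names may differ in your SDK version):

using UnityEngine;

public class PassthroughOpacityControl : MonoBehaviour
{
    public OVRPassthroughLayer passthroughLayer;

    void Update()
    {
        // Map the right thumbstick's horizontal axis (-1..1) to opacity (0..1).
        float x = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick, OVRInput.Controller.RTouch).x;
        passthroughLayer.textureOpacity = Mathf.Clamp01(0.5f + 0.5f * x);

        // Press A to toggle the edge-rendered (Sobel-style) look instead.
        if (OVRInput.GetDown(OVRInput.Button.One, OVRInput.Controller.RTouch))
        {
            passthroughLayer.edgeRenderingEnabled = !passthroughLayer.edgeRenderingEnabled;
            passthroughLayer.edgeColor = Color.white;
        }
    }
}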

Key Assets

The assets used in this scene are:

InfoPanel.prefab Provides the scene runtime information Location: .\Assets\StarterSamples\Usage\Passthrough\Prefabs

OverlayPassthrough.cs Provides the overlay opacity control code Location: .\Assets\StarterSamples\Usage\Passthrough\Scripts

Depth API for Unity

Health and Safety Recommendation: While building mixed reality experiences, we highly recommend evaluating your content to offer your users a comfortable and safe experience. Please refer to the Health and Safety and Design guidelines before designing and developing your app using Depth.

Overview

The Depth API exposes real-time sensed environment depth to apps in the form of depth maps. It may be used for a variety of depth-based visual effects, but mainly it’s used in mixed reality (MR) to render virtual objects so that they become occluded by objects and surfaces in the real world and appear to be embedded in the actual environment. Without occlusions, it will be apparent that the virtual content is just a layer drawn on top of the real world, which breaks the MR immersion.

Scene with and without occlusions

Supported use cases

A comparison of supported use cases in the Depth and Scene APIs:

Static Occlusion: the occluding real-world environment remains immobile (static) throughout the lifetime of the app, i.e. no moving objects such as hands, people, or chairs. Depth API ✔, Scene API ✔
Dynamic Occlusion: the occluding real-world environment contains mobile (dynamic) elements, e.g. the user's hands, other people, pets. Depth API ✔, Scene API ✖
Raycasting: computing the intersection of a ray and real-world surfaces. Supports use cases like content placement. Depth API ✔, Scene API ✔
Physics/Collisions: interactions between virtual content and the real world, like a virtual ball bouncing off of a physical couch. Depth API ✖, Scene API ✔

Limitations

Occlusion flickers near surfaces. Reason: this is caused by an issue often referred to as “Z-fighting”. In 3D graphics, this usually happens when two virtual objects are rendered at the same depth. Environment depth values are produced within an error margin, so z-fighting is apparent even when the depth is not precisely the same from frame to frame. Suggested workaround: the sample soft occlusion implementation has mechanics in place to partially mitigate this issue. However, it is recommended to offset objects that you place on Scene Model surfaces along the surface normal.

Occlusions don’t exactly match the real world and lag behind during fast movements. Reason: limitations in real-time depth sensing don’t allow for pixel-perfect occlusions at the same framerate as apps are rendered. Suggested workaround: soft occlusion shaders help mitigate the visibility of this, but apps need to be designed with this limitation in mind.

FAQs

Should I use both Scene and Depth API together? To build realistic MR apps, both capabilities should be used together to cover a broader set of use cases than either capability can address in isolation. The recommended flow is:

Prompt users to initiate the space setup flow to build a 3D scene model of their environment.
Use depth maps from the Depth API to render occlusions based on the per-frame sensed depth, which will take advantage of the scene model to improve the depth estimates.
If needed, use the Scene API to implement other features such as game physics and object placement, which require a watertight static 3D model of the environment.

What if scene capture is not initiated? If space setup is not initiated, the Depth API cannot take advantage of the scene model when computing the depth maps. In this case, the depth maps returned from the Depth API will be computed only from the sensed depth. This makes the depth estimates worse and less stable, especially on planar surfaces such as walls, floors, and tables. When using the Depth API for occlusion, this increases flickering when virtual objects are close to these real-world surfaces.

Are there scenarios where I should only be using the Depth API? It is technically possible to use the Depth API independently of Scene. Note that when you do so, it will only expose depth maps based on the sensed depth alone. Depending on the use case for your app, you may choose to only use the Depth API.

Best practices

Today, the Scene Model allows you to build room-scale, mixed reality experiences with realistic occlusion. However, this method does not support occlusion with dynamically moving objects in the user’s view (for example, hands, limbs, other people, pets, etc.). In order to realistically occlude with dynamic elements of the Scene, you must also use the Depth API.

Spatial data permission

An app that wants to use Depth API needs to request spatial data permission during the app’s runtime. See the Unity section of Spatial Data Permission and Requesting runtime permissions for more information.

Sample

https://github.com/oculus-samples/Unity-DepthAPI

Requirements

Unity 2022.3.1f or Unity 2023.2
v60 Meta XR All-in-One SDK or later
com.unity.xr.oculus package supporting the Depth API. Install it in the Unity Package Manager via Add package by name, with name com.unity.xr.oculus and version 4.2.0
Scene Support must be set to Required (in order to request the spatial data permission)
Graphics API must be set to Vulkan
Rendering mode must be set to Multiview

Scene support

The feature requires the application to be granted com.oculus.permission.USE_SCENE for the Spatial Data permission. To add it to the AndroidManifest, change Scene Support to Required in OVRManager.

You must set Scene Support to Required in OVRManager for Depth API to work.

Additionally, the application must prompt users to accept the permission. The code for this is provided in the Depth API implementation.

Occlusion

The main use case for the Depth API is allowing real-world objects to occlude virtual objects visible in passthrough. This sample project showcases occlusion and provides help for implementing the feature in a Unity project.

It provides two example projects that showcase occlusion implementation in two contexts: Universal Rendering Pipeline and Built-in Rendering Pipeline.

There are two types of occlusion available: hard occlusions and soft occlusions. Hard occlusions are easier to integrate and perform better, but are less visually appealing. For details on implementation, troubleshooting, and integration, please refer to the GitHub repository.

Occlusions are implemented by writing to the alpha channel of the application buffer. Later it will be used to compose the application buffer with passthrough.

Depth API

The Depth API itself is a much lower-level feature. It can be found in the XR.Oculus.Utils namespace. It is referenced as Environment Depth.

Support

The Depth API is only supported on Quest 3 devices. The feature has a convenient function to check if it is supported:

Utils.GetEnvironmentDepthSupported()

Setup

In order to use Environment Depth, SetupEnvironmentDepth needs to be called to initialize runtime resources. When environment depth is no longer needed and its resources need to be freed, calling ShutdownEnvironmentDepth will clean everything up.

Utils.SetupEnvironmentDepth(EnvironmentDepthCreateParams createParams)

Utils.ShutdownEnvironmentDepth()

If the application didn’t request the USE_SCENE permission, SetupEnvironmentDepth will request it automatically. The service will start producing depth textures once the permission is granted.

Runtime controls

After SetupEnvironmentDepth is called, the feature can be enabled/disabled at runtime via:

public static void SetEnvironmentDepthRendering(bool isEnabled)

Note: Even if the application doesn’t consume Environment Depth textures, a performance overhead still exists while the feature is enabled. Make sure to call SetEnvironmentDepthRendering with isEnabled: false to improve performance.

Rendering

In order to consume depth textures on each frame, you need to call this function:

Utils.GetEnvironmentDepthTextureId(ref uint id)

A successful query will return true. The texture ID is written to the ID ref parameter. This texture ID can be used to query a RenderTexture using XRDisplaySubsystem.GetRenderTexture. RenderTexture can then be used in rendering or in compute shaders.
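Putting the calls above together, a minimal sketch might look like the following. It assumes the Utils class ships in the Unity.XR.Oculus namespace of the com.unity.xr.oculus package and that EnvironmentDepthCreateParams can be default-constructed; verify the exact namespace and struct fields against your package version.

using Unity.XR.Oculus;
using UnityEngine;

public class EnvironmentDepthExample : MonoBehaviour
{
    private bool depthActive;

    void Start()
    {
        // Only Quest 3 class devices support the Depth API.
        if (!Utils.GetEnvironmentDepthSupported())
            return;

        // Initialize runtime resources; this also requests USE_SCENE if needed.
        Utils.SetupEnvironmentDepth(new EnvironmentDepthCreateParams());
        Utils.SetEnvironmentDepthRendering(true);
        depthActive = true;
    }

    void Update()
    {
        if (!depthActive)
            return;

        // Query the current depth texture ID; a successful query returns true.
        uint textureId = 0;
        if (Utils.GetEnvironmentDepthTextureId(ref textureId))
        {
            // Resolve the ID to a RenderTexture via XRDisplaySubsystem.GetRenderTexture
            // and feed it to your occlusion shader or compute pass.
        }
    }

    void OnDestroy()
    {
        if (depthActive)
        {
            // Free the runtime resources when environment depth is no longer needed.
            Utils.SetEnvironmentDepthRendering(false);
            Utils.ShutdownEnvironmentDepth();
        }
    }
}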

Hand Removal

If you want to render virtual hands in place of the physical hands, this can result in depth fighting. To avoid this, you can enable the hand removal feature, which removes hands from the environment depth texture and replaces them with an approximate background depth. To check if the device supports it:

Utils.GetEnvironmentDepthHandRemovalSupported()

To toggle the feature, call: Utils.SetEnvironmentDepthHandRemoval(bool enabled)

Advanced use

In addition to depth textures, applications can access per-eye metadata. It is returned in the form of the EnvironmentDepthFrameDesc struct. This contains useful information for more precise and advanced use of depth textures. To get the struct, call:

Utils.GetEnvironmentalDepthFrameDesc(int eye)

It returns an EnvironmentDepthFrameDesc struct, which contains:

The isValid field indicates whether the depth frame is valid. If it is false, the other fields in the struct may contain invalid data.
The createTime and predictedDisplayTime fields represent the time at which the depth frame was created and the predicted display time for the frame, respectively.
The swapchainIndex field represents the index in the swap chain that contains the current depth frame. The current implementation doesn’t give you the ability to query a specific swapchain index.
The createPoseLocation and createPoseRotation fields represent the location and rotation of the pose at the time the depth frame was created.
The fovLeftAngle, fovRightAngle, fovTopAngle, and fovDownAngle fields represent the field of view angles of the depth frame.
The nearZ and farZ fields represent the near and far clipping planes of the depth frame.
The minDepth and maxDepth fields represent the minimum and maximum depth values of the depth frame.

For example, the GitHub sample’s implementation of occlusions uses this information to reproject depth frame pixels into the application’s eye buffer frame. This makes them work irrespective of the application’s camera configuration. It also provides motion compensation for the differences in display times between the application and passthrough.

Unity Scene Overview

Unity

All-In-One VR


Health and Safety Recommendation: While building mixed reality experiences, we highly recommend evaluating your content to offer your users a comfortable and safe experience. Please refer to the Health and Safety and Design guidelines before designing and developing your app using Scene.

What is Scene?

Scene empowers you to quickly build complex and scene-aware experiences with rich interactions in the user’s physical environment. Combined with Passthrough and Spatial Anchors, Scene capabilities enable you to build Mixed Reality experiences and create new possibilities for social connections, entertainment, productivity, and more.

Mixed Reality Utility Kit provides a rich set of utilities and tools on top of the Mixed Reality APIs, and is the preferred way of interacting with Scene.

How Does Scene Work?

Scene Model

Scene Model is the single, comprehensive, up-to-date representation of the physical world that is easy to index and query. It provides a geometric and semantic representation of the user’s space so you can build mixed reality experiences. You can think of it as a scene graph for the physical world.

The primary use cases are physics, static occlusion, and navigation against the physical world. For example, you can attach a virtual screen to the user’s wall or have a virtual character navigate on the floor with realistic occlusion.

Scene Model is managed and persisted by the Meta Quest operating system. All apps can access Scene Model. You can also access the Scene Model over Link. You can use the entire Scene Model, or query the model for specific elements.

As the Scene Model contains information about the user’s space, you must request the app-specific runtime permission for Spatial Data in order to access the data. See Spatial Data Permission for more information.

Scene Capture and Scene Model

Space Setup

Space Setup is the system flow that generates a Scene Model. Users can navigate to Settings > Physical Space > Space Setup to capture their scene. The system will assist the user in capturing their environment, providing a manual capture experience as a fallback. In your app, you can query the system to check whether a Scene Model of the user’s space exists. You can also invoke Space Setup if needed. Refer to Requesting Space Setup for more information.

You currently cannot perform Space Setup over Meta Quest Link. You must perform Space Setup on-device prior to loading the Scene Model over Link.

Scene Anchors

The fundamental element of a Scene Model is the Scene Anchor. Each Scene Anchor has geometric components and semantic labels attached. For example, the system organizes a user’s living room around individual anchors with semantic labels, such as the floor, the ceiling, walls, a table, and a couch. Many anchors are also associated with a geometric representation. This can be a 2D or 3D bounding box, or both. Scene Mesh is also a form of geometric representation, presented as a Scene Anchor component.

The Scene Model can be considered a collection of scene anchors, each of which has any number of components that provide further information. Components hold information such as whether the anchor is a plane, whether it’s a volume, whether it has a mesh, whether it’s locatable, whether it’s a room, whether it holds other anchors, etc. The idea is that anchors are generic objects, and an API user queries the components on the anchor to find what information they have.

For example, if you have a Scene Model with four walls, then you will have four scene anchors. Each anchor will be Locatable, have a Semantic Classification of WALL, and be a Plane which holds dimensions.

Spatial Anchors and Scene Anchors Are Different

There are a few differences between spatial and scene anchors. Scene anchors are created by the boundary, while spatial anchors are created by your application. Scene anchors contain other information specific to the scene, such as the anchor's pose. And finally, your app can create spatial anchors only, but it can query scene anchors.

Next Steps

Now that you have a broad overview of Scene, learn more by digging into any of the following areas:

To learn more about how MR Utility Kit provides high-level tooling on top of Scene, see our MR Utility Kit overview.
To get up and running with Scene in Unity, see our Get Started guide.
To see Scene in action in various use cases, check out our Samples.
To understand the details of how Scene works, see Using the Scene Model.
To see how the user’s privacy is protected through permissions, see Spatial Data Permission.

Get Started with Scene

Unity

All-In-One VR


This page outlines the steps required to get started with Scene, so that you have a basic working version of Scene running in Unity.

Prerequisites

Before you begin working with Scene, do the following:

Set up your Unity development environment and configure your Quest headset.
Verify that your headset is running the latest OS version. In the headset, go to Settings > System > Software Update and update if necessary. Scene requires OS v40 or higher.
Set up a new Unity project following the guide Create Your First VR App on Meta Quest Headset.
Optionally, set up Meta Quest Developer Hub and Meta Quest Link.

Use OVRCameraRig and configure permissions

In order for a Unity project to use Scene, we need to enable permissions which allow access to Scene data. We modify permissions using the OVRCameraRig prefab.

Create a new Unity scene, or open an existing one from your project.
Remove the Main Camera from your scene.
Add the OVRCameraRig prefab. Search for OVRCameraRig in the Project tab, and drag it into your scene.
If you use the Project Setup Tool, you will receive automatic warnings and suggested fixes for the permissions. To set permissions manually, select OVRCameraRig in the Hierarchy tab. In the Inspector tab, under the OVRManager component, do the following:
Under the Quest Features > General tab, find Scene Support and select either Required or Supported. When selecting Scene Support, Anchor Support should automatically be enabled.
Further below, find the section Permission Requests On Startup, and toggle the Scene checkbox. This will request the permission when the app starts.

Scene Meta Quest Features

In the toolbar, select Oculus > Tools > Create / Update AndroidManifest.xml. This will create/update the AndroidManifest.xml to let the OS know that your app would like to use the Scene permission. The Spatial Data Permission page contains further information and guidelines on how to manage permissions.

Use OVRSceneManager

The easiest way to interact with Scene is with the OVRSceneManager prefab. This prefab will help query the Scene data from the OS, and spawn game objects for the various Scene anchors in your space.

Add the OVRSceneManager prefab. Search for OVRSceneManager in the Project tab, and drag it into your scene.
In the Hierarchy tab, select OVRSceneManager. In the Inspector tab, you can see that the Plane Prefab and Volume Prefab fields are empty. OVRSceneManager will spawn these prefabs for each Scene object that is a plane and volume, respectively.
In order to create a Plane Prefab, do the following:
Right click in an empty space in the Hierarchy tab, and click Create Empty.
Select the created game object. In the Inspector tab, change the name to Plane Prefab. Reset the Transform, and add a component by searching for OVR Scene Anchor.
In the Hierarchy tab, right click on the Plane Prefab game object and select 3D Object > Quad to create a child game object.
Select the created game object. In the Inspector tab, reset the Transform and then set the Y Rotation to 180.

Plane Prefab Quad Rotation

In the Project tab, navigate to the Assets > Oculus > VR > Prefabs folder. Drag the Plane Prefab from the Hierarchy tab into the Assets > Oculus > VR > Prefabs folder to create a prefab there. Delete the non-prefab object in the Hierarchy tab.
In order to create a Volume Prefab, do the following:
Right click in an empty space in the Hierarchy tab, and click Create Empty.
Select the created game object. In the Inspector tab, change the name to Volume Prefab. Reset the Transform, and add a component by searching for OVR Scene Anchor.
In the Hierarchy tab, right click on the game object and select 3D Object > Cube to create a child game object.
Select the created game object. In the Inspector tab, reset the Transform and then set the Z Position to -0.5.

Volume Prefab Cube Position

Drag the Volume Prefab from the Hierarchy tab into the Assets > Oculus > VR > Prefabs folder to create a prefab there. Delete the non-prefab object in the Hierarchy tab.
In the Hierarchy tab, select OVRSceneManager.
Drag your Plane Prefab from the Project tab into the OVRSceneManager's Plane Prefab field.
Drag your Volume Prefab from the Project tab into the OVRSceneManager's Volume Prefab field.

Scene Prefabs Set Up

The OVRSceneManager is now set up. Build and run the application. When the app starts, you will be prompted to grant the Spatial Data Permission. If you have not run Space Setup, the app will prompt you to capture your space. The app will then load the Scene, instantiating the Plane Prefab and Volume Prefab for the relevant Scene objects.

Next steps

There are a few different directions you can now take.

To understand the details of how Scene works, see Using the Scene Model.
To see how the user’s privacy is protected through permissions, see Spatial Data Permission.
To learn about using Scene in various use cases, have a look at the Samples.

Get Started with Scene using Building Blocks

Unity

All-In-One VR


This page details how to Get Started with Scene quickly using Building Blocks. It also has you monitor your running application using the Meta XR Simulator, so you don’t need to switch to your headset during development or testing. Once built, you can use this completed tutorial as a starter project for any other Scene project you wish to start.

Prerequisites

Before you begin this tutorial, do the following:

Set up your Unity development environment and configure your Quest headset.
Verify that your headset is running the latest OS version. In the headset, go to Settings > System > Software Update and update if necessary. Scene requires OS v40 or higher.
Configure your VR space to contain one or more room components, such as a table. Configuring a few room objects will highlight Scene’s ability to make use of them. If you have never run Space Setup, the app will prompt you to capture your space. If you want to reconfigure your room before beginning: In your headset, go to Settings > Physical Space > Space Capture and choose Set Up. When you are done with Space Setup, enable the Show Mixed Reality Objects toggle.
Set up a new Unity project following the guide Create Your First VR App on Meta Quest Headset.
Install the Meta XR Simulator using the instructions at Meta XR Simulator.
Optionally, set up Meta Quest Developer Hub and Meta Quest Link.

Prepare the Scene

Remove the default camera from the scene, and save the scene under a new name.

In the Hierarchy, remove the Main Camera game object.
Then select the scene. On the File menu, choose Save As.... Navigate to the Assets > Scenes folder and save the scene with a new name, such as SceneGSBB.

Add the Room Model Building Block

The Room Model Building Block is set up to make use of your space setup.

On the Oculus menu, choose Tools > Building Blocks. The Building Blocks Tool displays.
Drag the Room Model Building Block onto the Hierarchy (or click the + button for Room Model).
When you add a building block, other dependent blocks are brought into the scene (in this case, the Camera Rig building block is added automatically).

Configure the Scene

When you added the Room Model building block, nearly all configurations were set correctly. There are still a couple of things for you to check.

In the Hierarchy, select the [BB] CameraRig. In the Inspector, expand the OVR Manager.
Under Quest Features, under the General tab, check that Scene Support is either Supported or Required. Check that Anchor Support is enabled.
Further down, expand Permission Requests on Startup. Enable the Scene checkbox. (The Spatial Data Permission page contains further information and guidelines on how to manage permissions.)

The rest of the required configurations have been set up by the Room Model building block. For example, if you select the [BB] Room Model game object, you see that the OVR Scene Manager already has both the Plane Prefab and Volume Prefab properties set with existing prefabs. (For information on all the OVR Scene Manager properties that are set, as well as the makeup of the plane and volume prefabs, see the longer topic Get Started with Scene.)

Use the Project Setup Tool

On the Edit menu, choose Project Settings. The Project Settings window appears.
In the Settings list, choose Oculus.
Accept the recommendations of the tool by choosing Fix All for errors and Apply All for warnings.
Close the Project Settings window.

Save your Scene and Project

On the File menu, choose Save to save the scene. Then, again on the File menu, choose Save Project.

Test the App with the Meta XR Simulator

You can use the Meta XR Simulator to test your Scene app with a simulated environment.

On the menu, go to Oculus > Meta XR Simulator > Activate to activate the simulator.
On the menu, go to Oculus > Meta XR Simulator > Synthetic Environment Server > Game Room.
After the Game Room window appears, minimize it. Then click the Unity Play button.

You should see something similar to the following.

Typical XR Simulator Scene View

Next steps

There are a few different directions you can now take.

To understand the details of how Scene works, see Using the Scene Model.
To see how the user’s privacy is protected through permissions, see Spatial Data Permission.
To learn about using Scene in various use cases, have a look at the Samples.
To continue with a step-by-step guide on using Scene, see the Bouncing Ball Tutorial.

Spatial Data Permission

Native SDK

Quest


In this page you will learn about the permissions that protect the user’s spatial data.

Overview

We have introduced a new spatial data runtime permission that allows users to control which app can access their spatial data. The permission applies to apps running on Quest 2, Quest Pro, and Quest 3.

An app that wants to use the Scene Model or Depth data needs to request the spatial data permission during the app’s runtime. The request will display a one-time permission explainer dialog, followed by a permission consent confirmation dialog. Only when the user grants the permission can the app query spatial data on the user’s device.

An illustration of the permission flow UX.

Devices

Accessing spatial data through the new permission flow is effective on Quest 2, Quest Pro, and Quest 3.

Meta Quest Link

Meta Quest Link v62 introduced a new permission toggle to control access to spatial data. Developers need to toggle it on to get access to spatial data when querying for anchors or using the Depth API on the device. To toggle it on, go to Settings > Spatial Data over Meta Quest Link > Turn On.

Instruction to toggle on the permission.

When to request the runtime permission

As per the Android permission guidelines, it is recommended to request the permission only when using the functionality, and to provide a fallback if the user decides not to grant the permission.

Querying the Scene Model without the permission granted will exclude the spatial data from the returned data. OVRSceneManager returns 0 anchors if the user hasn’t run Space Setup or hasn’t granted the permission. With Verbose Logging enabled, a permission check is performed and logged to the console. OVRSceneModelLoader will automatically fall back to requesting Space Setup a single time, but will not perform any permission request.

Using Depth API also needs to have the permission granted.

Declare the permission

In order for the operating system to know that an app is interested in using a permission, it needs to be specified in the app’s Android manifest first. In Unity, the default way to manage all Quest permissions is through the OVRCameraRig’s OVRManager component, where there is a section titled Quest Features. Select Scene Support and regenerate the AndroidManifest.xml, using the Oculus toolbar.

If you would like to enter the permission directly into the manifest, add a uses-permission element with android:name="com.oculus.permission.USE_SCENE", that is, <uses-permission android:name="com.oculus.permission.USE_SCENE" />.

An illustration of the permission flow UX.

Request the runtime permission

There are 2 options to request the runtime permission: either using a simple toggle in OVRManager, or using Unity’s Android Permission API. These should not be combined as one request may stop the other from completing as intended.

Option 1: request the permission manually

Requesting the runtime permission is done through Unity’s Android API: Permission.HasUserAuthorizedPermission() and Permission.RequestUserPermission() with "com.oculus.permission.USE_SCENE" will query whether the user has granted the permission and request it, respectively.

const string spatialPermission = "com.oculus.permission.USE_SCENE";
bool hasUserAuthorizedPermission = UnityEngine.Android.Permission.HasUserAuthorizedPermission(spatialPermission);
if (!hasUserAuthorizedPermission)
{
    UnityEngine.Android.Permission.RequestUserPermission(spatialPermission);
}

Unity exposes Android’s permission callbacks. These callbacks can be subscribed to when requesting the permission, and will fire when the relevant action happens.

void Denied(string permission) => Debug.Log($"{permission} Denied");
void Granted(string permission) => Debug.Log($"{permission} Granted");

void Start()
{
    const string spatialPermission = "com.oculus.permission.USE_SCENE";
    if (!UnityEngine.Android.Permission.HasUserAuthorizedPermission(spatialPermission))
    {
        var callbacks = new UnityEngine.Android.PermissionCallbacks();
        callbacks.PermissionDenied += Denied;
        callbacks.PermissionGranted += Granted;

        // avoid callbacks.PermissionDeniedAndDontAskAgain. PermissionDenied is
        // called instead unless you subscribe to PermissionDeniedAndDontAskAgain.

        UnityEngine.Android.Permission.RequestUserPermission(spatialPermission, callbacks);
    }
}

It is not advised to subscribe to the PermissionDeniedAndDontAskAgain callback, as it is unreliable on newer versions of Android. If the event is not subscribed to, then PermissionDenied is fired instead.

For more information, see Unity’s documentation on requesting Android runtime permissions.

Option 2: request the permission automatically via OVRManager

When using the OVRCameraRig, select the child OVRManager component and navigate to the Quest Features window. Ensure that Scene Support is enabled. In Permission Requests On Startup, check the Scene option. The app will try to request the permission on app startup if it has not already been granted.

If the permission has not been granted by the user:

OVRSceneAnchor and OVRSceneRoom objects will not be available.
If you are using OVRSceneManager.LoadSceneModel(), it will result in an OVRSceneManager.NoSceneModelToLoad event, even if there is spatial data captured on the device.
If you are using OVRSceneModelLoader, it will result in the OVRSceneModelLoader.OnNoSceneModelToLoad() callback, which will run Space Setup.

This option does not provide access to permission callbacks, and may prevent a separate permission request with callbacks from completing successfully.

Using the Scene Model

Unity

All-In-One VR


In this page you will learn more about how the Scene Model is implemented, how it is captured through Space Setup, and how Scene Anchors provide access to the real world environment data.

How do the Scene Model and Space Setup work?

The Scene Model is composed of a number of Scene Anchors, each of which hold on to some further data describing their intent. The anchor framework is closely related to the Entity-Component-System pattern, whereby an Entity is little more than a storable data type with a unique identifier, Components contain data and are stored on Entities, and Systems operate globally on all Entities that have the necessary Components. In the context of the Scene Model, a Scene Anchor is an Entity, and can have any number of Components on it (such as semantic classification, 2D bounding box, locatable).

Space Setup (formerly known as Scene Capture) is the process that captures a Scene Model. It is managed by the system so that all apps running on a device have access to the same environment data, which is in stark contrast to a paradigm where every app would need to scan the environment during its own lifecycle. Space Setup is a user-guided process: it first scans the environment to obtain a space mesh and to extract the space information (such as floor and ceiling height, walls, objects), and then the user completes the process by correcting any errors (wall positioning) and adding missing information (room objects). The process can be invoked by the system or by an app.

You currently cannot perform Space Setup over Meta Quest Link. You must perform Space Setup on-device prior to loading the Scene Model over Link.

The Scene Model contains sensitive user data and is thus fully controlled by the user for whether an app can access the data or not. Apps must declare their intention of using the permission through the app’s manifest file, and then perform a permission request at runtime to prompt the user for approval.

What components can Scene Anchors have?

Scene Anchors require components in order for them to describe the environment represented by the Scene Model. In order for a component to provide data, it has to be enabled. Apps must therefore query both whether a Scene Anchor supports a given component and whether the component has been enabled.

As Scene Anchors can only be created through the Space Setup process, we call these types of anchors system-managed, while Spatial Anchors are user-managed.

Spatial Anchors and Scene Anchors Are Different

There are a few differences between spatial and scene anchors. Scene anchors are created by the boundary, while spatial anchors are created by your application. Scene anchors contain other information specific to the scene, such as the anchor's pose. And finally, your app can create spatial anchors only, but it can query scene anchors.

A Locatable component informs the system that this anchor can be tracked. Once enabled, an app can continually query the pose information of the anchor, which can vary due to a difference in tracking accuracy over an anchor’s life.

A RoomLayout component contains references to anchors that make up the walls, the ceiling and the floor. An AnchorContainer component contains a reference to a list of child anchors.

The Bounded2D and Bounded3D components provide information about the dimensions of an anchor. They have a size property which captures the dimensions, and an offset property which describes the difference between the origin of the 2D/3D bounding box and the anchor’s origin defined by the Locatable component. The TriangleMesh component provides an indexed triangle mesh for an anchor.

The SemanticClassification component categorizes the anchor into a number of classifications. See below for the complete list of possible classifications.

Both Storable and Shareable components only apply to Spatial Anchors.

Common Scene Anchors

As the system manages Scene Anchors, the supported components are predetermined.

Space Setup and Scene Model

The Scene Anchor for the room will have: a RoomLayout component to refer to the ceiling, walls and floor; an AnchorContainer component which holds all the Scene Anchors of the room.

Scene Anchors for 2D elements (such as walls, ceilings, floors, windows and wall art) have: a Locatable component to get the position of the anchor; a Semantic Classification component for the labels; and a Boundary2D component for the bounding box dimensions.

Scene Anchors that are 3D-only (such as shelves, screens, plants, and other) are similar to 2D Scene Anchors, but have a Bounded3D component instead of a Bounded2D.

Some Scene Anchors are both 2D and 3D (such as tables and couches), where the 3D component refers to the bounding volume, and the 2D component corresponds to an area of interest. These anchors are similar to 2D anchors, but also contain a Bounded3D component.

Scene Mesh

The Scene Mesh is a triangle mesh that covers the entire space with a single static artifact. It is snapped to the surface near the room layout elements (such as walls, ceiling, and floor), and it approximately describes the boundaries of all other objects in the space. It is represented as a common Scene Anchor; however, there is only a single instance of this object per space.

The Scene Anchor for a Scene Mesh will have: a Locatable component to get the position of the anchor; a Semantic Classification component for the label GLOBAL_MESH; and a TriangleMesh component to provide the geometry.

Scene Anchor coordinate space

Scene Anchors are defined in a Cartesian right-handed coordinate system to match the default OpenXR coordinate system, while Unity uses a left-handed coordinate system. This results in both design-time considerations for spawning objects that need to match the orientation and dimensions of Scene Anchors, and a runtime conversion of each 3D position (mostly hidden to the developer).

Scene semantic classifications

Semantic classifications categorize Scene Anchors into a predetermined and system-managed list of object types. Semantic classes separate objects beyond their basic geometric description to provide an app developer with the possibility of applying classification-specific game logic.

Supported Semantic Labels

Room Structure

CEILING - A ceiling (2D)
DOOR_FRAME - A door frame; must exist within a wall face (2D)
FLOOR - A floor (2D)
INVISIBLE_WALL_FACE - A wall face added by Space Setup to enclose an open room (2D)
WALL_ART - A piece of wall art; must exist within a wall face (2D)
WALL_FACE - A wall face (2D)
WINDOW_FRAME - A window frame; must exist within a wall face (2D)

Room Contents

COUCH - A couch (2D, the seat, and 3D, the volume)
TABLE - A table (2D, the tabletop, and 3D, the volume)
BED - A bed (3D)
LAMP - A lamp (3D)
PLANT - A plant (3D)
SCREEN - A screen (3D)
STORAGE - A storage container (3D)

Mesh Objects

GLOBAL_MESH - A triangle mesh of a user’s space captured during Space Setup

Unclassified Objects

OTHER - A general volume (3D)

Note: This list of labels is evolving, as we periodically add support for more 2D and 3D objects. Because of this, you should consider the OTHER type as a fallback. It may not be a type in the future, or an object you label as OTHER today may need to be changed in the future.

Next steps

Now that you’ve learned how the Scene Model and Space Setup/Scene Capture work, it’s time to access the data in Unity.

MR Utility Kit provides a rich set of tools and utilities on top of the Scene Model.
OVRSceneManager provides access to the Scene Model via a simple-to-use Unity component that spawns user-defined prefabs for the Scene Anchors in your space.
In order to access the Scene data directly, use the low-level asynchronous OVRAnchor components.
To see how the user’s privacy is protected through permissions, see Spatial Data Permission.
Dive straight into code with our Samples.
To learn about using Scene in practice, see our Best Practices.

Use OVRSceneManager

Unity

All-In-One VR


In this page, you will learn how to access the Scene Model through the simple-to-use Unity components OVRSceneManager and OVRSceneAnchor.

What does OVRSceneManager do?

The OVRSceneManager Unity component is the easiest way to access the Scene Model. It focuses on the most common use case: spawning virtual objects that the developer has provided for Scene Anchors of a user’s environment and abstracting away direct Scene data access.

To load an existing Scene Model, call OVRSceneManager.LoadSceneModel(). This call initiates the loading process, which can take multiple frames to complete. As part of the loading process, the Scene Model will instantiate prefabs for all available Scene Anchors. Two events provide the result:

SceneModelLoadedSuccessfully: Indicates that the OVRSceneManager has successfully loaded a Scene Model.
NoSceneModelToLoad: Indicates there is no Scene Model available for the OVRSceneManager to load, for example, if the user has not yet performed Space Setup or has denied the Spatial Data Permission.

This code sample shows how to subscribe to the events and call LoadSceneModel (note that the events should be unsubscribed afterwards):

void Load()
{
    var sceneManager = FindObjectOfType<OVRSceneManager>();

    sceneManager.SceneModelLoadedSuccessfully += OnLoaded;
    sceneManager.NoSceneModelToLoad += OnNotLoaded;
    sceneManager.LoadSceneModel();
}

void OnLoaded() => Debug.Log("Success");
void OnNotLoaded() => Debug.Log("Failed");

It is also possible to query whether the Scene Model contains a certain anchor classification by calling DoesRoomSetupExist(). This will not load any anchors from the Scene Model and therefore will not spawn any Unity game objects. This method will always return false if no Scene Model has been captured, or if the user has not granted the runtime permission.

Requesting Space Setup

An app can trigger the system’s space setup process by calling OVRSceneManager.RequestSceneCapture(). Requesting Space Setup will pause the app, launch space setup, and then resume the app when the user completes (or cancels) the setup process. Two events provide the result:

SceneCaptureReturnedWithoutError: Indicates space setup completed. Note that this includes the case where the user cancels space setup.
UnexpectedErrorWithSceneCapture: Indicates something went wrong during space setup.

It is possible to call Space Setup with a list of classification types. This ensures that the user has to define these Scene Anchors, and Space Setup cannot be completed until the user has done so.

The following code shows how to call Space Setup with 2 tables and a plant.

void Request()
{
    var classifications = new[]
    {
        OVRSceneManager.Classification.Table,
        OVRSceneManager.Classification.Table,
        OVRSceneManager.Classification.Plant
    };

    var sceneManager = FindObjectOfType<OVRSceneManager>();
    sceneManager.RequestSceneCapture(classifications);
}

Automatically loading Scene Model with OVRSceneModelLoader

To facilitate loading the Scene Model, requesting Space Setup, and handling the results, we have provided an extendable utility component: OVRSceneModelLoader.

The OVRSceneModelLoader component subscribes to the events provided by OVRSceneManager and performs an appropriate action in response to each event. It offers reasonable default behaviors, but it is expected that developers create derived classes to override and customize these behaviors.

The default loader checks whether the Scene Model contains any anchors. If not, the loader triggers a Space Setup once. If a user fails to complete Space Setup, then a manual call to RequestSceneCapture is required by the application.
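As a hedged sketch of such a derived class, assuming the overridable methods (listed after this sketch) are exposed as protected virtual by OVRSceneModelLoader in your SDK version:

using UnityEngine;

// Hypothetical derived loader that adds logging on top of the default behavior.
public class LoggingSceneModelLoader : OVRSceneModelLoader
{
    protected override void OnSceneModelLoadedSuccessfully()
    {
        // The default implementation does nothing; add app-specific logic here.
        Debug.Log("Scene Model loaded.");
    }

    protected override void OnNoSceneModelToLoad()
    {
        Debug.Log("No Scene Model to load; falling back to the default Space Setup request.");
        base.OnNoSceneModelToLoad();
    }
}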

The overridable methods are:

OnStart: The system invokes this method at the start of your application (just after Unity’s Start() method). The default implementation attempts to load an existing Scene Model.
OnSceneModelLoadedSuccessfully: This method indicates the Scene Model has finished loading. The default implementation does nothing.
OnNoSceneModelToLoad: This method indicates there was no Scene Model to load, meaning a request to OVRSceneManager.LoadSceneModel concluded, but no captured scene exists or the user denied the Spatial Data Permission. The default implementation requests Space Setup from the OVRSceneManager.
OnSceneCaptureReturnedWithoutError: A request to capture the Scene succeeded without error. The default implementation loads the Scene Model from the OVRSceneManager.
OnUnexpectedErrorWithSceneCapture: A request to capture the Scene failed. The default implementation does nothing.

Using the OVRSceneManager Prefab

The OVRSceneManager and OVRSceneModelLoader components are available through a prefab, also called OVRSceneManager.

Search for the OVRSceneManager prefab in the Project tab, and drag it into your Unity scene.
Create prefabs for the plane and the volume. Drag the created prefabs onto the Plane Prefab and Volume Prefab properties of the OVRSceneManager.
If you want to instead spawn objects by classification type, use Prefab Overrides. This option can be used in conjunction with the Plane Prefab and Volume Prefab modes, but takes precedence.

OVRSceneAnchor

The OVRSceneAnchor component represents the Scene Anchors of the Scene Model. The data of a Scene Anchor is exposed through multiple components, each of which has a corresponding Unity component type (for example, the Scene Bounded2D component is represented by the OVRScenePlane Unity component).

OVRSceneAnchors are created by the OVRSceneManager and cannot be created by the user. The OVRSceneManager is responsible for periodically updating the location of every anchor, throttled by the parameter MaxSceneAnchorUpdatesPerFrame. In order to do this, OVRSceneManager keeps a reference to each spawned OVRSceneAnchor in the scene.

In order to access all of the spawned OVRSceneAnchors, wait until the Scene Model has been loaded, and then use the static helper method OVRSceneAnchor.GetSceneAnchors(). It is recommended to use this method to find specific anchors (such as rooms), instead of using Unity’s FindObjectsOfType().
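As a minimal sketch (exact overloads may differ by SDK version), the following collects the spawned scene anchors after loading and picks out the room anchors:

var sceneAnchors = new List<OVRSceneAnchor>();
OVRSceneAnchor.GetSceneAnchors(sceneAnchors);

foreach (var sceneAnchor in sceneAnchors)
{
    // OVRSceneRoom is added to room anchors by the OVRSceneManager
    if (sceneAnchor.TryGetComponent(out OVRSceneRoom room))
    {
        Debug.Log($"Found a room with {room.Walls.Length} walls");
    }
}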

Further Scene Model Unity components

OVRSceneManager spawns OVRSceneAnchors when the Scene Model has been loaded, and adds the relevant Unity components that provide further data of the Scene Anchor. For example, when spawning a wall, the OVRSceneManager will spawn the supplied Plane Prefab object (with type OVRSceneAnchor) and add the components OVRScenePlane and OVRSemanticClassification if they are not present.

OVRSceneRoom

OVRSceneRoom represents the anchors that are part of the structures that make up a room, providing direct access to the created Unity game objects for the ceiling, floor and walls. All invisible wall faces are also classified as wall faces, and are accessible in the OVRSceneRoom’s walls. The order of the walls is the order in which the user drew them. To access the wall shape as a closed polygon, use the Boundary on the floor’s OVRScenePlane.

When OVRSceneManager instantiates user-supplied prefabs for scene anchors, it creates a hierarchy which uses OVRSceneRoom as the container game object. OVRSceneRooms can be spawned as top-level game objects, or parented to another game object using OVRSceneManager.InitialAnchorParent. You can iterate over scene anchors or scene rooms using the Unity Transform.

OVRScenePlane

A scene plane (OVRScenePlane) represents the two-dimensional bounds of a scene anchor. The system automatically adds a scene plane to any scene anchor with a two-dimensional boundary. For example, floors, ceilings, and walls have an OVRScenePlane component, as do surfaces on things like tables and couches. OVRScenePlanes may optionally include a Boundary: floor scene anchors have a boundary which represents the closed polyline of the walls.

When you instantiate a scene plane, the system sets the localScale of each direct child transform of the prefab according to the 2D extents of the plane. The localScale is set in meters. Setting the localScale enables things like stretching a wall prefab to the actual dimensions of the wall. If an offset exists between the pivot and center of the plane, child transforms may also have their localPosition modified.
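As an illustrative sketch, a script on a spawned plane prefab could also read the plane size directly rather than relying on the scaled child transforms. The Dimensions property name is an assumption here and may differ by SDK version:

var scenePlane = GetComponent<OVRScenePlane>();
// width and height of the plane in meters (property name assumed)
Debug.Log($"Plane size: {scenePlane.Dimensions.x} x {scenePlane.Dimensions.y}");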

OVRScenePlaneMeshFilter

Each scene plane has a 2D boundary, accessible via the OVRScenePlaneMeshFilter. This component generates a planar mesh from the 2D boundary, which you can use for rendering, physics, or any other purpose.

In the case of an OVRScenePlane which has the semantic classification for FLOOR, the Boundary represents the closed polyline of the walls.

OVRSceneVolume

A scene volume (OVRSceneVolume) represents the three-dimensional bounds of a scene anchor. The system automatically adds a scene volume to any scene anchor with 3D boundary extents. You can see examples of this functionality in the user-defined volumes labeled OTHER in space setup.

When you instantiate a scene volume, the system sets the localScale of each direct child transform of the prefab according to the 3D extents of the volume. The localScale is set in meters. During this process, the system will stretch prefabs to the dimensions of the volume. If an offset exists between the pivot and the center point of the top face of the volume, child transforms may also have their localPosition modified.

OVRSceneVolumeMeshFilter

A scene anchor may optionally have a triangle mesh representation, accessible via the OVRSceneVolumeMeshFilter. This component populates a Unity MeshFilter component with a Mesh on startup, and optionally bakes a MeshCollider component if it is attached (using the default cooking options).

Creating the mesh and baking the collider are performed using the Unity job system in order to avoid blocking the main thread due to a large number of triangles. Use the property IsCompleted in order to know when the process has finished.
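For example, a minimal sketch of waiting for the bake to finish before relying on the mesh or collider (the coroutine itself is illustrative, not part of the SDK):

IEnumerator WaitForSceneMesh(OVRSceneVolumeMeshFilter volumeMeshFilter)
{
    // IsCompleted becomes true once the mesh (and optional collider) are ready
    while (!volumeMeshFilter.IsCompleted)
        yield return null;

    Debug.Log("Scene mesh is ready for rendering and physics.");
}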

Currently, only scene anchors with a semantic classification of GLOBAL_MESH contain an OVRSceneVolumeMeshFilter.

OVRSemanticClassification

A scene anchor may have one or more string labels that provide additional semantic information describing the anchor’s object type, for example, COUCH or WALL_FACE. The system automatically adds OVRSemanticClassification to scene anchors that have semantic labels.
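As a short sketch (sceneAnchor is assumed to be a reference to a spawned scene anchor; the Contains helper is assumed here, and you can equivalently inspect the labels list):

if (sceneAnchor.TryGetComponent(out OVRSemanticClassification classification) &&
    classification.Contains(OVRSceneManager.Classification.Couch))
{
    Debug.Log("This anchor is classified as a couch.");
}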

OVRSceneObjectTransformType

If you want more control over how child objects within a prefab are scaled and offset by OVRScenePlane and OVRSceneVolume, add the OVRSceneObjectTransformType component to the child objects to choose which component applies the transform.

When OVRScenePlane and OVRSceneVolume set the localScale and localPosition of the child objects, they do so in the following order: firstly, by checking whether OVRSceneObjectTransformType is defined on the child; then by seeing whether the OVRSceneVolume is modifying the scale/offset; and lastly, by looking at the OVRScenePlane values for scale and offset.

Anchor Pivots

2D planes and 3D volumes use the following pivot rules for each scene anchor. Note that in the design-time scene descriptions below, we place the prefabs at +90 degrees to accommodate the default OpenXR coordinate system (a Cartesian right-handed coordinate system). For more information, see the OpenXR Spec section 2.18, Coordinate System.

Plane Pivots

A 2D plane’s pivot is at the center of the plane, with +Z pointing in the direction of the plane’s normal. So, for example, an OVRScenePlane representing a floor will have its +Z pointing up. In the following graphics, the X vector is red, the Y vector is green, and the Z vector is blue.

2D pivot plane

Walls are oriented such that +Z points out from the wall and +Y is always up.

Wall plane

Volume Pivots

The pivot for an OVRSceneVolume is in the middle of its top surface. Since the normal of the top surface points upward, +Z is also up in this case.

Volume pivot

Composite

Some objects are represented by both a plane and a volume, such as tables and couches. The plane represents the flat area of the object, such as a couch seat, while the volume corresponds to the bounding box which contains the entire object. The pivot is always at the center of the plane, while an offset is used to capture the difference between the pivot of the plane and the pivot of the volume.

In the case of a table, the plane is at the top face of the volume, resulting in an offset of 0. In the case of a couch, the plane is within the volume, resulting in a non-zero offset for the volume. In this example image, the offset is the difference between the plane pivot axes and the sphere at the center of the volume’s top face.

Composite pivot

Scene Mesh

The Scene Mesh is a low-fidelity, high-coverage artifact which describes the boundary between free and occupied space in a room. It is generated automatically during the space setup experience, and available for applications to query using the OVRSceneManager. If the space setup falls back to the manual mode, then a Scene Mesh will not be in the Scene Model.

In order to spawn an object for the Scene Mesh, you must use Prefab Overrides. This is because the Scene Mesh is considered neither a Plane Prefab nor a Volume Prefab. Select the label GLOBAL_MESH and add your prefab.

Accessing the mesh data is done through the OVRSceneVolumeMeshFilter, which populates a Unity MeshFilter at runtime with a mesh retrieved from the system. You can optionally add a MeshRenderer to visualize the mesh, and a MeshCollider for physics and collisions.

The example prefab shown here retrieves the mesh at runtime, bakes it for physics calculations, and renders it using the default Unity material.

Next steps

Now that you’ve learned how to access the Scene Model in Unity, you have all the necessary tools to work with Scene in Unity.

To see code examples of Scene being used in various scenarios, check out our Samples. To see how the user’s privacy is protected through permissions, see Spatial Data Permission. To learn about using Scene in practice, see our Best Practices.

Access Scene data with OVRAnchor


On this page, you will learn how to access Scene data directly and create your own Scene Manager using the OVRAnchor components.

What are the OVRAnchor components?

OVRSceneManager implements the most common use case for Scene data: spawning a Unity prefab for any given Scene element when asked to load. The lower-level API, by contrast, more closely resembles how the system represents and exposes the Scene. To have more control over what happens in Unity when receiving Scene data, use the OVRAnchor API.

OVRAnchor components are lightweight wrappers around the lower-level Scene functionality. Most Scene API functions are asynchronous: you can access them by calling a non-blocking function, subscribing to an event, and putting your logic in the event callback, or you can use C#’s async/await functionality, which is how OVRAnchor works.

How to interact with Scene data

As the Scene API consists of an ECS-like mechanism, we are mostly interested in either fetching entities, or in getting components that hold data.

In order to access anchors/entities, you use:

Find Anchors By Component: all anchors have components. You can get a list of all known anchors that have a specific component (such as all anchors that have the Room Layout component). In code, this function is OVRAnchor.FetchAnchorsAsync().
Find Anchors By UUID: all anchors have a UUID. Use it to retrieve anchors. In code, this function is also OVRAnchor.FetchAnchorsAsync().

Once you have an anchor, you can retrieve its components and thereby access its data. An anchor can have any number of components, although this can be known ahead of time (see Common Scene Anchors).

It is important to note that while OVRAnchor APIs are asynchronous, the awaiter implementation cannot block the main thread (unlike typical async functions). This is because the events that complete OVRAnchor tasks are only invoked on the main thread.

Control flow for rooms and child anchors

When you start without any prior knowledge for an environment, you will likely follow a specific flow to retrieve the contents of your Scene.

Find all anchors that have the component RoomLayout.
For each of these anchors, get the component AnchorContainer to access the room’s child anchors. Optionally, use the RoomLayout to access the ceiling, floor or walls directly.
Iterate over the child anchors, getting components whose data you are interested in retrieving.
If you want to localize the anchor, enable the locatable component. Then you can access its pose with respect to a camera, updating an object’s transform accordingly.
If you want to know the dimensions, query the 2D and/or 3D bounds and scale your Unity object accordingly.

In code, this looks as follows:

var anchors = new List<OVRAnchor>();
await OVRAnchor.FetchAnchorsAsync<OVRRoomLayout>(anchors);

// no rooms - call Space Setup or check Scene permission
if (anchors.Count == 0)
    return;

// access anchor data by retrieving the components
var room = anchors.First();

// access the ceiling, floor and walls with the OVRRoomLayout component
var roomLayout = room.GetComponent<OVRRoomLayout>();
if (roomLayout.TryGetRoomLayout(out Guid ceiling, out Guid floor, out Guid[] walls))
{
    // use these guids to fetch the OVRAnchor object directly
    await OVRAnchor.FetchAnchorsAsync(walls, anchors);
}

// access the list of children with the OVRAnchorContainer component
if (!room.TryGetComponent(out OVRAnchorContainer container))
    return;

// use the component helper function to access all child anchors
await container.FetchChildrenAsync(anchors);

Data components

As all data are stored in components, there is a one-to-one mapping between the components at the Scene data level and in OVRAnchor. Not all anchors have all components, so it is recommended to use OVRAnchor.TryGetComponent() to see if a certain anchor has the component in question.

OVRLocatable: controls the tracking of an anchor. Contains functions TryGetSceneAnchorPose() and TryGetSpatialAnchorPose() (see Spatial Anchors for more information).
OVRSemanticLabels: contains a list of all the semantic classification labels of an anchor.
OVRBounded2D: provides access to the bounding plane information of an anchor. Contains property BoundingBox and functions TryGetBoundaryPointsCount()/TryGetBoundaryPoints().
OVRBounded3D: provides access to the bounding volume information of an anchor. Contains property BoundingBox.
OVRTriangleMesh: provides access to triangle mesh information of an anchor. Contains functions TryGetCounts() and TryGetMesh().
OVRRoomLayout: provides access to the floor, ceiling and wall information of an anchor. An anchor only has this component when it is a room anchor. Contains functions FetchLayoutAnchorsAsync() and non-async TryGetRoomLayout().
OVRAnchorContainer: provides access to child anchors. This is most commonly used for room anchors, where the child anchors correspond to all the elements within a room. Contains function FetchChildrenAsync().

In the following code, you iterate over the room elements, find the first floor anchor, match the object’s transform to the anchor’s transform, and get the dimensions of the bounding plane.

foreach (var anchor in anchors)
{
    // check that this anchor is the floor
    if (!anchor.TryGetComponent(out OVRSemanticLabels labels) ||
        !labels.Labels.Contains(OVRSceneManager.Classification.Floor))
    {
        continue;
    }

    // enable locatable/tracking
    if (!anchor.TryGetComponent(out OVRLocatable locatable))
        continue;
    await locatable.SetEnabledAsync(true);

    // localize the anchor
    locatable.TryGetSceneAnchorPose(out var pose);
    this.transform.SetPositionAndRotation(
        pose.ComputeWorldPosition(Camera.main),
        pose.ComputeWorldRotation(Camera.main)
    );

    // get the floor dimensions
    anchor.TryGetComponent(out OVRBounded2D bounded2D);
    var size = bounded2D.BoundingBox.size;

    // only interested in the first floor anchor
    break;
}

In order to prevent pose transforms from affecting the raw component data, it is recommended to have a parent Unity game object that only applies the pose, and to have child Unity game objects for the plane, volume and mesh. This will allow you to use the component data on an object’s transform.

// we previously set this object's transform using the OVRLocatable pose
var parent = this.gameObject;

// create a child Unity game object
var plane = GameObject.CreatePrimitive(PrimitiveType.Cube);
plane.transform.SetParent(parent.transform, false);

// set the object transform to the bounds
anchor.TryGetComponent(out OVRBounded2D bounded2D);
plane.transform.localScale = new Vector3(
    bounded2D.BoundingBox.size.x,
    bounded2D.BoundingBox.size.y,
    0.01f);

When to query the Scene data

The OVRSceneManager prefab will load data when its LoadSceneModel() function is called, which is commonly on app start. However, it is possible to query for Scene Model data at any point during the app’s lifecycle.

The Scene Model data changes when Space Setup/Scene Capture is invoked. An app may trigger this process, which will result in new Scene data to load. As the Space Setup process pauses the app, you can limit the query for new Scene data to when the app resumes from pause (for example, in Unity’s OnApplicationPause callback).
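A minimal sketch of this pattern on a MonoBehaviour follows; RefreshSceneDataAsync is a hypothetical helper that re-runs the OVRAnchor.FetchAnchorsAsync flow shown earlier:

async void OnApplicationPause(bool isPaused)
{
    // when returning from Space Setup, the app resumes (isPaused == false)
    if (!isPaused)
    {
        await RefreshSceneDataAsync(); // hypothetical: re-fetch rooms and child anchors
    }
}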

Next steps

Now that you’ve learned how to access Scene Model data using low-level OVRAnchor components, you have all the necessary tools to create your own Scene Manager in Unity.

To see code examples that create a Scene Manager with the OVRAnchor API, have a look at our Custom Scene Manager Sample. To see further examples of Scene being used, check out our Samples. To see how the user’s privacy is protected through permissions, see Spatial Data Permission. To learn about using Scene in practice, see our Best Practices.

Unity Custom Scene Manager Sample


This page shows how you can use the OVRAnchor API to access Scene data directly and create your own Scene Manager.

Prerequisites

Before you can build this sample, review the requirements in the Scene Samples Overview. This sample requires you to be comfortable with the OVRAnchor API.

Ensure that you have set the permissions correctly for accessing Spatial Data:

In OVRCameraRig > OVRManager > Quest Features, make sure to set Scene Support to Required. This will add the correct manifest tags.
Further down in the OVRManager > Permission Requests On Startup, make sure to check the Scene checkbox. This will perform a runtime permission request when the app begins.
If you have previously created an Android Manifest, update it with Toolbar > Oculus > Tools > Update AndroidManifest.xml.

How does the sample work?

The Custom Scene Manager sample is a single Unity scene that contains a Scene Manager Unity game object with multiple scripts. Each script is an example of a custom Scene Manager, and only one should be enabled when running the sample.

Custom Scene Manager - Unity overview

Try out each of the Scene Managers by toggling its script and hitting Play in-Editor. The snapshot and dynamic scene managers require you to launch Space Setup within the application, and should therefore be run on-device.

As the scenes require a Scene Model, you should have run Space Setup before trying the sample. All scripts have a fallback for requesting Space Setup/Scene Capture if no anchors were returned when looking for room anchors.

Basic Scene Manager

Basic Scene Manager is the minimal example to spawn Unity content for Scene data. When the script starts, it loads the Scene data and spawns Unity primitives with a random color in the correct location.

This is done as follows:

Fetch all anchors that have the component OVRRoomLayout.
Create a Unity game object for each room.
Iterate over all room anchors, and get the contents through the OVRAnchorContainer component.
Iterate over all child anchors and perform the following steps.
Localize each anchor by setting the enabled state of the OVRLocatable component to true.
Create a new Unity game object, parented to the room. Name it with the labels retrieved through the OVRSemanticLabels component. Set the transform using the OVRLocatable.TrackingSpacePose and the Camera.main.
If an OVRBounded2D component exists, create a new child Unity game object (using the cube primitive), set the transform to the dimensions of the bounds and set the MeshRenderer material to a random color.
If an OVRBounded3D component exists, create a new child Unity game object (using the cube primitive), set the transform to the dimensions of the bounds and set the MeshRenderer material to a random color.
If an OVRTriangleMesh component exists, create a new child Unity game object (using the quad primitive), create a new Unity Mesh with the component data, update the mesh of the MeshCollider and MeshRenderer components and set the MeshRenderer material to a random color.

After the data have been spawned, we are finished and will not perform any further updates. However, it is common to update the transform from the OVRLocatable as tracking moves the object to apply corrections, but the Basic Scene Manager doesn’t do this.

Basic Scene Manager - Unity primitives spawned for Scene data

Prefab Scene Manager

The Prefab Scene Manager extends the Basic Scene Manager by filtering on semantic classification, spawning a user-supplied prefab for walls, ceilings, and floors, and spawning a fallback prefab for all other objects.

The differences to the Basic Scene Manager are as follows:

Instead of spawning Unity primitives, spawn a user-supplied prefab. This prefab is still spawned as a parent of the Unity game object that is responsible for being positioned by OVRLocatable.TrackingSpacePose.
The logic to size the Unity game object is split between the 2D objects (walls, ceiling and floor) and the 3D objects (all under the fallback prefab).
The spawned prefabs contain the label of the Scene Anchor so that they can be viewed through the headset.
All the OVRLocatable objects are cached when loading. This allows us to refresh the poses (such as when recentering or when the camera’s tracking space has changed), which this app demonstrates by simply performing a refresh every 5 seconds.

While the Prefab Scene Manager is still fairly primitive, it closely matches the core functionality of the OVRSceneManager.

Basic Scene Manager - user prefabs spawned for Scene data

Snapshot Scene Manager

The Snapshot Scene Manager moves away from the static model of loading data a single time, and instead periodically loads all the Scene data, performing a comparison between different snapshots and logging the differences to the debug console.

It works as follows:

Every 5 seconds, we create a snapshot of the Scene data. The snapshot consists only of a list of OVRAnchors.
We compare snapshots in 3 steps, comparing anchors between the snapshots and performing a special comparison for new rooms.
Iterate over all the anchors in snapshot 1 and check if the anchor is in snapshot 2. If not, then it is assumed deleted/missing.
Iterate over all the anchors in snapshot 2 and check if the anchor is in snapshot 1. If not, then it is assumed to be new.
Iterate over all new anchors that are rooms. If any of the room’s children are in the other snapshot, then this room has been changed, otherwise it is a new room.
Each change is logged to Unity’s Debug Log.

The Snapshot Scene Manager assumes a dynamic Scene Model where not all anchors are available when querying the Scene data the first time.

Dynamic Scene Manager

The Dynamic Scene Manager builds on the Snapshot Scene Manager and links all changes to spawned Unity game objects that can be updated as the Scene data is changed.

It works as follows:

Collect a snapshot of the Scene data periodically, similar to the Snapshot Scene Manager. The snapshot has been extended from a list of OVRAnchors to also contain bounds and child anchor information.
We perform the same snapshot comparison as the Snapshot Scene Manager for anchors and rooms, though we can now use the cached child anchors in the snapshots.
Beyond the basic comparison, we perform a bounds comparison for all anchors in a room identified as changed by the previous step.
We get a list of all the changes between 2 snapshots and apply updates to the Unity objects. For new objects, we spawn Unity game objects like the Basic Scene Manager. For deleted/missing objects, we simply delete the Unity game object. For changed objects, we reset the location transform, and update the bounds or mesh if such a component exists on the object.
The Unity objects contain a reference to an OVRAnchor from a previous snapshot. This is updated by finding OVRAnchor pairs between the snapshots.

Similar to the Snapshot Scene Manager, the Dynamic Scene Manager assumes a dynamic Scene Model. It also provides an important optimization over the OVRSceneManager by updating content, instead of respawning all content.

Key Assets

OVRCameraRig Game Object | .\Assets\Oculus\VR\Prefabs | Provides a VR/MR-ready camera and configures the device using the OVRManager component.

Basic Scene Manager Script | .\Assets\StarterSamples\Usage\SceneManager\Scripts\CustomSceneManager | This script creates Unity primitives for all elements of the Scene.

Prefab Scene Manager Script | .\Assets\StarterSamples\Usage\SceneManager\Scripts\CustomSceneManager | This script instantiates user-provided prefabs for certain elements of the Scene.

Snapshot Scene Manager Script | .\Assets\StarterSamples\Usage\SceneManager\Scripts\CustomSceneManager | This script will poll the Scene data, save snapshots at each poll, and log any changes to the console.

Scene Manager Helper Script | .\Assets\StarterSamples\Usage\SceneManager\Scripts\CustomSceneManager | This helper script is used by the other Scene Manager scripts to share common code.

Dynamic Scene Manager Script | .\Assets\StarterSamples\Usage\SceneManager\Scripts\CustomSceneManager | This script extends the Snapshot Scene Manager by maintaining a link of Unity game objects for Scene data snapshots to allow updates.

Dynamic Scene Manager Helper Script | .\Assets\StarterSamples\Usage\SceneManager\Scripts\CustomSceneManager | This helper script is used by the Dynamic Scene Manager to keep the core functionality legible.

Spatial Anchors Overview


Health & Safety Recommendation

While building mixed reality experiences, we highly recommend evaluating your content to offer your users a comfortable and safe experience. Please refer to the Health and Safety and Mixed Reality Design guidelines before designing and developing your app.

Co-location increases the number of individuals in a shared physical space with restricted visibility of their surroundings. Crowded experiences create safety risks. It is important to be mindful of the occupancy of the playspace for the shared experience being built. For more information, see Shared Spatial Anchors health and safety guidelines. Additionally, a sample developer app with health and safety suggestions is available for easy integration.

Spatial anchors are world-locked frames of reference you can use to position content. They are transforms that represent fixed physical locations in the real world.

You can use spatial anchors to enrich the user experience in many ways, such as placing virtual signs on real furniture, or creating spawn points on real windows for virtual characters to fly through.

Samples

Spatial Anchors Sample illustrates use of the Spatial Anchors API.
Shared Anchors Sample shows how to use both local and shared anchors.

Creating a New Spatial Anchor

Spatial Anchors are Most Effective within 3 Meters

Create a spatial anchor within 3m of the object you want to anchor. Otherwise, your pose will be incorrect due to drift (that is, even though the location of the spatial anchor remains the same, the farther the user moves away from the spatial anchor, the greater the drift of the objects attached to the spatial anchor will seem to that user). If you attach content to a nearby anchor (within 3m), it will remain in place relative to the anchor, even if the user goes farther away than 3m.

A spatial anchor is a world-locked frame of reference. It represents a fixed position and orientation in the real world. You create a spatial anchor at a specific 6DOF pose and then use it to drive the transform of your virtual content.

You create spatial anchors using the Unity method GameObject.AddComponent<OVRSpatialAnchor>().
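For instance, a minimal sketch (the placement pose is assumed to come from your own logic, such as a controller raycast):

// create an empty GameObject at the chosen pose and make it a spatial anchor
var anchorObject = new GameObject("MySpatialAnchor");
anchorObject.transform.SetPositionAndRotation(placementPosition, placementRotation);
anchorObject.AddComponent<OVRSpatialAnchor>();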

Spatial Anchor Storage Targets

Saving a spatial anchor can involve two different storage target types.

Local Storage: The spatial anchor object is stored directly in the user’s headset, and persists through sessions. This provides the app quick access to spatial anchors.
Cloud Storage: If the user has authorized cloud access, the spatial anchor object is stored in a cloud location associated with the user’s account. You do this in preparation for sharing the spatial anchors among a group of users. For more information, see Shared Spatial Anchors.

Note that once you have saved a spatial anchor in local or cloud storage, you need its UUID to load it. You must save the spatial anchor UUID to a separate location to refer to the anchor in a future session. In Unity, you can use PlayerPrefs for this purpose. We provide an example of this in Save Spatial Anchor UUIDs to PlayerPrefs.

Destroy a Spatial Anchor

Once you create a spatial anchor, you can destroy it using the Unity engine method Destroy().

When you destroy a spatial anchor, you remove the spatial anchor from the user’s experience. Any virtual object that was using the spatial anchor remains, though its transform will no longer be driven by the spatial anchor, so it is no longer locked to its previous real-world location.

Destroying a spatial anchor does not remove the spatial anchor from local or cloud storage. If you no longer need the anchor, you should also Erase it from all storage.

Save a Spatial Anchor Locally or to the Cloud

You can save a spatial anchor object locally or to the cloud using the OVRSpatialAnchor.Save() method, specifying the destination with the OVRSpace.StorageLocation enum.

Note that you only ever need to store spatial anchors to the cloud if you need to share the anchors with another user. Otherwise, the parameter is optional and defaults to local storage for loading the anchor at a later stage.
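As a sketch of the cloud case (anchor is assumed to be a created OVRSpatialAnchor; the SaveOptions struct and its Storage field are used here as in the SDK samples, though exact names may vary by SDK version):

// save to the cloud in preparation for sharing the anchor later
anchor.Save(
    new OVRSpatialAnchor.SaveOptions { Storage = OVRSpace.StorageLocation.Cloud },
    (savedAnchor, success) => Debug.Log($"Cloud save succeeded: {success}"));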

Save Spatial Anchor UUIDs to External Storage

The OVRSpatialAnchor.Save() method saves the entire spatial anchor object to local or cloud storage. To load it back into the user’s experience, you need its UUID. You can track the UUIDs during runtime, but if you want to make it easy to manage saved spatial anchors in future sessions, you need to save them to an external storage location.

In the StarterSamples Spatial Anchors sample, we make use of Unity PlayerPrefs to persist the spatial anchor UUIDs. The UUIDs are later used as one parameter to the OVRSpatialAnchor.LoadUnboundAnchors() method.

Load a Saved Spatial Anchor

If you want to be able to bring a saved spatial anchor back into the user’s experience, you can load it as long as you have saved it to local or cloud storage first. If you are tracking the spatial anchor UUIDs during runtime, you can quickly load the anchor using its UUID. With the spatial anchor UUID, you load the anchor (as an unbound anchor), localize it, and bind it to a game object’s OVRSpatialAnchor component. See Load a Spatial Anchor from the Headset or the Cloud for more information.

Erase a Spatial Anchor from Local or Cloud Storage

A spatial anchor can be erased from local or cloud storage. This does not affect its existence in the user’s experience, but it means the spatial anchor will not be available in future sessions unless you save it again before the session ends. You use the OVRSpatialAnchor.Erase() method to erase a spatial anchor.

Samples and Showcase Apps

To quickly build a sample that uses spatial anchors, check out the Spatial Anchors sample in the Starter Samples.

Also, you can find more examples of using spatial anchors with Meta Quest in the oculus-samples GitHub repository. The Unity-Discover and Unity-SharedSpatialAnchors apps both highlight the implementation of spatial anchors.

Use Spatial Anchors


Spatial anchors enable you to provide users with an environment that is consistent and familiar. Users expect objects they place or encounter to be in the same location the next time they enter the scene.

OVRSpatialAnchor Component

In your code, you use the OVRSpatialAnchor component to place objects in your scenes. With this API, you can:

Create a new spatial anchor
Save a spatial anchor to the user’s headset, or to the cloud
Update a spatial anchor
Load a spatial anchor stored in the user’s headset or stored in the cloud
Erase a spatial anchor from the headset or the cloud
Remove a spatial anchor from the current app (destroy it)
Share a spatial anchor with other users
Save spatial anchor UUIDs to PlayerPrefs
Load spatial anchor UUIDs from PlayerPrefs

The OVRSpatialAnchor Unity component encapsulates a spatial anchor’s entire lifecycle, including creation, destruction, and saving to or erasing from the headset or cloud storage. Each spatial anchor has a unique identifier (UUID) that is assigned upon creation and remains constant throughout the life of the spatial anchor.

Prerequisites

Before working with Spatial Anchors, you should make sure you have Unity set up properly for Meta Quest builds. Also, if you can complete Create your First VR App on Meta Quest Headset and can build the Spatial Anchors sample, then you can build apps using Shared Spatial Anchors.

Create Spatial Anchors

To create a new spatial anchor in the Meta Quest runtime, use any Unity GameObject and call AddComponent<OVRSpatialAnchor>() on it.

Once it is created, the new OVRSpatialAnchor is assigned a unique identifier (UUID) (represented by a System.Guid in Unity), which you can use to load the spatial anchor after it has been saved locally or to the cloud. In the frame following its instantiation, the OVRSpatialAnchor component uses its current transform to generate a new spatial anchor in the Meta Quest runtime. Because the underlying runtime representation of the spatial anchor is asynchronous, the UUID might not be valid immediately.

One way to work with this time delay is to use a Unity coroutine to ensure the spatial anchor exists before you try to use it.

This can be seen in the following excerpt.

public void CreateSpatialAnchor()
{
    GameObject gs = new GameObject();
    OVRSpatialAnchor workingAnchor = gs.AddComponent<OVRSpatialAnchor>();

    StartCoroutine(anchorCreated(workingAnchor));
}

public IEnumerator anchorCreated(OVRSpatialAnchor workingAnchor)
{
    // keep checking for a valid and localized spatial anchor state
    while (!workingAnchor.Created && !workingAnchor.Localized)
    {
        yield return new WaitForEndOfFrame();
    }

    Guid anchorGuid = workingAnchor.Uuid;
}

Save a Spatial Anchor with a Unity Coroutine

You can save spatial anchor objects locally (to the headset) for quick re-loading. Because the saving action is asynchronous, you can use a Unity coroutine as a delegate to control when your post-save actions begin. In this excerpt, we respond to a button press by creating a spatial anchor. The anchorCreated() coroutine ensures that we don’t try to save the newly created anchor before it is valid.

public void OnAButtonPressed()
{
    GameObject gs = new GameObject();
    OVRSpatialAnchor workingAnchor = gs.AddComponent<OVRSpatialAnchor>();

    StartCoroutine(anchorCreated(workingAnchor));
}

public IEnumerator anchorCreated(OVRSpatialAnchor workingAnchor)
{
    // keep checking for a valid and localized spatial anchor state
    while (!workingAnchor.Created && !workingAnchor.Localized)
    {
        yield return new WaitForEndOfFrame();
    }

    // when ready, save the spatial anchor using OVRSpatialAnchor.Save()
    workingAnchor.Save((anchor, success) =>
    {
        if (!success) return;

        // The save is successful. Now we can save the spatial anchor
        // to a global List for convenience.
        _allAnchors.Add(workingAnchor);
    });
}

OVRSpatialAnchor.Save() defaults to using the OVRSpace.StorageLocation.Local enum, which is what we want here.

If you were planning to share the spatial anchor, then you must save it with OVRSpace.StorageLocation.Cloud before sharing. Saving to local storage (with the optional parameter OVRSpace.StorageLocation.Local) is an optional optimization in this case. If you save the anchor locally, you can reload it the next time the app is started instead of creating a new one.

Note that you only ever need to save using OVRSpace.StorageLocation.Cloud if you need to share the anchor in the future using shared spatial anchors. For all other use cases, saving without any parameters should work.

Update a Spatial Anchor

Once you have successfully created the spatial anchor in the Meta Quest runtime, the OVRSpatialAnchor component updates its transform automatically. This automatic update is necessary because spatial anchors may drift slightly over time.

This excerpt from the Starter Samples OVRSpatialAnchor.cs script shows how the Spatial Anchors sample does an explicit update:

if (TryGetPose(Space, out var pose))
{
    transform.SetPositionAndRotation(pose.position, pose.orientation);
}

Find OVRSpatialAnchor.cs in .\Assets\Oculus\VR\Scripts.

Load a Spatial Anchor from the Headset or the Cloud

Anchors are loaded in three steps:

Load unbound spatial anchors using their UUID
Localize each spatial anchor
Bind each spatial anchor to OVRSpatialAnchor

Each of these is described below.

Load Unbound Anchors

The first step is to load a collection of unbound spatial anchors using OVRSpatialAnchor.LoadUnboundAnchors(). An unbound spatial anchor is a spatial anchor you previously saved locally but haven’t bound to an OVRSpatialAnchor component. The load operation is asynchronous and may take multiple frames to complete, so we advise using a delegate as with the Save a Spatial Anchor with a Unity Coroutine example above.

public static bool LoadUnboundAnchors(LoadOptions options, Action<UnboundAnchor[]> onComplete)

The LoadOptions parameter requires an explicit list of UUIDs to load. This operation is asynchronous, so as we note above you must supply an onComplete callback to handle the results.

This excerpt from the Starter Samples script SpatialAnchorLoader.cs shows loading a collection of unbound spatial anchors. In this implementation, the call to LoadUnboundAnchors() is done as a lambda expression:

private void Load(OVRSpatialAnchor.LoadOptions options) =>
    OVRSpatialAnchor.LoadUnboundAnchors(options, anchors =>
    {
        if (anchors == null)
        {
            Log("Query failed.");
            return;
        }

        ...
    });

Find SpatialAnchorLoader.cs in .\Assets\StarterSamples\Usage\SpatialAnchor\Scripts.

Localize each anchor

Localizing a spatial anchor causes the system to determine the spatial anchor’s pose in the world.

Note: The term localize is used here in the context of Simultaneous Localization and Mapping (SLAM).

In the second step, use OVRSpatialAnchor.UnboundAnchor.Localize() to localize each spatial anchor.

public void Localize(Action<UnboundAnchor, bool> onComplete = null, double timeout = 0)

If you have content associated with the spatial anchor, you should make sure that you have localized the spatial anchor before instantiating its associated content. (You may skip this step if you do not need the spatial anchor’s pose immediately.)

This stage is also asynchronous, so we advise using a coroutine delegate as with the Save a Spatial Anchor with a Unity Coroutine example above.

Some spatial anchors may already be localized. You can check whether this is the case using the OVRSpatialAnchor.UnboundAnchor.Localized property:

This excerpt from the Starter Samples script SpatialAnchorLoader.cs shows localizing a collection of unbound spatial anchors:

foreach (var anchor in anchors)
{
    if (anchor.Localized)
    {
        _onLoadAnchor(anchor, true);
    }
    else if (!anchor.Localizing)
    {
        anchor.Localize(_onLoadAnchor);
    }
}

Find SpatialAnchorLoader.cs in .\Assets\StarterSamples\Usage\SpatialAnchor\Scripts.

The Localize() method above may fail if the spatial anchor is in a part of the environment that is not perceived or is poorly mapped, such as if the spatial anchor is behind the user. In that case, you can try to localize the spatial anchor again. You might also consider guiding the user to look around their environment.

Bind Each Spatial Anchor to OVRSpatialAnchor

In the third step, you bind a spatial anchor to its intended game object’s OVRSpatialAnchor component. Unbound spatial anchors should be bound to an OVRSpatialAnchor component to manage their lifecycle and to provide access to other features such as Save(), Erase(), and Destroy().

You should bind an unbound anchor to an OVRSpatialAnchor component immediately upon instantiation with OVRSpatialAnchor.UnboundAnchor.BindTo().

This excerpt from the Starter Samples script SpatialAnchorLoader.cs shows testing and binding of the localized spatial anchor.

var pose = unboundAnchor.Pose;
var spatialAnchor = Instantiate(_anchorPrefab, pose.position, pose.rotation);
unboundAnchor.BindTo(spatialAnchor);

if (spatialAnchor.TryGetComponent<Anchor>(out var anchor))
{
    // We just loaded it, so we know it exists in persistent storage.
    anchor.ShowSaveIcon = true;
}

The call to Instantiate is to the UnityEngine.Object.Instantiate() method.

Note: An OVRSpatialAnchor component that is not bound to an existing spatial anchor will create a new spatial anchor after the first frame of its existence.

Erase Spatial Anchors from the Headset or the Cloud

Use the OVRSpatialAnchor.Erase() method to erase a spatial anchor from local or cloud storage. As with Save(), the OVRSpace.StorageLocation enum governs the storage to erase the spatial anchor from.

The Erase() operation is asynchronous. As with the other asynchronous methods listed in this topic, we advise using a delegate as with the Save a Spatial Anchor with a Unity Coroutine example above. The delegate takes two arguments:

OVRSpatialAnchor: The spatial anchor you are erasing
bool: true if the operation succeeded; otherwise, false

For example, this excerpt from the OnEraseButtonPressed() action in Anchor.cs governs the erasure of a spatial anchor in the Starter Samples Spatial Anchors sample:

if (!_spatialAnchor) return;

_spatialAnchor.Erase((anchor, success) =>
{
    if (success)
    {
        _saveIcon.SetActive(false);
    }
});

Find Anchor.cs in .\Assets\StarterSamples\Usage\SpatialAnchor\Scripts.

Destroy Spatial Anchors

When you destroy an OVRSpatialAnchor component, it removes the spatial anchor from the Meta Quest runtime. Note that destroying a spatial anchor only destroys the current runtime instance. It does not affect spatial anchors in local or cloud storage. Destroying a spatial anchor means the Meta Quest runtime is no longer tracking it. However, the spatial anchor continues to remain where you stored it (local, cloud, or both). Therefore if a spatial anchor is saved, you can reload the destroyed spatial anchor object if you have its UUID.

This excerpt from the OnHideButtonPressed() action in the Anchor.cs script destroys a spatial anchor.

public void OnHideButtonPressed()
{
    Destroy(gameObject);
}

Find Anchor.cs in .\Assets\StarterSamples\Usage\SpatialAnchor\Scripts.

Save Spatial Anchor UUIDs to PlayerPrefs

The OVRSpatialAnchor methods Save() and Erase() operate on spatial anchors stored in local or in cloud storage. When you want to refer to them, you need their UUIDs. To enable referring to spatial anchors across sessions, you need to save their UUIDs to an external persistent store. This can be any location you choose, though with Unity it is convenient to use PlayerPrefs.

In the Starter Samples Spatial Anchors sample, Anchor.cs includes the SaveUuidToPlayerPrefs() method that saves a spatial anchor UUID to the Unity PlayerPrefs object. It is invoked after the call to OVRSpatialAnchor.Save() succeeds:

void SaveUuidToPlayerPrefs(Guid uuid)
{
    // Write uuid of saved anchor to file
    if (!PlayerPrefs.HasKey(NumUuidsPlayerPref))
    {
        PlayerPrefs.SetInt(NumUuidsPlayerPref, 0);
    }

    int playerNumUuids = PlayerPrefs.GetInt(NumUuidsPlayerPref);
    PlayerPrefs.SetString("uuid" + playerNumUuids, uuid.ToString());
    PlayerPrefs.SetInt(NumUuidsPlayerPref, ++playerNumUuids);
}

Find Anchor.cs in .\Assets\StarterSamples\Usage\SpatialAnchor\Scripts.

Load Spatial Anchor UUIDs from PlayerPrefs

After you have saved the OVRSpatialAnchor UUIDs to Unity PlayerPrefs, you retrieve them using the process outlined in Load a Spatial Anchor from the Headset or the Cloud. The source of the spatial anchor UUIDs is PlayerPrefs.

In the Starter Samples Spatial Anchors sample, SpatialAnchorLoader.cs contains methods that implement loading spatial anchor UUIDs from PlayerPrefs, following the Load process above. The LoadAnchorsByUuid() method extracts the UUIDs and adds them to the Uuids property of an OVRSpatialAnchor.LoadOptions struct. They are localized and bound later in the same script.

public void LoadAnchorsByUuid()
{
    // Get number of saved anchor uuids
    if (!PlayerPrefs.HasKey(Anchor.NumUuidsPlayerPref))
    {
        PlayerPrefs.SetInt(Anchor.NumUuidsPlayerPref, 0);
    }

var playerUuidCount = PlayerPrefs.GetInt(Anchor.NumUuidsPlayerPref);
Log($"Attempting to load {playerUuidCount} saved anchors.");
if (playerUuidCount == 0)
    return;

var uuids = new Guid[playerUuidCount];
for (int i = 0; i < playerUuidCount; ++i)
{
    var uuidKey = "uuid" + i;
    var currentUuid = PlayerPrefs.GetString(uuidKey);
    Log("QueryAnchorByUuid: " + currentUuid);

    uuids[i] = new Guid(currentUuid);
}

Load(new OVRSpatialAnchor.LoadOptions
{
    Timeout = 0,
    StorageLocation = OVRSpace.StorageLocation.Local,
    Uuids = uuids
});

}

Find SpatialAnchorLoader.cs in .\Assets\StarterSamples\Usage\SpatialAnchor\Scripts.

How to handle spatial anchors from another room or floor?

As users take their Quest devices to multiple rooms, applications loading spatial anchors from past sessions should be prepared to receive spatial anchors from rooms at a distance, or even from another floor.

For example, a ping pong game may save the spatial anchors where ping pong tables have been placed. When the app starts and loads past spatial anchors, it is possible the user is now located in a new room or floor, and the past spatial anchors are therefore far away. Restoring ping pong tables on far-away spatial anchors leads to a poor user experience – the ping pong tables are small, hard to notice, and cannot easily be accessed.

Here are best practices to handle these scenarios – these may vary depending on the specific use case of your application.

Make it easy for users to position content again – in the ping pong example above, the game should allow users to place a new ping pong table even if one was successfully restored. This makes it easy for users in a new room to set up the game without having to walk back to where they played before.
Save spatial anchors for multiple sessions – do not simply keep track of the most recently placed spatial anchor, but instead save and keep track of spatial anchors across multiple sessions. That enables users to use your app in multiple rooms, each with its own set of content.
Use the closest spatial anchors when multiple are found – when your app loads spatial anchors from past sessions, multiple of them may be returned and some may be for a distant room. Your app should decide which spatial anchor to use when these are meant to represent the same content, taking into account the distance of each spatial anchor. For example, the ping pong game above can provide a better experience by restoring the table on the closest spatial anchor instead of the most recently used one, which may be from another floor.

Tips for Using Spatial Anchors

For best results:

Along with saving anchors to local or cloud storage, you should keep track of anchors you create. If you want to refer to spatial anchors in future sessions, save the spatial anchor UUIDs to an external store. In the Starter Samples Spatial Anchors sample, the Anchor.cs script includes the SaveUuidToPlayerPrefs() method to save a spatial anchor UUID to the Unity PlayerPrefs object. (Find Anchor.cs in .\Assets\StarterSamples\Usage\SpatialAnchor\Scripts.)
Don’t bind a spatial anchor to an object that is more than 3 meters away from the spatial anchor. Create a new spatial anchor within 3m of the object instead. Any inaccuracies in pose are amplified the farther away an object is from its spatial anchor.
Use parenting to create transform hierarchies between virtual content spaced closely together and their spatial anchor. This can help keep the relative placement of the virtual content consistent.
For a large room scene, place an anchor in the middle of the room to make use of its full 3m range.
Create new anchors for every new or independent object.
Anchors cannot be moved. If the content must be moved, delete the old anchor and create a new one.
Destroy and erase any anchors you no longer need or have no content for to improve system performance. There is no maximum to the number of spatial anchors you can create.
If your app allows users to place content, allow the user to choose where an object should be located, and then create your spatial anchor at that point.
If your app shares spatial anchors with other users (Shared Spatial Anchors), communicate to users that they should walk around their playspace in a large circle prior to using the app to ensure the best experience.
If you have saved content and its anchor UUID, and the anchor can no longer be found, prompt the user to reposition the content (or auto-reposition it using the scene).

Use Shared Spatial Anchors


Compatibility with SDK Versions Earlier than V57

Shared Spatial Anchors are forward compatible, but they are not always backward compatible. Sharing anchors to SDK versions previous to V57 is not guaranteed to work. However, sharing anchors from an older SDK version to a newer SDK version (such as from V52 to V53) will work.

The Shared Spatial Anchors (SSA) feature allows players who are located in the same physical space to share content while playing the same game. With SSAs you can create a shared, world-locked frame of reference for many users. For example, two or more people can sit at the same table and play a virtual board game on top of it. Currently, SSA supports local multiplayer games in a single room.

Though you create and share spatial anchors with the Insight SDK, you enable the sharing using a third-party network solution. In our documentation and samples for Unity, we make use of Photon Unity Networking. See Multiplayer Enablement for information on best practices and references for creating multiplayer experiences for Meta Quest.

The Big Picture section below describes how SSAs fit into a multiplayer-enabled application.

Prerequisites

The basic prerequisites for the SSA feature are the same as those of the Spatial Anchors feature. In addition,

Your application must be registered on your developer dashboard, and it must support both passthrough and SSA. See the Shared Spatial Anchor section Set up Project Environment for specifics.
Set up your chosen network solution in a way that facilitates multiplayer collaboration. More on this in the next section.
There are also requirements for specific Quest headsets tied to the SDK version used: Meta Quest Pro supports Shared Spatial Anchors starting in v47. Quest 2 supports Shared Spatial Anchors starting in v49. Quest 1 does not support sharing spatial anchors.
The device setting ‘Share Point Cloud Data’ must be enabled. It can be found under Settings > Privacy > Device Permissions > Turn on “Share Point Cloud Data”. More on this below.
If you can build the Spatial Anchors Sample, you can build apps using SSA. We recommend that you do the Spatial Anchors Basic Tutorial and walk through the Shared Spatial Anchors sample for hands-on experience using spatial anchors before working with shared spatial anchors.

App Configuration

We verify the identity of the application requesting access to persisted Spatial Anchors. Because this verification uses information registered in the Store, your application will not be able to persist or share Spatial Anchors until you register your app on the Oculus Developer Dashboard.

To enable Spatial Anchor persistence and sharing in your app, you will need to:

Have a verified Meta developer account. If you haven’t yet, sign up here.
Create or join an organization you will develop your app under. For more information, see Manage Your Organization and Users.
Create an App ID.
Enable “User ID” and “User Profile” in Data Use Checkup.
When creating your app, choose either “Meta Quest (Store)” or “Meta Quest (App Lab)” depending on how you wish to eventually deploy it.
If you will use Meta Quest Link to run the app from your PC, repeat these steps to also create a Rift app.

You must complete a Data Use Checkup on each of your apps. To enable spatial anchor persistence:

Obtain admin access to your App if you don’t have it already.
Go to the developer dashboard and select your App.
Next, click Data Use Checkup in the left navigation pane.
Add the User ID and User Profile Platform Features, then submit the request.

More information about how to create your application can be found on the Creating and Managing Apps page.

Create Test Users

The features requested above will not be available to users of your app until the Data Use Checkup review process is complete. In order to proceed with development, you will want to create a Test User which is exempt from DUC requirements. You will also need to assign these test users to the Developer role in your app’s org. See How to develop apps while waiting for DUC approvals for more details.

Android Manifest

The Oculus manifest tool can be used to update your project manifest to support SSA:

Oculus > Tools > Create store-compatible AndroidManifest.xml.

For SSA and passthrough to work, the following permissions are required:

The Oculus manifest tool will add these permissions when used with the above OVRManager configuration.

Set Up App Release Channel

Adding Users

In order to obtain entitlements for the user accounts used during development (including test users), you should add them to one of the release channels associated with your app. Release Channels are configured in the Distribution section of the app configuration.

Uploading A Build

Finally, you should upload an APK for your test app to the same release channel you added your development users to. Make certain that the APK is built using the App ID that matches the one defined in the app configuration. This ensures that the entitlement checks are successful.

The Big Picture

The sharing of spatial anchors among players implies a multiplayer environment. This description refers to Photon Unity Networking, because our Shared Spatial Anchors Showcase App uses Photon.

From a high level, the process for integrating SSAs into your application successfully is as follows:

Connect users with each other: Create a room using the network solution. Get users to join the same room, either by membership, activity board, or invitation. For example, Photon’s matchmaking and lobby functions can help you accomplish this.

Create an SSA by instantiating an object that has the OVRSpatialAnchor component on it.

Save the SSA to Cloud using OVRSpatialAnchor.Save(), with the storage location Cloud. Wait for this call to complete before Sharing the SSA. You may also optionally save the spatial anchor to Local to keep it locally for longer use.

Share the SSA using OVRSpatialAnchor.Share() to all players of the room. Wait for this call to complete. Other players can now load the SSA.

Distribute the SSA UUID: For instance, broadcast the SSA’s UUID to all players in the room using the network solution.

Load the SSA: For each player, capture the broadcasted UUID from the step above, and use it to load the SSA from storage location Cloud using OVRSpatialAnchor.LoadUnboundAnchors().

Done: All players can now use the SSA as a shared coordinate frame, or origin, and use the content posed by the SSA.
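To make that sequence concrete, here is a minimal sketch of the save, share, distribute, and load steps above, written against the OVRSpatialAnchor calls described in the rest of this topic. It is a sketch, not production code: it assumes the usual using directives (System, System.Collections.Generic, UnityEngine), a surrounding MonoBehaviour, and a hypothetical BroadcastUuidToRoom helper standing in for whatever your network solution (for example, Photon) provides.

    // Host side: save the anchor to the cloud, share it with the room's users,
    // then announce its UUID so other players can load it.
    void SaveShareAndAnnounce(OVRSpatialAnchor anchor, ICollection<OVRSpaceUser> users)
    {
        anchor.Save(new OVRSpatialAnchor.SaveOptions { Storage = OVRSpace.StorageLocation.Cloud },
            (savedAnchor, success) =>
            {
                if (!success) return;

                OVRSpatialAnchor.Share(new List<OVRSpatialAnchor> { savedAnchor }, users,
                    (sharedAnchors, result) =>
                    {
                        if (result != OVRSpatialAnchor.OperationResult.Success) return;

                        // Distribute the UUID to the other players in the room.
                        BroadcastUuidToRoom(savedAnchor.Uuid);
                    });
            });
    }

    // Placeholder for your network solution (for example, a Photon RPC or room property).
    void BroadcastUuidToRoom(Guid uuid) { /* send the UUID to all players */ }

    // Player side: load and localize the shared anchor using the UUID received from the host.
    void OnUuidReceived(Guid uuid)
    {
        var options = new OVRSpatialAnchor.LoadOptions
        {
            StorageLocation = OVRSpace.StorageLocation.Cloud,
            Uuids = new[] { uuid }
        };
        OVRSpatialAnchor.LoadUnboundAnchors(options, unboundAnchors =>
        {
            // Localize and bind each unbound anchor, as shown in the tutorial later in this document.
        });
    }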

How Do SSAs Work?

As you can see from the previous section, when you share a spatial anchor, you share its UUID and extend that UUID’s permissions to include more people. If you have used regular spatial anchors, this will not be a new concept, because for a regular spatial anchor you also need to capture its UUID if you want to refer to it in a later session.

In the sample and tutorial referenced in the Prerequisites section, we make use of the Unity PlayerPrefs object to save the UUID for a single person - that’s the scope of PlayerPrefs.
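For reference, persisting a single anchor UUID that way can be as small as the following sketch; the key name "myAnchorUuid" is an arbitrary choice for this example.

    // Store one anchor UUID locally (PlayerPrefs is local to this device and app).
    void SaveUuidLocally(OVRSpatialAnchor anchor)
    {
        PlayerPrefs.SetString("myAnchorUuid", anchor.Uuid.ToString());
        PlayerPrefs.Save();
    }

    // Read it back in a later session.
    System.Guid LoadUuidLocally()
    {
        return new System.Guid(PlayerPrefs.GetString("myAnchorUuid"));
    }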

For SSA, you still share the UUID, but you do it using a network solution that is accessible to more than one person. In this SSA documentation, we use Photon Unity Networking as our common network sharing option, but you could use any product that has a similar multiplayer facility.

The important thing to remember is that no matter which network solution you use, all the users with whom you plan to share a spatial anchor must have the appropriate permissions in your app, so that they can access the spatial anchors you share with them.

Saving an SSA

In order to share a Spatial Anchor, the first step is to save it to storage location Cloud. Once SSAs have been saved, they are available for download by the intended users for 24 hours. After 24 hours, the opportunity for the user to download the SSA expires.

Once downloaded, the SSA remains in local storage (as a regular spatial anchor). A downloaded SSA cannot be shared back into the cloud as the same SSA. However, your app can save the spatial anchor as a new spatial anchor, using the storage location Cloud, and then share that new spatial anchor as an SSA.

Sharing an SSA

The OVRSpatialAnchor.Share() method provides you with the ability to share SSAs with a list of participants, each represented by an OVRSpaceUser. Once the Share() function returns successfully, the SSA is ready for access by the participants. It is the responsibility of your application to communicate the UUID of the SSA to be used to the participants such that they can query it.

Destroying Shared Spatial Anchors

After an SSA is shared to a user, it is downloaded to local storage, where to the user it behaves like any other spatial anchor. If you destroy the user’s local spatial anchor, the cloud SSA is not destroyed. Destroying the local spatial anchor also does not change the user’s sharing permission. The SSA is shared to that person until your app removes the permission or until the sharing expires (24 hours).
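As a minimal sketch of that behavior (assuming the code lives in a MonoBehaviour): destroying the user's local copy only removes it from the scene; per the description above, the cloud SSA and the user's sharing permission are not affected.

    // Remove the downloaded anchor (and its GameObject) from the local scene.
    // This does not destroy the cloud SSA or change sharing permissions.
    void DestroyLocalAnchor(OVRSpatialAnchor anchor)
    {
        Destroy(anchor.gameObject);
    }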

Ensuring Share Point Cloud Data is enabled

The device setting ‘Share Point Cloud Data’ must be enabled for Shared Spatial Anchors to function. Users can find it under Settings > Privacy > Device Permissions > Share Point Cloud Data.

Your app can detect when this setting is disabled and inform users to turn it on. Your app will receive the error code OVRSpatialAnchor.OperationResult.Failure_SpaceCloudStorageDisabled on functions OVRSpatialAnchor.Save() or OVRSpatialAnchor.Share() when that setting is disabled. The error code can be read from the OperationResult. Upon receiving this error, you should inform users that enabling Share Point Cloud Data is required in order to use the colocated functionality and let them know where to find the setting. See samples below.

Handling error on Save

    var anchors = Array.Empty<OVRSpatialAnchor>();
    OVRSpatialAnchor.Save(anchors, new OVRSpatialAnchor.SaveOptions { Storage = OVRSpace.StorageLocation.Cloud }, onComplete: (_, result) =>
    {
        if (result == OVRSpatialAnchor.OperationResult.Failure_SpaceCloudStorageDisabled)
        {
            // inform user to turn on Share Point Cloud Data
            // Settings > Privacy > Device Permissions > Turn on “Share Point Cloud Data”
        }
    });

Handling error on Share

    if (await spatialAnchor.ShareAsync(users) == OVRSpatialAnchor.OperationResult.Failure_SpaceCloudStorageDisabled)
    {
        // inform user to turn on Share Point Cloud Data
        // Settings > Privacy > Device Permissions > Turn on “Share Point Cloud Data”
    }

OVRSpatialAnchor.Share() Details

The signature of OVRSpatialAnchor.Share() is as follows:

    public static void Share(
        ICollection<OVRSpatialAnchor> anchors,
        ICollection<OVRSpaceUser> users,
        Action<ICollection<OVRSpatialAnchor>, OperationResult> onComplete = null)

Parameters:

ICollection<OVRSpatialAnchor> anchors - The collection of anchors you are sharing.
ICollection<OVRSpaceUser> users - The collection of users with whom you are sharing.
Action<ICollection<OVRSpatialAnchor>, OperationResult> onComplete - Called when the operation completes. It takes two arguments:
ICollection<OVRSpatialAnchor> - The collection of anchors that were shared.
OperationResult - An error code indicating whether the share operation succeeded or not.

Spatial anchors in the anchors collection must already be saved to cloud storage. The users parameter specifies a collection of OVRSpaceUser objects for the users with whom you want to share anchors. When the call to OVRSpatialAnchor.Share() succeeds, the SSAs are available to those users.

The sharing operation is asynchronous. You may supply an optional delegate to notify when the sharing operation completes. For example:

    OVRSpatialAnchor.Share(anchorCollection, users, (anchors, result) =>
    {
        ShowShareIcon = result == OperationResult.Success;
    });

OVRSpaceUser Object

You must specify the set of users with whom you wish to share anchors. To do this, create an OVRSpaceUser from each user’s Meta Quest ID (a ulong identifier).

    var users = new OVRSpaceUser[]
    {
        new OVRSpaceUser(userId1),
        new OVRSpaceUser(userId2),
    };

    OVRSpatialAnchor.Share(anchors, users, (anchors, result) =>
    {
        Debug.Log(result == OperationResult.Success
            ? $"Shared with {users.Length} users!"
            : "Sharing failed");
    });

Refer to the Users, Friends, and Relationships section to see how to retrieve information about the current user and their friends.
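As a minimal sketch of that retrieval (this example assumes the Oculus Platform SDK is installed, and that Core.Initialize() and the entitlement check have already succeeded), the local user's Meta Quest ID can be fetched and then exchanged through your network solution:

    using Oculus.Platform;
    using Oculus.Platform.Models;

    // Retrieve the logged-in user's ID (a ulong) for use when constructing OVRSpaceUser objects.
    void FetchLocalUserId()
    {
        Users.GetLoggedInUser().OnComplete((Message<User> message) =>
        {
            if (!message.IsError)
            {
                ulong localUserId = message.Data.ID;
                // Send this ID to the other players through your network solution.
            }
        });
    }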

Example

This excerpt from SharedAnchors.cs in the Unity-SharedSpatialAnchors showcase app demonstrates sharing spatial anchors to a user collection:

private void SaveToCloudThenShare()
{
  OVRSpatialAnchor.SaveOptions saveOptions;
  saveOptions.Storage = OVRSpace.StorageLocation.Cloud;
  _spatialAnchor.Save(saveOptions, (spatialAnchor, isSuccessful) =>
  {
      if (isSuccessful)
      {
          SampleController.Instance.Log("Successfully saved anchor(s) to the cloud");

          var userIds = PhotonAnchorManager.GetUserList().Select(userId => userId.ToString()).ToArray();
          ICollection<OVRSpaceUser> spaceUserList = new List<OVRSpaceUser>();
          foreach (string strUsername in userIds)
          {
              spaceUserList.Add(new OVRSpaceUser(ulong.Parse(strUsername)));
          }

          OVRSpatialAnchor.Share(new List<OVRSpatialAnchor> { spatialAnchor }, spaceUserList, OnShareComplete);

          SampleController.Instance.AddSharedAnchorToLocalPlayer(this);
      }
      else
      {
          SampleController.Instance.Log("Saving anchor(s) failed. Retrying...");
          SaveToCloudThenShare();
      }
  });

}

Find SharedAnchors.cs in the Unity-SharedSpatialAnchors showcase app at ./Assets/SharedSpatialAnchors/Scripts.

SSA Showcase Apps and Walkthrough

Two showcase applications are available that highlight the use of SSAs. Both are available in the oculus-samples GitHub repository. Both of these applications use Photon Unity Networking to share player data.

The Unity-SharedSpatialAnchors showcase app is an established application that highlights the implementation of shared spatial anchors and allows users to interact with networked objects in a co-located space. The page Shared Spatial Anchors Sample provides documentation on how to build and use the sample. The Unity-Discover showcase app is a newer application with fewer features, which demonstrates how to use a single SSA effectively in a multipurpose application. To help you understand how the SSA is implemented, check out the Shared Spatial Anchors (SSA) Walkthrough.

Spatial Anchors Basic Tutorial

Unity

All-In-One VR


Spatial anchors are world-locked frames of reference you can use in your app to place and orient objects that will persist between sessions. This consistency provides users with a sense of continuation and familiarity each time they reenter your game or app.

As detailed in Use Spatial Anchors, the key element to successfully placing a spatial anchor is the OVRSpatialAnchor component. In this tutorial, you assemble a project similar to the Starter Samples Spatial Anchors sample, but with far fewer features. Here’s what you do in this tutorial:

Create a scene that includes the OVRManager and OVRPassthroughLayer.
Create two prefab capsules, one for the LeftControllerAnchor and one for the RightControllerAnchor. These always display at the controller.
Create two placement prefabs, one for each controller. These also contain capsules, but these capsules only show when you create them and anchor them in space.
Create a minimal script that governs how spatial anchors are created, destroyed, loaded, and erased.
Create a GameObject to connect the spatial anchor management code with the capsule and transform prefabs. Add the script to the GameObject, and assign the capsules and transforms to the script’s public members.

See Try Out the App at the end of this topic to read how the app works.

Before You Begin

You should first do the Basic Passthrough Tutorial. Once you complete that, you will be ready to do this basic Spatial Anchors tutorial.

Create a New Scene

In the Project tab, under the Assets folder, create a new folder named SpatialAnchors Tutorial. Then select it to make it the current folder for new objects.
In the Project Hierarchy, right-click SampleScene, and choose Save Scene As. Give the new scene a unique name, such as SpatialAnchorsTutorial. This becomes the active scene in the Hierarchy.
Remove any game objects from the scene, except for the Directional Light and OVRCameraRig.

Set up the Major Scene Components

The OVRCameraRig, OVRSceneManager, and OVRPassthroughLayer are all components that you need to set up the scene.

Set up the OVRCameraRig and OVRManager

The OVRCameraRig is the main camera for your scene. Set it up as follows. These are typical settings for a passthrough scene.

If you haven’t done so already, select the Main Camera and press the Delete key to remove it from the scene.
If you haven’t done so already, in the Project pane, search for the OVRCameraRig prefab. Drag it to the SpatialAnchorsTutorial scene. Then select it to show its properties in the Inspector.
On the Inspector tab, do the following:
Under OVRManager, set Tracking Origin Type to Eye Level.
Under OVRManager, go to Quest Features > General. From the Anchor Support dropdown, select Enabled to turn on anchor support.
Under Insight Passthrough, check the Enable Passthrough checkbox.

Set up the OVRCameraRig

Set up the OVRPassthroughLayer

Adding the OVRPassthroughLayer lets the scene render your real-world surroundings behind the virtual content.

In the Hierarchy pane, select the OVRCameraRig game object.
At the bottom of the Inspector, choose Add Component. Search for and select OVR Passthrough Layer.
Expand the OVR Passthrough Layer component. Set Projection Surface to Reconstructed and Placement to Underlay.

Set Up the OVRPassthroughLayer

Create the Green and Red Prefab Controller Capsules

Create the green and red capsule prefabs that will float near each controller. These capsules will indicate where the smaller anchor capsules will be created.

In the Project tab, select the SpatialAnchors Tutorial folder to make it the current folder for new objects.
From the Assets menu, select Create > Prefab. Name the new prefab SaveablePrefab.
Double-click the new prefab to edit it.
In the Inspector, add the following values to the Transform component:
Set the Position property to X = -0.2, Y = 1.03, and Z = 0.
Set the Rotation property to X = 0, Y = 0, and Z = 0.
Set the Scale property to X = 0.1, Y = 0.1, and Z = 0.1.
In the Hierarchy, right-click the SaveablePrefab object, and then choose 3D Object > Capsule. Make sure the new capsule is a child of the SaveablePrefab object.
In the Hierarchy, select the new capsule.
In the Project pane, search for the Green material. Drag the material into the SpatialAnchors Tutorial Assets folder.
In the Inspector, in the Materials property, choose the Green material.

New SATutorialScene

Repeat the previous steps, creating a prefab named NonSaveablePrefab, with these changes:

Instead of a Position X value of -0.2, set the Position property to X = 2.0.
For this prefab, instead of the Green material, choose the Red material.

Create the Saveable and NonSaveable Placement Prefabs

Separate placement prefabs are needed for each controller. These prefabs contain capsules that are created when you press the controller index trigger.

Create the SaveablePlacement Prefab

In the Project tab, select the SpatialAnchors Tutorial folder to make it the current folder for new objects.
From the Assets menu, select Create > Prefab. Name the new prefab SaveablePlacement.
Double-click the SaveablePlacement to edit it.
In the Inspector, add the following values to the SaveablePlacement Transform component:
Set the Position property to X = 0.0, Y = 0.0, and Z = 0.25.
Set the Rotation property to X = 0, Y = 0, and Z = 0.
Set the Scale property to X = 0.025, Y = 0.025, and Z = 0.025.
In the Hierarchy, right-click the SaveablePlacement object, and then choose Create Empty. Make sure the new object is a child of the SaveablePlacement object, and then name it SaveableTransform.
In the Inspector, add the following values to the SaveableTransform Transform component:
Set the Position property to X = 0.0, Y = 0.1, and Z = 0.1.
Set the Rotation property to X = 0, Y = 0, and Z = 0.
Set the Scale property to X = 0.025, Y = 0.025, and Z = 0.025.
In the Hierarchy, right-click the SaveablePlacement object again, and then choose 3D Object > Capsule. Make sure the new capsule is a child of the SaveablePlacement object.
In the Hierarchy, select the new capsule.
In the Inspector, add the following values to the Capsule Transform component:
Set the Position property to X = 0, Y = 0, and Z = 0.125.
Set the Rotation property to X = 0, Y = 0, and Z = 0.
Set the Scale property to X = 0.025, Y = 0.025, and Z = 0.025.
In the Inspector, in the Materials property, choose the Green material.

Create the NonSaveablePlacement Prefab

From the Assets menu, select Create > Prefab. Name the new prefab NonSaveablePlacement.
Repeat the steps for Create the SaveablePlacement Prefab, but use the name NonSaveablePlacement for the prefab, and NonSaveableTransform for the new empty object.

Add the Anchor Placement Prefabs to the Left and Right Controller Anchors

The capsule prefabs are displayed when you place the anchor.

In the Hierarchy window, expand OVRCameraRig > TrackingSpace > LeftHandAnchor and select LeftControllerAnchor. In the Project search box, search for SaveablePlacement. Drag this object to be a child of the LeftControllerAnchor.
In the Hierarchy window, expand OVRCameraRig > TrackingSpace > RightHandAnchor and select RightControllerAnchor. In the Project search box, search for NonSaveablePlacement. Drag this object to be a child of the RightControllerAnchor.

Add the Anchor Prefab

Create an Anchor Manager Script

In the Project pane, click your SpatialAnchors Tutorial folder to make it the current location for new objects.
From the Assets menu, select Create > C# Script. Name the new script AnchorTutorialUIManager.
Double-click the new script to edit it.

We want the script to do just a few things:

Respond to button presses:
The left index trigger creates and saves a green spatial anchor.
The right index trigger creates (but does not save) a red spatial anchor.
The X button destroys all displayed spatial anchors.
The A button loads all saved spatial anchors.
The Y button erases all saved spatial anchors.
Keep track of which capsules (green and red) are currently displayed, so we can destroy them.
Keep track of which capsules are saved to local storage (green capsules), to make it easy to load or erase them from local storage.
Separately keep track of the UUIDs of the capsules saved to local storage. We can save these to an external location (such as PlayerPrefs) to make it easy to refer to saved capsules in a future session.

Declare the Serialized Objects and Working Variables

We need six serialized fields: one for each of the four prefabs we just made, plus two more for the transforms that were created with the placement prefabs.

At the top of your AnchorTutorialUIManager class, add

[SerializeField]
private GameObject _saveableAnchorPrefab;
public GameObject SaveableAnchorPrefab => _saveableAnchorPrefab;

[SerializeField, FormerlySerializedAs("_saveablePreview")]
private GameObject _saveablePreview;

[SerializeField, FormerlySerializedAs("_saveableTransform")]
private Transform _saveableTransform;

[SerializeField]
private GameObject _nonSaveableAnchorPrefab;
public GameObject NonSaveableAnchorPrefab => _nonSaveableAnchorPrefab;

[SerializeField, FormerlySerializedAs("_nonSaveablePreview")]
private GameObject _nonSaveablePreview;

[SerializeField, FormerlySerializedAs("_nonSaveableTransform")]
private Transform _nonSaveableTransform;

Adding these fields exposes them in the Unity Inspector UI when we add the script to a GameObject in the Unity editor later.

We also need a few utility objects to help create the program:

private OVRSpatialAnchor _workingAnchor; // general purpose anchor
private List<OVRSpatialAnchor> _allSavedAnchors; // anchors written to local storage (green only)
private List<OVRSpatialAnchor> _allRunningAnchors; // anchors currently running (red and green)

private int _anchorSavedUUIDListSize; // current size of the tracking list
private const int _anchorSavedUUIDListMaxSize = 50; // max size of the tracking list
private System.Guid[] _anchorSavedUUIDList; // simulated external location, like PlayerPrefs

Action<OVRSpatialAnchor.UnboundAnchor, bool> _onLoadAnchor; // delegate used for binding unbound anchors

Write the Awake() Method

In Awake(), we instantiate our working variables.

private void Awake()
{
    //....
    //other statements
    _allSavedAnchors = new List<OVRSpatialAnchor>();
    _allRunningAnchors = new List<OVRSpatialAnchor>();
    _anchorSavedUUIDList = new System.Guid[_anchorSavedUUIDListMaxSize];
    _anchorSavedUUIDListSize = 0;
    _onLoadAnchor = OnLocalized;
    //....
    //other statements
}

Instantiate a Red or Green Capsule

We will use the controller index triggers to create each of the capsules. The process is identical for both capsules: we create a spatial anchor and pass it to the CreateAnchor() method (which we will write in a minute). The only difference between green and red capsules is that, for a saveable green capsule, we pass a true value to the CreateAnchor() method.

if (OVRInput.GetDown(OVRInput.Button.PrimaryIndexTrigger)) // create a green capsule
{
    GameObject gs = PlaceAnchor(_saveableAnchorPrefab, _saveableTransform.position, _saveableTransform.rotation); // anchor A
    _workingAnchor = gs.AddComponent<OVRSpatialAnchor>();
    CreateAnchor(_workingAnchor, true); // true == save the anchor to local storage
}
else if (OVRInput.GetDown(OVRInput.Button.SecondaryIndexTrigger)) // create a red capsule
{
    GameObject gs = PlaceAnchor(_nonSaveableAnchorPrefab, _nonSaveableTransform.position, _nonSaveableTransform.rotation); // anchor B
    _workingAnchor = gs.AddComponent<OVRSpatialAnchor>();
    CreateAnchor(_workingAnchor, false);
}

The PlaceAnchor() method listed above just instantiates the anchor.

private GameObject PlaceAnchor(GameObject prefab, Vector3 p, Quaternion r)
{
    return Instantiate(prefab, p, r);
}

The Unity Instantiate() method will display and orient the anchor prefab in the VR environment.

Track and Save a Green or Red Capsule

Because saving to local storage is an asynchronous operation, we will use a Unity coroutine to make sure the capsule exists before we try to save it. For convenience, we use the same methods for both green and red capsules. The call to CreateAnchor() starts both accounting and saving.

public void CreateAnchor(OVRSpatialAnchor spAnchor, bool saveAnchor)
{
    StartCoroutine(anchorCreated(_workingAnchor, saveAnchor)); // use a coroutine to manage the async save
}

The main work is done within the coroutine. We yield until the anchor is available. Then, if saveAnchor is true, we attempt to save the anchor. The Save() method defaults to local storage (OVRSpace.StorageLocation.Local), so we don’t have to specify it here.
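For reference, the explicit form of that default would look like this one-line sketch (the same instance Save() overload used elsewhere in this topic, with the local storage location spelled out):

    // Explicitly request local storage; per the text above, this is what the default amounts to.
    osAnchor.Save(new OVRSpatialAnchor.SaveOptions { Storage = OVRSpace.StorageLocation.Local },
        (anchor, success) => { /* same success handling as in the coroutine below */ });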

public IEnumerator anchorCreated(OVRSpatialAnchor osAnchor, bool saveAnchor)
{
    while (!osAnchor.Created && !osAnchor.Localized)
    {
        yield return new WaitForEndOfFrame(); // keep checking
    }

    // Save the anchor to a local List so we can refer to it
    _allRunningAnchors.Add(osAnchor);

    if (saveAnchor)  // we save the saveable (green) anchors only
    {
        osAnchor.Save((anchor, success) =>
        {
            if (success)
            {
                // keep tabs on anchors in local storage
                _allSavedAnchors.Add(anchor);
                // if we wanted to save the UUID to external storage so
                // we could refer to it in a future session, we
                // would do so here also
            }
        });
    }
}

We want to make sure we can account for the capsules we create, so after the anchor is created, we add it to the list of running anchors and, if it was saved, to the list of saved anchors.

Destroy Displayed Anchors

After we have pressed the two index triggers a few times, _allSavedAnchors contains the green ones, and _allRunningAnchors contains both red and green. In this tutorial, we destroy all green and red capsules in the current scene by pressing the X button. This action is immediate, not asynchronous.

if (OVRInput.GetDown(OVRInput.Button.Three)) // X button
{
    // Destroy all anchors from the scene, but don't erase them from storage
    using (var enumerator = _allRunningAnchors.GetEnumerator())
    {
        while (enumerator.MoveNext())
        {
            var spAnchor = enumerator.Current;
            Destroy(spAnchor.gameObject);
        }
    }

    // clear the list of running anchors
    _allRunningAnchors.Clear();
}

Though we destroy all the capsules in the scene, any saveable green capsules we have already saved are still in local storage.

Create a Method to Load Anchors

We will use the A button to load any anchors saved to local storage:

if (OVRInput.GetDown(OVRInput.Button.One))
{
    LoadAllAnchors(); // load saved anchors
}

As we describe in Spatial Anchors Overview, loading from either local or cloud storage is a three-step process:

Load a spatial anchor from storage using its UUID. At this point it is unbound.
Localize each unbound spatial anchor to fix it in its intended virtual location.
Bind each spatial anchor to an OVRSpatialAnchor component.

Load and Localize Anchors

We load and localize each anchor with one method. First, we set the LoadOptions struct with both the UUIDs we want to load and the storage location to load them from. Then we call OVRSpatialAnchor.LoadUnboundAnchors() and iterate over the result.

public void LoadAllAnchors()
{
    OVRSpatialAnchor.LoadOptions options = new OVRSpatialAnchor.LoadOptions
    {
        Timeout = 0,
        StorageLocation = OVRSpace.StorageLocation.Local,
        Uuids = GetSavedAnchorsUuids()
    };

    OVRSpatialAnchor.LoadUnboundAnchors(options, _anchorSavedUUIDList =>
    {
        if (_anchorSavedUUIDList == null)
        {
            return;
        }

        foreach (var anchor in _anchorSavedUUIDList)
        {
            if (anchor.Localized)
            {
                _onLoadAnchor(anchor, true);
            }
            else if (!anchor.Localizing)
            {
                anchor.Localize(_onLoadAnchor);
            }
        }
    });
}

The GetSavedAnchorsUuids() method just returns an array of Guids for the anchors that have been saved to local storage.

private System.Guid[] GetSavedAnchorsUuids()
{
    var uuids = new Guid[_allSavedAnchors.Count];
    using (var enumerator = _allSavedAnchors.GetEnumerator())
    {
        int i = 0;
        while (enumerator.MoveNext())
        {
            var currentUuid = enumerator.Current.Uuid;
            uuids[i] = new Guid(currentUuid.ToByteArray());
            i++;
        }
    }
    //Debug.Log("Returned All Anchor UUIDs!");
    return uuids;
}

Bind Anchors

In this tutorial, we use a delegate to bind the anchor and add it back to the scene:

private void OnLocalized(OVRSpatialAnchor.UnboundAnchor unboundAnchor, bool success)
{
    var pose = unboundAnchor.Pose;
    GameObject go = PlaceAnchor(_saveableAnchorPrefab, pose.position, pose.rotation);
    _workingAnchor = go.AddComponent<OVRSpatialAnchor>();

    unboundAnchor.BindTo(_workingAnchor);

    // add the anchor to the running total
    _allRunningAnchors.Add(_workingAnchor);
}

Erase Saved Anchors

We use the Y button press to erase all the anchors from local storage. This doesn’t remove them from the scene, just from local storage.

// erase all saved (green) anchors
if (OVRInput.GetDown(OVRInput.Button.Four))
{
    EraseAllAnchors();
}

OVRSpatialAnchor.Erase() is an asynchronous method, so we will use a Unity coroutine as we did for saving anchors. After we successfully erase an anchor from storage, we also need to remove it from the saved anchors array.

public void EraseAllAnchors()
{
    foreach (var tmpAnchor in _allSavedAnchors)
    {
        if (tmpAnchor)
        {
            // use a Unity coroutine to manage the async erase
            StartCoroutine(anchorErased(tmpAnchor));
        }
    }

    _allSavedAnchors.Clear();
    // if we were saving to PlayerPrefs, we would also delete those entries here
    return;
}

The coroutine for erasing saved anchors is simpler than the one for saving them. Here we just send a message to the log if there’s a problem. In this tutorial, we only test whether the anchor exists, to satisfy the requirements of a coroutine.

public IEnumerator anchorErased(OVRSpatialAnchor osAnchor)
{
    while (!osAnchor.Created)
    {
        yield return new WaitForEndOfFrame();
    }

    osAnchor.Erase((anchor, success) =>
    {
        if (!success)
        {
            Debug.Log("Anchor " + osAnchor.Uuid.ToString() + " NOT Erased!");
        }
        return;
    });
}

That’s the end of the script. Save it and return to the Unity editor.

Create and Configure a Tutorial Manager Game Object

The TutorialManager Game Object connects the anchor prefab with the spatial anchor loader and the script we just wrote.

From the GameObject menu, choose Create Empty. Name the new object TutorialManager, and make it a peer of your scene’s OVRCameraRig.
In the Hierarchy window, click the new TutorialManager game object. Then, in the Inspector, select Add Component. Search for and select the script you just wrote, AnchorTutorialUIManager. You’ll see the six properties you need to configure.
In the Project search box, search for SaveablePrefab. Drag this object to the Inspector, to the Saveable Anchor Prefab field in the Anchor Tutorial UI Manager (Script) component.
In the Project search box, search for SaveablePlacement. Drag this object to the Inspector, to the Saveable Preview field in the Anchor Tutorial UI Manager (Script) component.
In the Hierarchy window, expand LeftControllerAnchor > SaveablePlacement. Drag the SaveableTransform to the Inspector, to the Saveable Transform (Transform) field in the Anchor Tutorial UI Manager (Script) component.
In the Project search box, search for NonSaveablePrefab. Drag this object to the Inspector, to the Non Saveable Anchor Prefab field in the Anchor Tutorial UI Manager (Script) component.
In the Project search box, search for NonSaveablePlacement. Drag this object to the Inspector, to the Non Saveable Preview field in the Anchor Tutorial UI Manager (Script) component.
In the Hierarchy window, expand RightControllerAnchor > NonSaveablePlacement. Drag the NonSaveableTransform to the Inspector, to the Non Saveable Transform (Transform) field in the Anchor Tutorial UI Manager (Script) component.

TutorialManager Game Object

Check the Project Setup Tool

We are almost done. Before we build, we need to run the project setup tool. This is to make sure that we haven’t introduced any complications with our combination of new and existing game objects.

Save your project and your scene.
From the Edit menu, select Project Settings. Select Oculus, and then select the Android tab.
Check the checklist for errors or warnings. Choose Apply All or Fix to have Unity resolve them.

Check the Project Setup Tool

Save and Run the Project

From the File menu, choose Save to save your scene, and Save Project to save the project.
From the File menu, choose Build Settings to open the Build Settings window.
Make sure your Meta Quest headset is the selected device in the Run Device dropdown. If you don’t see your headset in the list, click Refresh.
Click Add Open Scenes to add your scene to the build. Deselect and remove any other scenes from the selection window.
Click the Build and Run button to launch the program onto your headset.

Try Out the App

When you first start the app, you’ll see that each controller displays a capsule. The left controller shows a green capsule, and the right controller shows a red capsule. Green capsules are saved to local storage when they are created, but red ones are never saved.

Press the left index trigger one or more times to create small green capsules. The anchor for each capsule is automatically saved to the headset.
Press the right index trigger one or more times to create small red capsules. These anchors are not saved to the headset.
Press the X button to destroy all capsules. All capsules are removed from your view.
Press the A button to load all saved capsules. Only green capsules reappear.
Press the Y button to erase all green anchors from local storage. The green capsules remain on the screen.
Press the X button to destroy all capsules. All capsules are removed from your view.
Press the A button to load all saved capsules. Because you erased them, no green capsules reappear.

Spatial Anchors Added at Runtime

Appendix: The Full AnchorTutorialUIManager.cs File

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using Unity.VisualScripting;
using UnityEngine;
using UnityEngine.Serialization;
using UnityEngine.UIElements;
using static OVRSpatialAnchor;
//using static UnityEditor.Progress;

public class AnchorTutorialUIManager : MonoBehaviour {

/// <summary>
/// Anchor Tutorial UI manager singleton instance
/// </summary>
public static AnchorTutorialUIManager Instance;


[SerializeField]
private GameObject _saveableAnchorPrefab;
public GameObject SaveableAnchorPrefab => _saveableAnchorPrefab;

[SerializeField, FormerlySerializedAs("_saveablePreview")]
private GameObject _saveablePreview;

[SerializeField, FormerlySerializedAs("_saveableTransform")]
private Transform _saveableTransform;


[SerializeField]
private GameObject _nonSaveableAnchorPrefab;
public GameObject NonSaveableAnchorPrefab => _nonSaveableAnchorPrefab;

[SerializeField, FormerlySerializedAs("_nonSaveablePreview")]
private GameObject _nonSaveablePreview;

[SerializeField, FormerlySerializedAs("_nonSaveableTransform")]
private Transform _nonSaveableTransform;


private OVRSpatialAnchor _workingAnchor;

private List<OVRSpatialAnchor> _allSavedAnchors; //we've written these to the headset (green only)
private List<OVRSpatialAnchor> _allRunningAnchors; //these are currently running (red and green)

private System.Guid[] _anchorSavedUUIDList; //simulated external location, like PlayerPrefs
private int _anchorSavedUUIDListSize;
private const int _anchorSavedUUIDListMaxSize = 50;

Action<OVRSpatialAnchor.UnboundAnchor, bool> _onLoadAnchor;



private void Awake()
{
    if (Instance == null)
    {
        Instance = this;
        _allSavedAnchors = new List<OVRSpatialAnchor>();
        _allRunningAnchors = new List<OVRSpatialAnchor>();
        _anchorSavedUUIDList = new System.Guid[_anchorSavedUUIDListMaxSize];
        _anchorSavedUUIDListSize = 0;
        _onLoadAnchor = OnLocalized;
    }
    else
    {
        Destroy(this);
    }
}

/*
 * We respond to five button events:
 *
 * Left trigger: Create a saveable (green) anchor.
 * Right trigger: Create a non-saveable (red) anchor.
 * A: Load, Save and display all saved anchors (green only)
 * X: Destroy all runtime anchors (red and green)
 * Y: Erase all anchors (green only)
 * others: no action
 */
void Update()
{

    if (OVRInput.GetDown(OVRInput.Button.PrimaryIndexTrigger))
    {
        //create a green (saveable) spatial anchor
        GameObject gs = PlaceAnchor(_saveableAnchorPrefab, _saveableTransform.position, _saveableTransform.rotation); //anchor A
        _workingAnchor = gs.AddComponent<OVRSpatialAnchor>();

        CreateAnchor(_workingAnchor, true);
    }
    else if (OVRInput.GetDown(OVRInput.Button.SecondaryIndexTrigger))
    {
        //create a red (non-saveable) spatial anchor.
        GameObject gs = PlaceAnchor(_nonSaveableAnchorPrefab, _nonSaveableTransform.position, _nonSaveableTransform.rotation); //anchor b
        _workingAnchor = gs.AddComponent<OVRSpatialAnchor>();

        CreateAnchor(_workingAnchor, false);
    }
    else if (OVRInput.GetDown(OVRInput.Button.One))
    {
        LoadAllAnchors(); // load saved anchors
    }
    else if (OVRInput.GetDown(OVRInput.Button.Three)) //x button
    {
        //Destroy all anchors from the scene, but don't erase them from storage
        using (var enumerator = _allRunningAnchors.GetEnumerator())
        {
            while (enumerator.MoveNext())
            {
                var spAnchor = enumerator.Current;
                Destroy(spAnchor.gameObject);
                //Debug.Log("Destroyed an Anchor " + spAnchor.Uuid.ToString() + "!");
            }
        }

        //clear the list of running anchors
        _allRunningAnchors.Clear();
    }
    else if (OVRInput.GetDown(OVRInput.Button.Four))
    {
        EraseAllAnchors(); // erase all saved (green) anchors
    }
    else // any other button?
    {
        // no other actions tracked
    }
}

/****************************** Button Handlers ***********************/



/******************* Create Anchor Methods *****************/

public void CreateAnchor(OVRSpatialAnchor spAnchor, bool saveAnchor)
{
    //use a Unity coroutine to manage the async save
    StartCoroutine(anchorCreated(_workingAnchor, saveAnchor));
}

/*
 * Unity Coroutine
 * We need to make sure the anchor is ready to use before we save it.
 * Also, only save if specified
 */
public IEnumerator anchorCreated(OVRSpatialAnchor osAnchor, bool saveAnchor)
{
    // keep checking for a valid and localized anchor state
    while (!osAnchor.Created && !osAnchor.Localized)
    {
        yield return new WaitForEndOfFrame();
    }

    //Save the anchor to a local List so we can track during the current session
    _allRunningAnchors.Add(osAnchor);

    // we save the saveable (green) anchors only
    if (saveAnchor)
    {
        //when ready, save the anchor.
        osAnchor.Save((anchor, success) =>
        {
            if (success)
            {
                //save UUID to external storage so we can refer to the anchors in a later session
                SaveAnchorUuidToExternalStore(anchor);
                //Debug.Log("Anchor " + osAnchor.Uuid.ToString() + " Saved!");
                //keep tabs on anchors in local storage
                _allSavedAnchors.Add(anchor);
            }
        });
    }
}



/******************* Load Anchor Methods **********************/

public void LoadAllAnchors()
{
    OVRSpatialAnchor.LoadOptions options = new OVRSpatialAnchor.LoadOptions
    {
        Timeout = 0,
        StorageLocation = OVRSpace.StorageLocation.Local,
        Uuids = GetSavedAnchorsUuids() //GetAnchorsUuidsFromExternalStore()
    };

    //load and localize
    OVRSpatialAnchor.LoadUnboundAnchors(options, _anchorSavedUUIDList =>
    {
        if (_anchorSavedUUIDList == null)
        {
            //Debug.Log("Anchor list is null!");
            return;
        }

        foreach (var anchor in _anchorSavedUUIDList)
        {
            if (anchor.Localized)
            {
                _onLoadAnchor(anchor, true);
            }
            else if (!anchor.Localizing)
            {
                anchor.Localize(_onLoadAnchor);
            }
        }
    });
}

private void OnLocalized(OVRSpatialAnchor.UnboundAnchor unboundAnchor, bool success)
{
    var pose = unboundAnchor.Pose;
    GameObject go = PlaceAnchor(_saveableAnchorPrefab, pose.position, pose.rotation);
    _workingAnchor = go.AddComponent<OVRSpatialAnchor>();

    unboundAnchor.BindTo(_workingAnchor);

    // add the anchor to the running total
    _allRunningAnchors.Add(_workingAnchor);
}

/*
 * Get all spatial anchor UUIDs saved to local storage
*/
private System.Guid[] GetSavedAnchorsUuids()
{
    var uuids = new Guid[_allSavedAnchors.Count];
    using (var enumerator = _allSavedAnchors.GetEnumerator())
    {
        int i = 0;
        while (enumerator.MoveNext())
        {
            var currentUuid = enumerator.Current.Uuid;
            uuids[i] = new Guid(currentUuid.ToByteArray());
            i++;
        }
    }
    //Debug.Log("Returned All Anchor UUIDs!");
    return uuids;
}

/******************* Erase Anchor Methods *****************/


/*
 * If the Y button is pressed, erase all anchors saved
 * in the headset, but don't destroy them. They should remain
 * displayed.
 */
public void EraseAllAnchors()
{
    foreach (var tmpAnchor in _allSavedAnchors)
    {
        //use a Unity coroutine to manage the async save
        StartCoroutine(anchorErased(tmpAnchor));
    }

    //we also erase our reference lists
    _allSavedAnchors.Clear();
    RemoveAllAnchorsUuidsInExternalStore();
    return;
}


/*
 * Unity Coroutine
 * We need to make sure the anchor is ready to use before we erase it.
 */
public IEnumerator anchorErased(OVRSpatialAnchor osAnchor)
{
    while (!osAnchor.Created)
    {
        yield return new WaitForEndOfFrame();
    }

    //when ready, erase the anchor.
    osAnchor.Erase((anchor, success) =>
    {
        if (!success)
        {
            Debug.Log("Anchor " + osAnchor.Uuid.ToString() + " NOT Erased!");
        } else
        {
            Debug.Log("Anchor " + osAnchor.Uuid.ToString() + " Erased!");
            ;
        }

        return;
    });
}


 /********************************* Display an Anchor Prefab *****************/

/*
 * Display an anchor prefab, red or  green
 */
private GameObject PlaceAnchor(GameObject prefab, Vector3  p, Quaternion r)
{
    //Debug.Log("Placing a new anchor prefab!");
    return Instantiate(prefab, p, r);
}



/******************These three methods simulate an external store, such as PlayerPrefs ******************/

/*
 * Add one spatial anchor to the external store
 */
private void SaveAnchorUuidToExternalStore(OVRSpatialAnchor spAnchor)
{
    if(_anchorSavedUUIDListSize < _anchorSavedUUIDListMaxSize)
    {
        _anchorSavedUUIDList[_anchorSavedUUIDListSize] = (spAnchor.Uuid);
        _anchorSavedUUIDListSize++;
        //Debug.Log("Saved Anchor " + spAnchor.Uuid.ToString() + " to a external Store!");
    }
    else
    {
        //Debug.Log("Can't save anchor " + spAnchor.Uuid.ToString() + ", because the store is full!");
    }
    return;
}

/*
 * Get all spatial anchor UUIDs saved to the external store
 */
private System.Guid[] GetAnchorsUuidsFromExternalStore()
{

    var uuids = new Guid[_anchorSavedUUIDListSize];
    for (int i = 0; i < _anchorSavedUUIDListSize; ++i)
    {
        var uuidKey = "uuid" + i;
        var currentUuid = _anchorSavedUUIDList[i].ToByteArray();
        uuids[i] = new Guid(currentUuid);
    }
    //Debug.Log("Returned All Anchor UUIDs!");
    return uuids;
}

/*
 * Empty the external external store
 */
private void RemoveAllAnchorsUuidsInExternalStore()
{
    _anchorSavedUUIDList = new System.Guid[_anchorSavedUUIDListMaxSize];
    //Debug.Log("Cleared the external Store!");
    return;
}

}

Shared Spatial Anchors (SSA) Walkthrough

Unity

All-In-One VR


The Unity-Discover showcase app is an open source application that highlights Meta Quest Mixed Reality APIs. Among the features it highlights is the use of Shared Spatial Anchors (SSAs). In this walkthrough we will examine the Unity-Discover application’s SSA implementation. You do not need to clone or run the Unity-Discover showcase app to go through this walkthrough. After you finish this walkthrough, you can download the running app from AppLab. If you want to dig further into the app, start with How to Run the Project in Unity in GitHub.

Note: Unity-Discover uses Photon Unity Networking as its spatial anchor networking solution. For information on why a spatial anchor networking utility is necessary, see Use Shared Spatial Anchors.

Brief Overview of the SSA Usage in Unity-Discover

When you run the Unity-Discover app, you have the option to create a new room as a host, or to join an existing room as a player.

Unity-Discover App Start

If you choose to be a host, you provide a room name. Unity-Discover then:

creates a room
loads in the initial scene
creates a spatial anchor to serve as the origin for the application
establishes a connection to Photon Fusion
shares the spatial anchor to Photon Fusion, establishing it as the SSA
creates a colocated space within the room with the SSA as the origin
aligns the host (as a player) to the SSA

If you choose to be a player and join an existing room, Unity-Discover:

establishes a connection to Photon Fusion
queries the SSA for the named room from Photon Fusion
adds the new player to the colocated space
aligns the player to the SSA

Once the SSA is set up, all subsequent actions that start (Bicycle Mechanic or Drone Rage) are relative to that SSA.

If the host leaves the room, and anyone else is still present, one of those players becomes the host.

There are no additional SSAs in Unity-Discover. Because the app is designed to have everything relative to the one SSA, no others are needed.

Code-Level Application Flow

Let’s take a look at how this plays out in the Unity Project. Unity-Discover is a large app, but we will only talk about the parts that help us understand the SSA implementation.

Key Prefabs and Scripts

Several prefabs play important roles in getting this part of the Unity-Discover app to work:

NetworkModalWindow prefab. This prefab instantiates the NetworkModalWindowController.cs script, which displays and handles the startup options.
DiscoverAppController prefab. This prefab is the central hub for the Unity-Discover app. It brings together the following:
DiscoverAppController.cs script, which manages the connections and sessions for players.
NetworkSceneManagerDefault.cs script, which loads and switches among available scenes. This is part of the Fusion assembly.
MRSceneLoader script, which loads the OVRSceneManager.
ColocationDriverNetObj.cs script, which sets up the conditions for creating a colocated space.
NetworkRunner prefab, which carries communications between the Unity-Discover app and the Photon Fusion network. This is part of the Fusion assembly.
ColocationLauncher.cs script, which creates the colocated spaces for both host and players.
SharedAnchorManager.cs script, which creates, saves, and shares the SSA.
AlignmentAnchorManager.cs script, which aligns players to the SSA.
NetworkApplicationManager.cs script, which launches the games available in the room.
Player prefab, which handles the player characteristics within the games. This is part of the Fusion assembly.

The DiscoverAppController Prefab

A Host Creates a New Room

This section follows the Unity-Discover application as it starts up and creates the colocated space for gameplay by one or more players.

Get the User Input

At program start, the NetworkModalWindowController.cs script’s ShowNetworkSelectionMenu method displays the user’s selection options. Among these is whether it’s a host or player request. This decision governs whether to create a new room or look for the room specified by the user. This is important, because the path for a host includes most of the preliminary room setup.

// NetworkModalWindowController.cs
30 public void ShowNetworkSelectionMenu(
31     Action hostAction, // roomName
32     Action<string, bool> joinAction, // roomName, isRemote
33     Action singlePlayerAction,
34     Action onRegionSelected,
35     string defaultRoomName = null
36 )
37 {
38     m_networkSelectionMenu
39         .Initialize(hostAction, joinAction, singlePlayerAction, ShowSettingsPage, defaultRoomName);
40     m_settingsPage.OnNetworkRegionSelected = onRegionSelected;
41     m_otherActive = true;
42     m_uiParent.SetActive(true);
43     m_networkSelectionMenu.gameObject.SetActive(true);
44 }

Then, in the DiscoverAppController.cs Start() method, the options are displayed:

// DiscoverAppController.cs
54 private void Start()
55 {
56     MainMenuController.Instance.EnableMenuButton(false);
57     ShowNetworkNux(); // present the user options
58 }

Start the Connection and Initial Scene

After the user has opted to be a host or a player, the DiscoverAppController.cs script starts the next steps. If the user is a host, StartHost() is called; if the user is a player, StartClient() is called instead. We describe the player path later in A Player Joins an Existing Room.

// DiscoverAppController.cs
98 public void StartHost()
99 {
100     NUXManager.Instance.StartNux(
101         NUX_EXPERIENCE_KEY,
102         () => StartConnection(true));
103 }

StartHost() makes a call to the NUXController.cs script’s StartNux() method, which in turn depends on the DiscoverAppController.cs script’s StartConnectionAsync() method:

// DiscoverAppController.cs
125 private async void StartConnectionAsync(bool isHost, GameMode mode = GameMode.Shared)
126 {
127     if (isHost) // if this is a host, create a new room
128     {
129         NetworkModalWindowController.Instance.ShowMessage("Loading Room");
130         _ = await m_mrSceneLoader.LoadScene();
131     }
132
133     SetupForNetworkRunner(); // DiscoverAppController.cs (this file), line 247, instantiates and configures the NetworkRunner
134     NetworkModalWindowController.Instance.ShowMessage("Connecting to Photon...");
135     ColocationDriverNetObj.OnColocationCompletedCallback += OnColocationReady;
136     ColocationDriverNetObj.SkipColocation = AvatarColocationManager.Instance.IsCurrentPlayerRemote;
137     await Connect(isHost, mode); // DiscoverAppController.cs (this file), line 142
138 }

The call to m_mrSceneLoader.LoadScene() is to the MRSceneLoader.cs script, which creates the initial scene.

// MRSceneLoader.cs
17 public async UniTask LoadScene()
18 {
19     if (!m_sceneLoaded)
20     {
21         m_sceneLoadingTask = new();

Set up the Session

The Connect(isHost, mode) method in the DiscoverAppController.cs script sets up the session, sets up the game arguments per the user selections, and then attempts to start the game using Runner.StartGame() (in this case, “game” refers to the initial room):

// DiscoverAppController.cs
139 await Connect(isHost, mode);
...
168 var joined = await Runner.StartGame(args);

Set up for Colocation

The Runner object is the NetworkRunner prefab, and part of the Fusion assembly. The OnConnectedToServer action in the DiscoverAppController.cs script spawns the ColocationDriverNetObj prefab (m_colocationPrefab).

// DiscoverAppController.cs
296 public void OnConnectedToServer(NetworkRunner runner)
297 {
298     NetworkModalWindowController.Instance.ShowMessage(
299         $"Connected To Photon Session: {runner.SessionInfo.Name}");
...
310     Debug.Log("Spawn Colocation Prefab");
311     _ = Runner.Spawn(m_colocationPrefab);

When spawned, the ColocationDriverNetObj.cs script issues an Init() call, which awaits completion of the SetupForColocation() method.

// ColocationDriverNetObj.cs
72 private async void Init()
73 {
74     m_ovrCameraRigTransform = FindObjectOfType<OVRCameraRig>().transform;
75     m_oculusUser = await OculusPlatformUtils.GetLoggedInUser();
76     m_headsetGuid = Guid.NewGuid();
77     await SetupForColocation();
78 }
79
80 private async UniTask SetupForColocation()
81 {
82     if (HasStateAuthority)
83     {
84         Debug.Log("SetUpAndStartColocation for host");

The SetupForColocation() method does a lot. Among other things, it kicks off and waits for the Photon network. When that is ready, the method instantiates a SharedAnchorManager and an AlignmentAnchorManager, which are both key components in setting up the SSA.

// ColocationDriverNetObj.cs
100 var sharedAnchorManager = new SharedAnchorManager
101 {
102     AnchorPrefab = m_anchorPrefab
103 };
104
105 m_alignmentAnchorManager =
106     Instantiate(m_alignmentAnchorManagerPrefab).GetComponent<AlignmentAnchorManager>();
107
108 m_alignmentAnchorManager.Init(m_ovrCameraRigTransform);

With these available, it sets up the ColocationLauncher object:

// ColocationDriverNetObj.cs
116 m_colocationLauncher = new ColocationLauncher();
117 m_colocationLauncher.Init(
118     m_oculusUser?.ID ?? default,
119     m_headsetGuid,
120     NetworkAdapter.NetworkData,
121     NetworkAdapter.NetworkMessenger,
122     sharedAnchorManager,
123     m_alignmentAnchorManager,
124     overrideEventCode
125 );

Finally, it calls the CreateColocatedSpace() method.

// ColocationDriverNetObj.cs
128 if (HasStateAuthority)
129 {
130     m_colocationLauncher.CreateColocatedSpace();
131 }

Launch the Colocated Space and Establish the SSA

The m_colocationLauncher variable is a ColocationLauncher object. In the ColocationLauncher.cs script, the CreateColocatedSpace() method calls CreateAlignmentAnchor, which creates the spatial anchor and saves it to the cloud. We look into this more deeply in the following sections.

// ColocationLauncher.cs
213 private async UniTaskVoid CreateNewColocatedSpace() {
214     _myAlignmentAnchor = await CreateAlignmentAnchor();
215     if (_myAlignmentAnchor == null) {
216         Debug.LogError("ColocationLauncher: Could not create the anchor");
217         return;
218     }

Create the Anchor

Here’s how the anchor is created. CreateAlignmentAnchor is also in the ColocationLauncher.cs script. It immediately calls the SharedAnchorManager.cs script’s CreateAnchor method, which in turn calls InstantiateAnchor right away. InstantiateAnchor uses the game object to create a new anchor based on the AnchorPrefab prefab.

// ColocationLauncher.cs
316 private async UniTask CreateAlignmentAnchor() {
315     var anchor = await _sharedAnchorManager.CreateAnchor(Vector3.zero, Quaternion.identity);
316     if (anchor == null) {
317         Debug.Log("ColocationLauncher: _sharedAnchorManager.CreateAnchor returned null");
318     }

// SharedAnchorManager.cs
31 public async UniTask CreateAnchor(Vector3 position, Quaternion orientation) {
32     Debug.Log("CreateAnchor: Attempt to InstantiateAnchor");
33     var anchor = InstantiateAnchor();
34     Debug.Log("CreateAnchor: Attempt to Set Position and Rotation of Anchor");
...
153 private OVRSpatialAnchor InstantiateAnchor() {
154     GameObject anchorGo;
155     if (AnchorPrefab != null) {
156         anchorGo = Object.Instantiate(AnchorPrefab);
157     } else {
158         anchorGo = new GameObject();
159         anchorGo.AddComponent<OVRSpatialAnchor>();
160     }

Save the Anchor

With the anchor created, we need to save it to the cloud. Back in ColocationLauncher.cs, CreateAlignmentAnchor calls the SharedAnchorManager.cs script’s SaveLocalAnchorsToCloud method.

// ColocationLauncher.cs
316 private async UniTask CreateAlignmentAnchor() {
...
322     Debug.Log($"ColocationLauncher: Anchor created: {anchor?.Uuid}");
323
324     bool isAnchorSavedToCloud = await _sharedAnchorManager.SaveLocalAnchorsToCloud();

In the SharedAnchorManager.cs script, the SaveLocalAnchorsToCloud method is where we finally see the call to OVRSpatialAnchor.Save, which saves the new anchor to the cloud.

// SharedAnchorManager.cs
56 public async UniTask SaveLocalAnchorsToCloud() {
...
63     OVRSpatialAnchor.Save(
64         localAnchors,
65         new OVRSpatialAnchor.SaveOptions { Storage = OVRSpace.StorageLocation.Cloud },
66         (_, result) => { utcs.TrySetResult(result == OVRSpatialAnchor.OperationResult.Success); }
67     );

The anchor is now ready to be shared with other players.

Align the Host and Anchor

Earlier, we described how the CreateColocatedSpace() method in the ColocationLauncher.cs script calls CreateAlignmentAnchor. Later in that same method, it calls AlignPlayerToAnchor:

// ColocationLauncher.cs
220 uint newColocationGroupdId = _networkData.GetColocationGroupCount();
221 _networkData.IncrementColocationGroupCount();
222 _networkData.AddAnchor(new Anchor(true, _myAlignmentAnchor.Uuid.ToString(), _myOculusId, newColocationGroupdId));
223 _networkData.AddPlayer(new Player(_myOculusId, newColocationGroupdId));
224 AlignPlayerToAnchor();
225 await UniTask.Yield();
226 }

This method is part of the AlignmentAnchorManager.cs script, and is called for every player, including the host. This is an async operation, so the actual work falls to the AlignmentCoroutine method to align both the user camera and hands.

// AlignmentAnchorManager.cs
62 private IEnumerator AlignmentCoroutine(OVRSpatialAnchor anchor, int alignmentCount) {
63     Debug.Log("AlignmentAnchorManager: called AlignmentCoroutine");
...
66     _cameraRigTransform.position = anchorTransform.InverseTransformPoint(Vector3.zero);
66     _cameraRigTransform.eulerAngles = new Vector3(0, -anchorTransform.eulerAngles.y, 0);
...
75     _playerHandsTransform.localPosition = -_cameraRigTransform.position;
76     _playerHandsTransform.localEulerAngles = -_cameraRigTransform.eulerAngles;

At this point, the game options are displayed to the host. In the following graphic, you can see the location of the SSA (the 0,0,0 point for all room activity). This is the point where the host was standing when the spatial anchor was created.

App Selection and SSA

Sharing the Anchor

Each of the colocation methods in the ColocationLauncher.cs script makes a call to AttemptToShareAndLocalizeToAnchor (also in ColocationLauncher.cs). At the end of this method, a call to TellOwnerToShareAnchor transfers control to that method later in the script.

// ColocationLauncher.cs
244 private UniTask AttemptToShareAndLocalizeToAnchor(Anchor anchor) {
245     Debug.Log(
246         $"ColocationLauncher: Called AttemptToShareAndLocalizeToAnchor with id: {anchor.uuid} and oculusId: {_myOculusId}"
247     );
...
277     _networkMessenger.SendMessageUsingOculusId(
278         _caapEventCodeDictionary[CaapEventCode.TellOwnerToShareAnchor],
279         anchorOwner,
280         data
281     );

The TellOwnerToShareAnchor method calls the SharedAnchorManager.cs script’s ShareAnchorsWithUser method.

```
// ColocationLauncher.cs
private async void TellOwnerToShareAnchor(object data) {
  Debug.Log($"ColocationLauncher: TellOwnerToShareAnchor with oculusId: {_myOculusId}");
  var shareAndLocalizeParams = (ShareAndLocalizeParams) data;
  ulong requestedAnchorOculusId = shareAndLocalizeParams.oculusIdAnchorRequester;
  bool isAnchorSharedSuccessfully = await _sharedAnchorManager.ShareAnchorsWithUser(requestedAnchorOculusId);
```

And it is in the ShareAnchorsWithUser method where the actual call to OVRSpatialAnchor.Share takes place.

```
// SharedAnchorManager.cs
public async UniTask<bool> ShareAnchorsWithUser(ulong userId) {
  _userShareList.Add(new OVRSpaceUser(userId));
  ...
  OVRSpatialAnchor.Share(
    localAnchors,
    users,
    (_, result) => { utcs.TrySetResult(result == OVRSpatialAnchor.OperationResult.Success); }
  );
```

A Player Joins an Existing Room

When a player chooses to join an existing room, the flow is much simpler. If the user is a player, the StartClient() method in the DiscoverAppController.cs script is called, which runs the StartNux() method from NUXController.cs.

```
// DiscoverAppController.cs
public void StartClient()
{
  ...
    NUXManager.Instance.StartNux(
      NUX_EXPERIENCE_KEY,
      () => StartConnection(false));
  }
}
```

StartNux depends on the StartConnectionAsync() method in the DiscoverAppController.cs script. As with the host, the OnConnectedToServer action is invoked; however, only the player portions are executed:

```
// DiscoverAppController.cs
m_playerObject = runner.Spawn(
    m_playerPrefab, onBeforeSpawned: (_, obj) =>
    {
        obj.GetComponent().IsRemote =
            AvatarColocationManager.Instance.IsCurrentPlayerRemote;
    });
runner.SetPlayerObject(runner.LocalPlayer, m_playerObject);
MainMenuController.Instance.EnableMenuButton(true);
```

Colocate the Player

For the player joining a room, the ColocateAutomaticallyInternal() method in the ColocationLauncher.cs script gets all known alignment anchors (for Unity-Discover, there is just one) and aligns the player to them.

```
// ColocationLauncher.cs
List<Anchor> alignmentAnchors = GetAllAlignmentAnchors();
foreach (var anchor in alignmentAnchors)
  if (await AttemptToShareAndLocalizeToAnchor(anchor)) {
    successfullyAlignedToAnchor = true;
    Debug.Log($"ColocationLauncher: successfully aligned to anchor with id: {anchor.uuid}");
    _networkData.AddPlayer(new Player(_myOculusId, anchor.colocationGroupId));
    AlignPlayerToAnchor();
    break;
  }
```

We will look at how the player is aligned to the SSA next.

Aligning the Player and Loading the SSA

As we learned in Sharing the Anchor, each of the colocation methods in the ColocationLauncher.cs script makes a call to AttemptToShareAndLocalizeToAnchor. For a joining player, the method calls LocalizeAnchor:

```
// ColocationLauncher.cs
private UniTask<bool> AttemptToShareAndLocalizeToAnchor(Anchor anchor) {
  Debug.Log(
    $"ColocationLauncher: Called AttemptToShareAndLocalizeToAnchor with id: {anchor.uuid} and oculusId: {_myOculusId}"
  );
  ...
  var sharedAnchorId = new Guid(anchor.uuid.ToString());
  LocalizeAnchor(sharedAnchorId);
  return _alignToAnchorTask.Task;
```

The LocalizeAnchor method calls the RetrieveAnchors method in the SharedAnchorManager.cs script:

```
// ColocationLauncher.cs
private async void LocalizeAnchor(Guid anchorToLocalize) {
  Debug.Log($"ColocationLauncher: Localize Anchor Called id: {_myOculusId}");
  IReadOnlyList<OVRSpatialAnchor> sharedAnchors = null;
  Guid[] anchorIds = {anchorToLocalize};
  sharedAnchors = await _sharedAnchorManager.RetrieveAnchors(anchorIds);
```

And in RetrieveAnchors, the anchors are loaded and bound.

```
// SharedAnchorManager.cs
public async UniTask<IReadOnlyList<OVRSpatialAnchor>> RetrieveAnchors(Guid[] anchorIds) {
  ...
  OVRSpatialAnchor.LoadUnboundAnchors(
    new OVRSpatialAnchor.LoadOptions {
      StorageLocation = OVRSpace.StorageLocation.Cloud,
      Timeout = 0,
      Uuids = anchorIds
    },
  ...
  foreach (var unboundAnchor in unboundAnchors) {
    var anchor = InstantiateAnchor();
    try {
      unboundAnchor.BindTo(anchor);
      _sharedAnchors.Add(anchor);
      createdAnchors.Add(anchor);
      createTasks.Add(UniTask.WaitWhile(() => anchor.PendingCreation, PlayerLoopTiming.PreUpdate));
    }
```

At this point, the user can participate in any multiuser activity. All gameplay is relative to the one SSA.
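Because every device realigns its camera rig to the same anchor, the anchor effectively becomes the shared origin. A related pattern, useful when you prefer not to move the rig, is to exchange poses in anchor-local space. The following is a minimal sketch, not part of the Discover sample; anchorTransform is assumed to be the transform of the bound alignment anchor on the local device.

```
// Minimal sketch (not from the Discover sample): convert positions between
// world space and anchor-local space so that networked poses mean the same
// thing on every colocated headset.
using UnityEngine;

public static class AnchorSpaceUtil
{
    // Convert a world-space position into anchor-local space before sending
    // it over the network.
    public static Vector3 ToAnchorSpace(Transform anchorTransform, Vector3 worldPosition) =>
        anchorTransform.InverseTransformPoint(worldPosition);

    // Convert a received anchor-local position back into this device's world space.
    public static Vector3 ToWorldSpace(Transform anchorTransform, Vector3 anchorLocalPosition) =>
        anchorTransform.TransformPoint(anchorLocalPosition);
}
```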

More Information on Unity-Discover

The GitHub page Unity-Discover Documentation provides developer information on building, using, and understanding the app.

Shared Spatial Anchors Troubleshooting Guide


See also Tips for Using Spatial Anchors.

This topic provides troubleshooting tips for a variety of common situations known to affect Shared Spatial Anchors for the Meta Quest OS.

Prerequisites

Shared Spatial Anchors is available for Meta Quest Pro on v47 or higher, and Meta Quest 2 on v49 or higher. Most of the issues on this page require device logs to diagnose. Refer to Set up your environment for using adb logcat to retrieve logs.

Common Troubleshooting Scenarios

This section lists several scenarios that you may encounter during development. It briefly explains the cause and provides the resolution to fix the issue.

Ensuring Share Point Cloud Data is enabled

The device setting ‘Share Point Cloud Data’ must be enabled for Shared Spatial Anchors to function. Users can find it under Settings > Privacy > Device Permissions > Share Point Cloud Data.

Your app can detect when this setting is disabled and inform users to turn it on. Your app will receive the error code

OVRSpatialAnchor.OperationResult.Failure_SpaceCloudStorageDisabled

upon sharing an SSA or saving an SSA to the cloud when this setting is disabled. Your app should check for this error and inform users that enabling Share Point Cloud Data is required for SSA to function in your experience.
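As a hedged illustration (not code from any Meta sample), the sketch below saves a single anchor to the cloud and surfaces a prompt when the result is Failure_SpaceCloudStorageDisabled. The Save overload and options match the snippets shown elsewhere in this guide; ShowSharePointCloudPrompt is a hypothetical UI hook in your own app.

```
// Minimal sketch: detect the Share Point Cloud Data setting being disabled
// when saving an anchor to the cloud, and prompt the user to enable it.
using System.Collections.Generic;
using UnityEngine;

public class CloudStorageCheckExample : MonoBehaviour
{
    public void SaveToCloud(OVRSpatialAnchor anchor)
    {
        OVRSpatialAnchor.Save(
            new List<OVRSpatialAnchor> { anchor },
            new OVRSpatialAnchor.SaveOptions { Storage = OVRSpace.StorageLocation.Cloud },
            (_, result) =>
            {
                if (result == OVRSpatialAnchor.OperationResult.Failure_SpaceCloudStorageDisabled)
                {
                    // Hypothetical UI hook: tell the user to enable
                    // Settings > Privacy > Device Permissions > Share Point Cloud Data.
                    ShowSharePointCloudPrompt();
                }
            });
    }

    private void ShowSharePointCloudPrompt()
    {
        Debug.Log("Enable Share Point Cloud Data in Settings > Privacy > Device Permissions.");
    }
}
```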

Inability to Load and Locate a Spatial Anchor

There are a couple of instances when the system cannot locate a previously stored spatial anchor:

It does not recognize the space the user is in.
When attempting to load an anchor from the Cloud storage location, verify that the Share Point Cloud Data setting is enabled.

To recover from this, you need to either:

reconstruct the scene by having the user place the content where it belongs, or
have the user walk to known locations to help the system recover the content, or,
in the case of Shared Spatial Anchors, inform users to turn on Share Point Cloud Data.

Your App is Untrusted

Operations on Persisted Anchors fail, indicating the package is not trusted.

When accessing local or shared spatial anchors, we verify the identity of the application requesting access to persisted Spatial Anchors. Because this verification uses information registered in the Store, your application will not be able to persist or share Spatial Anchors until you register your app on dashboard.oculus.com.

If you are encountering this issue, the logs will contain a message of the form: Package <your application’s package ID> is not trusted, status: ..., sessionUuid: ...

Resolution

1. Navigate to dashboard.oculus.com.
2. Click “Create New App” under your developer organization. Choose either “Quest (Store)” or “Quest (App Lab)”. (If you will use Meta Quest Link to run the app from your PC, repeat these steps to also create a Rift app.)
3. Navigate to “Data Use Checkup” (from the left panel) and request access to the “User ID” and “User Profile” platform features.
4. Navigate to “API” (from the left panel) and note the “App ID”.
5. In your Unity project, navigate to Oculus / Platform / Edit Settings. Specify the Quest “App Id” from above as “Oculus Go/Quest”. If you want to test with Meta Quest Link, specify the Rift “App Id” as “Oculus Rift”.
6. Put the headset in developer mode.
7. Make sure you are logged in as a developer or with a test account from the developer organization that owns the application you are developing.

Anchor Download Fails

Client fails to download the specified anchors. Common reasons that lead to this error are:

The anchors do not exist on the cloud, or the user attempting to download the anchors does not have access to them.
The anchors were downloaded, but the device was unable to localize itself in the point cloud received from the sharing device because the user has not observed enough of the environment.

To determine which issue you are hitting, look for the following messages in the log.

If you see any of these messages, the download step itself has failed:

xr_cloud_anchor_service: Downloaded 0 anchors
xr_cloud_anchor_service: Failed to download Map for spatial anchors
xr_cloud_anchor_service: Failed to download spatial anchor with error: ...

If the log channel SlamAnchorRuntimeIpcServer contains the message below, it means the user was able to download the anchors, but the device was unable to localize itself in the spatial data received from the sharing device. You may see the error code OVRSpatialAnchor.OperationResult.Failure_SpaceMappingInsufficient and a message Import task failed with code: ... message: ...

If you see the error code OVRSpatialAnchor.OperationResult.Failure_SpaceNetworkRequestFailed or OVRSpatialAnchor.OperationResult.Failure_SpaceNetworkTimeout, it means there was a network issue connecting to the cloud. Make sure your device's Wi-Fi connection is working and try again.

Resolution

The following are common reasons why anchors may not be present in the cloud, or the user may not have access, causing the download to fail:

The map/anchors have expired. There is a Time to Live (TTL) for any anchor that is uploaded to the cloud. After the TTL expires, the anchor is erased from the cloud.
The Spatial Anchor was not successfully uploaded to the cloud.
The intended recipient of the anchor does not have access. The sender may not have shared the anchor, or the share operation may have failed.
The device is not connected to the network. Make sure your device's Wi-Fi is working and try again.

To resolve this issue, the sender of the anchor needs to upload and share the anchor again by following these steps:

1. Create a new anchor with a new UUID, then save and share it again.
2. Confirm the upload was successful by searching for the following messages in logcat: xr_cloud_anchor_service: Successfully uploaded spatial anchors: and xr_cloud_anchor_service: UUID:. If the upload was not successful, follow the troubleshooting steps from the Anchor Upload Fails section.
3. Confirm sharing was successful by searching for the following message in logcat: xr_cloud_anchor_service: Share spatial anchor success. If the sharing was not successful, follow the troubleshooting steps from the Anchor Sharing Fails section.

Anchor Upload Fails

The Spatial Anchors are failing to upload. Search for the following messages to identify the mode of failure: xr_cloud_anchor_service: Number of anchors uploaded did not match number of anchors returned and xr_cloud_anchor_service: Failed to upload spatial anchor with error: .... These error messages indicate that the upload of Spatial Anchors to the cloud has failed. A common reason is an internal service issue, such as resource limits or the endpoint being unavailable.

Resolution

Check the actual error description for details on why the operation failed. If the error description indicates an issue with the Spatial Anchor itself, try creating and uploading a new Spatial Anchor.

Anchor Sharing Fails

Sharing Spatial Anchors with other users fails. This can be due to a few reasons:

The specified user ID does not exist or is not valid.
The Spatial Anchor was not uploaded to the cloud.

The first step to debugging issues with sharing is to check the result codes returned by the Share() functions. See below for possible causes and mitigations for several common error conditions.

To further diagnose this issue, search for the following message in the logs: xr_cloud_anchor_service: Failed to share spatial anchor with error: ...

Resolution

If the Spatial Anchor is not uploaded, first invoke the save operation to the cloud, and then share the anchor again. If a Spatial Anchor is already uploaded, you do not have to re-upload the anchor to share it again. In this case, invoke the share operation without invoking the save operation.
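A minimal sketch of this save-then-share flow is shown below. It is not taken from any Meta sample; it reuses the Save and Share overloads and the OVRSpaceUser constructor shown earlier in this guide, and checks the result code at each step.

```
// Minimal sketch: upload an anchor to the cloud, then share it with a set of
// users. If the anchor is already uploaded, Share can be called on its own.
using System.Collections.Generic;
using UnityEngine;

public class SaveThenShareExample : MonoBehaviour
{
    public void SaveThenShare(OVRSpatialAnchor anchor, IEnumerable<ulong> userIds)
    {
        var anchors = new List<OVRSpatialAnchor> { anchor };
        var users = new List<OVRSpaceUser>();
        foreach (var id in userIds)
        {
            users.Add(new OVRSpaceUser(id));
        }

        OVRSpatialAnchor.Save(
            anchors,
            new OVRSpatialAnchor.SaveOptions { Storage = OVRSpace.StorageLocation.Cloud },
            (savedAnchors, saveResult) =>
            {
                if (saveResult != OVRSpatialAnchor.OperationResult.Success)
                {
                    Debug.LogWarning($"Anchor upload failed: {saveResult}");
                    return;
                }

                // Only share once the cloud upload succeeded.
                OVRSpatialAnchor.Share(anchors, users, (sharedAnchors, shareResult) =>
                {
                    Debug.Log($"Share result: {shareResult}");
                });
            });
    }
}
```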

Anchor Location is Incorrect

The Shared Anchor is not in the same place on the sender's and recipient's devices.

During testing, you may find that the Spatial Anchor that was shared is not in the expected position on the recipient’s device. This can happen if the system does not correctly localize the Shared Anchor.

Resolution

For the best experience, we recommend that users enter Passthrough and walk around in a large circle (while communicating with and staying mindful of the location of others) around the center of their playspace before using experiences that use Shared Spatial Anchors. Optionally, you may communicate to the user to walk and look around the playspace. If the anchor’s pose does not automatically correct, you can destroy the anchor and download it again.
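The sketch below illustrates the destroy-and-reload step. It is an assumption-laden example rather than sample code: it reuses the LoadUnboundAnchors options and the BindTo pattern shown earlier in this guide, and it skips prefab handling and localization checks for brevity.

```
// Minimal sketch: discard a mislocated shared anchor and download it again
// from the cloud by UUID, binding the result to a fresh OVRSpatialAnchor.
using System;
using UnityEngine;

public class ReloadAnchorExample : MonoBehaviour
{
    public void ReloadAnchor(OVRSpatialAnchor staleAnchor)
    {
        Guid uuid = staleAnchor.Uuid;
        Destroy(staleAnchor.gameObject);

        OVRSpatialAnchor.LoadUnboundAnchors(
            new OVRSpatialAnchor.LoadOptions
            {
                StorageLocation = OVRSpace.StorageLocation.Cloud,
                Timeout = 0,
                Uuids = new[] { uuid }
            },
            unboundAnchors =>
            {
                if (unboundAnchors == null || unboundAnchors.Length == 0)
                {
                    Debug.LogWarning("Anchor could not be downloaded again.");
                    return;
                }

                // Same pattern as RetrieveAnchors in the Discover walkthrough:
                // create a component and bind the unbound anchor to it.
                var go = new GameObject($"Anchor {uuid}");
                var anchor = go.AddComponent<OVRSpatialAnchor>();
                unboundAnchors[0].BindTo(anchor);
            });
    }
}
```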

Handling SSA Error Codes

XR_ERROR_SPACE_CLOUD_STORAGE_DISABLED_FB

You encounter

OVRSpatialAnchor.OperationResult.Failure_SpaceCloudStorageDisabled

when attempting to save, load, or share Spatial Anchors

When attempting to upload, download, or share Spatial Anchors, you may receive the error “cloud storage disabled”. Additionally, the logs will contain one or more of the following:

Request denied based on storage location for package {}, sessionUuid:{}
getCloudPermissionEnabled: oculus_spatial_anchor_cloud=false

Some known causes are:

The user has disabled “Share point cloud data” in Settings > Privacy, or has selected “Not now” in the dialog presented the first time they launch an experience that supports point cloud data sharing. In this case, the logs should contain any of:

PreferencesManager: anchor_persistence_cloud_anchor_service_enabled: false
userPreference(anchor_persistence_cloud_anchor_service_enabled)=false

The headset and/or OS version does not support Shared Spatial Anchors. In this case, the logs should contain any of:

checkGatekeeper gatekeeperName: oculus_spatial_anchor_cloud, value: false
GK(oculus_spatial_anchor_cloud)=false

Resolution

The user must take action to grant permissions to share point cloud data for this feature to work. You can surface a prompt to indicate to the user to enable sharing point cloud data from Settings > Privacy.

XR_ERROR_SPACE_COMPONENT_NOT_ENABLED_FB

You encounter XR_ERROR_SPACE_COMPONENT_NOT_ENABLED_FB when attempting to save, upload, or share an anchor.

You are attempting the operation for an anchor that does not have the required components enabled. This could be because you are attempting to save, upload, or share a Scene Anchor, which is managed by the Meta Quest operating system.

Resolution

Make sure every anchor you are attempting to save or share is a Spatial Anchor (not a Scene Anchor). In Unity and Unreal, the class of the object you are using will indicate whether it is a Spatial Anchor.
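In Unity, one way to guard against this, sketched below under the assumption that your project uses the OVRSpatialAnchor and OVRSceneAnchor components from the Meta XR SDK, is to check which component a GameObject carries before attempting to persist it.

```
// Minimal sketch: only attempt Save/Share on objects that carry an
// OVRSpatialAnchor. Scene anchors (OVRSceneAnchor) are managed by the
// Meta Quest operating system and cannot be saved or shared by the app.
using UnityEngine;

public static class AnchorTypeCheck
{
    public static bool CanPersist(GameObject anchorObject)
    {
        // A Scene Anchor is system-managed; skip it.
        if (anchorObject.TryGetComponent<OVRSceneAnchor>(out _))
        {
            return false;
        }

        // Only user-created Spatial Anchors may be saved or shared.
        return anchorObject.TryGetComponent<OVRSpatialAnchor>(out _);
    }
}
```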

If you are using the OpenXR API, use the following code to check for component status before saving or uploading an anchor, given an XrSpace:

```
XrSpaceComponentStatusFB storableStatus = {XR_TYPE_SPACE_COMPONENT_STATUS_FB};
xrGetSpaceComponentStatusFB(space, XR_SPACE_COMPONENT_TYPE_STORABLE_FB, &storableStatus);
if (storableStatus.enabled) {
  // Save or upload the anchor.
}
```

Use the following code to check for component status before sharing an anchor, given an XrSpace:

```
XrSpaceComponentStatusFB sharableStatus = {XR_TYPE_SPACE_COMPONENT_STATUS_FB};
xrGetSpaceComponentStatusFB(space, XR_SPACE_COMPONENT_TYPE_SHARABLE_FB, &sharableStatus);
if (sharableStatus.enabled) {
  // Share the anchor.
}
```

XR_ERROR_SPACE_MAPPING_INSUFFICIENT_FB

You encounter

OVRSpatialAnchor.OperationResult.Failure_SpaceMappingInsufficient

when attempting to save, load, or share an anchor.

This error occurs when the device's mapping of the current physical surroundings is not complete enough to reliably save or load an SSA.

Resolution

Prompt users to look around the room to ensure the device has fully mapped the space, and then retry the operation that led to this failure.

XR_ERROR_SPACE_LOCALIZATION_FAILED_FB

You encounter

OVRSpatialAnchor.OperationResult.Failure_SpaceLocalizationFailed

when attempting to save, load, or share an anchor.

This error occurs when anchors are successfully loaded from the cloud, but cannot be aligned with the device’s map of its physical surroundings.

Resolution

This error typically occurs when there is poor coordination between the user who has saved and shared the SSA and the user attempting to load the SSA (for example, the host user asks a guest user to load an anchor that is not located in the guest's current space).

A possible mitigation is to prompt users to look around the room to ensure the device has fully mapped the space, and then retry the operation that led to this failure.

XR_ERROR_SPACE_NETWORK_TIMEOUT_FB

You encounter

OVRSpatialAnchor.OperationResult.Failure_SpaceNetworkTimeout

when attempting to save, load, or share an anchor.

This error occurs when a cloud-based save or load fails to complete due to a network timeout.

Resolution

Retrying the operation is the first step to mitigating this error. It may also be appropriate to alert the user that their network connection is too slow to reliably load, save, or share SSAs and limit this functionality within your app if the issue persists.
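The sketch below shows one possible retry loop. It is illustrative only (the retry count and delay are arbitrary values, not SDK recommendations, and it is not taken from any Meta sample), using the Save overload shown earlier in this guide.

```
// Minimal sketch: retry a cloud save a few times when the result is a
// network timeout, then warn the user if it still fails.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class SaveWithRetryExample : MonoBehaviour
{
    public void SaveWithRetry(OVRSpatialAnchor anchor) => StartCoroutine(SaveRoutine(anchor, 3));

    private IEnumerator SaveRoutine(OVRSpatialAnchor anchor, int attemptsLeft)
    {
        bool done = false;
        var result = OVRSpatialAnchor.OperationResult.Success;

        OVRSpatialAnchor.Save(
            new List<OVRSpatialAnchor> { anchor },
            new OVRSpatialAnchor.SaveOptions { Storage = OVRSpace.StorageLocation.Cloud },
            (_, saveResult) => { result = saveResult; done = true; });

        yield return new WaitUntil(() => done);

        if (result == OVRSpatialAnchor.OperationResult.Failure_SpaceNetworkTimeout && attemptsLeft > 1)
        {
            // Arbitrary back-off before the next attempt.
            yield return new WaitForSeconds(2f);
            yield return SaveRoutine(anchor, attemptsLeft - 1);
        }
        else if (result != OVRSpatialAnchor.OperationResult.Success)
        {
            Debug.LogWarning($"Cloud save failed after retries: {result}");
        }
    }
}
```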

XR_ERROR_SPACE_NETWORK_REQUEST_FAILED_FB

You encounter

OVRSpatialAnchor.OperationResult.Failure_SpaceNetworkRequestFailed

when attempting to save, upload, or share an anchor.

This error occurs when a cloud-based save or load fails to complete for reasons other than a network timeout.

Resolution

It is probable that the device has lost internet connectivity if your app receives this error. It may be appropriate to alert the user that their network connection is unreliable and/or to retry the attempted operation.

Meta XR Simulator Introduction

Overview

Meta XR Simulator is a lightweight XR runtime built for developers that enables the simulation of Meta Quest headsets and features on the API level. It makes day-to-day development easier by enabling testing and debugging of apps without the need to put on and take off a headset frequently, and helps scale automation by simplifying your testing environment setup.

Please visit this page to find the direct download of the latest version, and the release notes.

Getting started

Feature Docs

Meta XR Simulator Introduction
Passthrough Simulation
Scene Recorder
Scene Json Formatting
Session Capture (Record and Replay)
Multiplayer Testing
Body Tracking
Hands Tracking (Experimental)
Data Forwarding

User interface

The full UI for Meta XR Simulator.

Device

The Device window allows you to configure the simulated device, including the model, IPD, and refresh rate. Note that some changes will only take effect after restarting the simulator.

Graphics Details

The Graphics Details window allows you to inspect the composition layers and swapchains sent from your application.

Device Input

The Device Input window allows you to inspect the state of the controllers and the headset, including their poses and button press states. It also shows which device is actively being controlled by your keyboard, mouse, or Xbox controller.

Input Instruction

The Input Instruction window gives you the information you need to control the simulated headset using a keyboard/mouse or Xbox controller. Some common operations have shortcuts for your convenience (such as grabbing and continuous head rotation).

Session Capture

With Session Capture, you can record an arbitrary series of actions, save it locally, and play it back at a later time. For more information, please refer to Session Capture.

Eye Selector

The Eye Selector controls which eye's view is displayed. It is useful for making sure that both eyes render correctly.

Note: Click the Collapse button in the bottom left corner to collapse the left menu bar and get a minimized UI.

About

The About window gives you information about versioning, as well as some statuses of the simulator, such as its FPS, graphics API, and whether it is connected to the Synthetic Environment Server.

Settings

The Settings window allows you to enable data for different features. You can enable or disable Scene JSON and upload new JSON files. You can also enable or disable hand tracking data.

Meta XR Simulator

Overview

The Meta XR Simulator provides a lightweight XR runtime that runs on your development machine for rapid development and testing of XR applications. The Simulator is meant to be a drop-in replacement for the Mobile and PC XR runtimes, following the same XR API specification. This allows the application to run in Unity's Play mode or Unreal's Preview mode without modification. The Simulator has a predefined input mapping schema and a user interface that provides information on how the runtime is compositing the final view, simulating input, and more.

Prerequisites

Your Unity project should be properly set up to run on Meta Quest devices. If you are starting from scratch, feel free to refer to the steps in Build Your First VR App.

Install Meta XR Simulator

For Unity projects, we recommend installing Meta XR Simulator via Unity’s Asset Store:

1. Navigate to Meta XR Simulator on the Unity Asset Store.
2. Add Meta XR Simulator to your assets. If not added already, then click Open in Unity. The Package Manager window should now be open in your Unity project and displaying details about Meta XR Simulator.
3. Click Install to add Meta XR Simulator into your Unity project.

Alternatively, you can install Meta XR Simulator manually:

1. Navigate to Meta XR Simulator on the Meta Developer Center website and click the Download button.
2. Extract the contents of the downloaded zip file. A tarball file (ending in .tar.gz) should be present.
3. Within your Unity project, go to Window > Package Manager via the menu bar.
4. Click the + sign, then select Install package from tarball..., and navigate to the extracted tarball. Unity installs the tarball.
5. Close the Package Manager window.

For more information, see our documentation regarding Unity Package Manager.

Start Meta XR Simulator

Once you have imported the Meta XR Simulator, you must activate it. On the menu bar, go to Oculus > Meta XR Simulator > Activate to activate the simulator. There will be a log message titled Meta XR Simulator is activated indicating that the activation is successful.


Then, you can run your Unity application by clicking the Play button. The Debug Window of Meta XR Simulator will open. You can drag the panels to arrange them for your convenience.


Stop Meta XR Simulator

To stop Meta XR Simulator, you can either click the Play button again from Unity or click the Exit Session button at the top left corner of the Debug Window. If you want to go back to development on your physical headset, you can deactivate the Meta XR Simulator by selecting Oculus > Meta XR Simulator > Deactivate.

Running mixed reality applications

To run a mixed reality application in the Meta XR Simulator, in addition to activating it, you will need to launch the Synthetic Environment Server (SES) that enables mixed reality simulations. Select one of the three simulated environments through the menu Oculus > Meta XR Simulator > Synthetic Environment Server. You will see a server window popping up:


You can now minimize this window and have it running in the background. As usual, click the Play button and you should find your game running inside the simulated environment of your choice.


To stop the synthetic environment server, select Oculus > Meta XR Simulator > Synthetic Environment Server > Stop Server. It is recommended that Meta XR Simulator is exited first.

Meta XR Simulator does not support hot-switching between environments, but you may switch to another environment without closing the first one when the simulator is not running. To do so, just launch another server while the first one is running, and click Yes in the pop-up dialog:


(Optional) Using a sample scene to validate your installation

1. Install the Meta XR Simulator Samples package from the NPM Registry using Unity Package Manager.
2. Open SceneManager in the following location.
3. In the scene, select OVRCameraRig, and in the Inspector, remove the Passthrough Play In Editor script.
4. Following the instructions in the Running Mixed Reality Applications section, activate the simulator and launch a synthetic environment server. Then, click the Play button and you should see scene entities superimposed on the passthrough environment.

Troubleshooting

The debug window of Meta XR Simulator didn't open after clicking the Play button in the Unity editor. Assuming that you have successfully installed the XR Simulator package and activated it, the next thing you may want to check is the XR Provider setting. Meta XR Simulator is an OpenXR runtime and will only be loaded when the OpenXR Provider is initialized. You may want to check the Standalone tab of the XR Plugin Management settings, and enable the same XR Provider that is enabled in the Android tab (for example, the Oculus XR Plugin). The XR Provider under the Standalone tab will be used in the Unity editor, while the provider under the Android tab will be used on Meta Quest.

Meta XR Simulator Passthrough Scenes

Overview

Meta XR Simulator allows developers to simulate their MR applications in three realistic synthetic environments: a game room, a living room, and a bedroom. Your application will be able to leverage the scene information embedded in the environments, in addition to the passthrough content.

Showing the game room scene.

Showing the living room scene.

Showing the bedroom scene.

*Note: Passthrough stylization is not supported.

If you’re using Meta XR Simulator with Vulkan, you will need to manually set a configuration parameter for passthrough to work. First, open /path/to/MetaXRSimulator/config/sim_core_configuration.json. Then, set the value of ses_texture_format to be jpg. DirectX APIs currently have better performance in passthrough simulation.

Build Your Own Synthetic Environment Server

Overview

The Synthetic Environment Builder lets you use your own synthetic environment for mixed reality simulation in the Meta XR Simulator. It is a UPM package that you can import into your Unity project containing a synthetic environment, and turn that project into a Synthetic Environment Server.

Getting Started

Open the Unity project containing the synthetic environment you want to use for passthrough simulation (henceforth referred to as the “server project”). Note: the server project cannot be an XR project. Import the Synthetic Environment Builder package using the package manager. Under SynthEnvServer, open Scene Annotation Tool:

Illustrating the Scene Annotation Tool menu option in the Unity Editor.

An editor window will pop up:

Illustrating the graphical interface of the Scene Annotation Tool editor window.

Click Initialize Scene. Your server project should now be capable of passthrough simulation. To verify, run it in play mode. This will serve as the synthetic environment server. Meanwhile, run an MR application in Meta XR Simulator. You should see its passthrough content coming from the server project.

Adding Scene Information

*Note: Passthrough stylization is not supported.

To label a 3D entity: Select a game object in the scene.

A chair is selected in the Unity game editor.

In the Scene Annotation Tool, make sure the Selected Object is correct, and then click Make 3D Scene Entity:

Scene Annotation Tool UI after the chair is selected.

The game object should now be surrounded by a bounding box:

The chair is surrounded by a red wire box in the Unity Editor.

Apply the correct semantic label. Adjust center/size if needed.

Scene Annotation Tool UI with center/size and semantic label.

To label a 2D entity:

Select the game object that the 2D entity plane is going to be attached to, for example, a desk:

A desk is selected in the Unity game editor.

In the Scene Annotation Tool, make sure the Selected Object is correct, and then click Make 2D Scene Entity:

Scene Annotation Tool UI after the desk is selected.

This will create a plane under the selected game object:

A plane has been created but is in the wrong location (on the floor).

Adjust the position and size of the plane so it sits at the desired place. Make sure its orientation follows these rules:

For ceilings, floors, and walls, +y must point into the room.
For walls, doors, and windows, +x must point right, +z must point up, and +y must point into the room.
For other panels, +y is the up direction.

A wall with the right orientation setup.

The plane is at the right location (on the table surface).

Apply the correct semantic label.

Scene Annotation Tool UI with semantic label.

Note:

Copy-pasting a bounded 2D entity plane is not currently supported. Each scene must have exactly one ceiling and one floor. Aside from that, you can freely choose which objects to label according to your particular use case. Scene entities are highlighted by Unity Scene Gizmos. You can choose not to have them highlighted by unchecking Bounded 2D/3D Entities in the scene gizmos menu:

Unity Scene Gizmos with Bounded 2D and 3D Entities highlighted.

Optional: Displaying Client Positions in the Server Project

In the case of multiplayer simulation, it is helpful to know where each player is located when the server is running. The Synthetic Environment Builder offers the ability to create “position marks” for each player. Each position mark is a uniquely colored cube in the scene:

Top-down view of a synthetic living room environment with three colored cubes.

To add position marks:

Make sure the view of the main camera includes the entire scene. We recommend making it top-down. Add a Unity layer named Position Marks:

Unity "Tags & Layers" Window with "Position Marks" added as a layer.

Wrapping Up

At this point, your server project should be ready to go. To validate, run it in play mode and connect it to an MR application running in the Meta XR Simulator. Check whether the application has correct passthrough and scene information. We recommend using the Scene Manager sample.

Synthetic office scene with all scene entities highlighted in blue.

Then, you can build your server project and use it as a synthetic environment server.

Build Your Own Synthetic Environment Server

Overview

The Synthetic Environment Builder lets you use your own synthetic environment for mixed reality simulation in the Meta XR Simulator. It is a UPM package that you can import into your Unity project containing a synthetic environment, and turn that project into a Synthetic Environment Server.

Requirements

You will need to have a space setup already available in your headset. Once you download the latest OS, go to Settings > Physical Space > Space Setup > Set Up and follow the instructions to set up your space.

Adding Scene Information

1. Install the Meta XR Simulator package.
2. Install the SceneDataRecorder APK in the Meta XR Simulator package by going to \MetaXRSimulator\data_recorders. Install the same APK in your headset.
3. Grant storage permissions to record Scene data in your headset. Go to Settings > Apps and find the SceneDataRecorder Sample. Set storage to enabled.

Record your Room Capture

Run the application to initiate and complete space setup. Once you select Finish, the room will be recorded into a JSON file, scene_anchors_empty_room.json, in the headset. Move this file from the headset to the Meta XR Simulator project folder. To do this, run the following command in PowerShell:

adb pull /sdcard/scene_anchors_empty_room.json xrsim package folder\config\anchors

Test the new scene in the Meta XR Simulator

Enable the Scene API in the Meta XR Simulator by going to Settings and selecting Scene. Click the … button and select the new Scene JSON file. Click the Load button. Click the Exit Simulator button. If you are using Unity, the scene will now stop and the simulator window will correctly disappear. If you are developing using Unity, press Play in Unity. When the simulator window appears, you will see your new scene appearing in the simulator.

Note: When SES is running, you will see both the SES environment and the data from the scene JSON file at the same time. To see only the data from your scene JSON, do the following: exit the simulator, stop the SES server, and then press Play. You will now see the scene JSON data independent of the SES environment.

Build Your Own Synthetic Environment Server

Overview

The Synthetic Environment Builder lets you use your own synthetic environment for mixed reality simulation in the Meta XR Simulator. It is a UPM package that you can import into your Unity project containing a synthetic environment, and turn that project into a Synthetic Environment Server.

The default scene JSON files are located in the \MetaXRSimulator\config\anchors section of your Meta Simulator package.

Default file: scene_anchors_empty_room.json
Other file: scene_anchors_room_with_furnitures.json

As a user, you have the option to create a new scene JSON file. This can be done in two different ways.

Generate a new scene JSON file using SceneDataRecorder.apk. For instructions, see the Scene Data Recorder section.
Copy one of the existing JSON files and manually edit it to suit your needs. The rest of this section describes the layout of the document and what to update.

You may name your new JSON file with any title you deem appropriate. During runtime, you can upload the new file via the Settings in the Meta XR Simulator. See the end of this document for instructions on how to do this.

Note: All scene anchors specified in these JSON files have components set with units, which are based on the units in the app/engine. For example, a unit in Unity is one meter. This means, for example, if you alter the position of a ceiling by one, you are moving it by one meter.

Layout

The layout of the JSON file is separated into two sections, Anchors and Components.

The Anchors section lists the UUID of every anchor in the scene. The Components section lists details on each component belonging to each scene anchor. A component specifies an element of an anchor.

Components

Bounded2D
Locatable
RoomLayout
SemanticLabels
SpaceContainer

With this file, you can add or edit furniture, or change the layout of the scene.

Frequent tasks

Adding furniture

Let's add a Desk in the center of the room. In the noted sections, add the following elements to each list. You will notice the same UUID is used consistently throughout the file for this new furniture item. This UUID must be unique and formatted in a similar way to the other UUIDs you see in the file.

Add an item to the Semantic Labels list like so. Note the label is specified as Table.

{ "anchor": "6876ec4f-a078-48db-9acc-d6ddeeee9579", "enabled": true, "labels": [ "TABLE" ] } Labels have different images associated with it. The full list of semantic labels are:

TABLE
COUCH
FLOOR
CEILING
WALL_FACE
WINDOW_FRAME
DOOR_FRAME
STORAGE
BED
SCREEN
LAMP
PLANT
OTHER

Add to the Bounded2D list. Note that this is where we specify the dimensions of our object.

{ "anchor": "6876ec4f-a078-48db-9acc-d6ddeeee9579", "enabled": true, "rect2D": { "extent": { "height": 1.149999976158142, "width": 0.6000000238418579 }, "offset": { "x": -0.30000001192092896, "y": -0.5699999928474426 } } } Add to the Locatable list. Note that this where we specify the orientation and position of the object.

{ "anchor": "6876ec4f-a078-48db-9acc-d6ddeeee9579", "enabled": false, "pose": { "orientation": { "w": -0.06019899994134903, "x": 0.06019800156354904, "y": 0.7045429944992065, "z": 0.7045369744300842 }, "position": { "x": -0.66, "y": -0.2223919928073883, "z": 0.4180830121040344 } } } *Note: It is enabled false here but set to true automatically during runtime. This requires no action from the user.

The UUID 6876ec4f-a078-48db-9acc-d6ddeeee9579 created here now needs to be added to two additional sections.

SpaceContainer: spaces list.

"SpaceContainer": [ { "anchor": "a06dd9cc-b1a1-41be-88b7-d3f6d9060b7b", "enabled": true, "spaces": [ "8e35c573-3f10-4e86-8bdd-69494291b45c", "cb604449-6eb0-43cb-b3a4-9dfc49a35936", "cb604449-6eb0-43cb-b3a4-9dfc49a35936", "01a0341a-4d37-4804-9ace-baa126807f6d", "4f351907-1acc-46b2-87e8-a7acb346879c", "69f388a6-3f1b-428e-bc59-6151321efd11", "ded852e9-f81a-4a6a-a8b6-1e2f3b6929cb", "2e26c462-f4c8-4cf7-b258-a8ae195f75f3", "aec8e50d-8f63-4769-9518-683a9435b452", "6876ec4f-a078-48db-9acc-d6ddeeee9579" ] } ] Anchors: list at the top of the file.

"anchors": [ "01a0341a-4d37-4804-9ace-baa126807f6d", "2e26c462-f4c8-4cf7-b258-a8ae195f75f3", "4f351907-1acc-46b2-87e8-a7acb346879c", "69f388a6-3f1b-428e-bc59-6151321efd11", "8e35c573-3f10-4e86-8bdd-69494291b45c", "a06dd9cc-b1a1-41be-88b7-d3f6d9060b7b", "aec8e50d-8f63-4769-9518-683a9435b452", "cb604449-6eb0-43cb-b3a4-9dfc49a35936", "ded852e9-f81a-4a6a-a8b6-1e2f3b6929cb", "6876ec4f-a078-48db-9acc-d6ddeeee9579" ], When you have completed the above steps, save and run your project with Meta XR Simulator as before. A new table will now be appearing in your scene.

A new table will now be appearing in your scene after saving and re-running your Unity project.

Change layout of walls

To add or edit a new wall, you perform similar actions. Let's examine an existing wall, and note how we can add or edit a wall to change the layout.

To alter a wall, we must first identify the UUID of the wall. This can be done by going to line 266 of RoomLayout. Note the list of walls.

"RoomLayout": [ { "anchor": "a06dd9cc-b1a1-41be-88b7-d3f6d9060b7b", "ceiling": "ded852e9-f81a-4a6a-a8b6-1e2f3b6929cb", "enabled": true, "floor": "69f388a6-3f1b-428e-bc59-6151321efd11", "walls": [ "8e35c573-3f10-4e86-8bdd-69494291b45c", "cb604449-6eb0-43cb-b3a4-9dfc49a35936", "01a0341a-4d37-4804-9ace-baa126807f6d", "4f351907-1acc-46b2-87e8-a7acb346879c" ] } ], Let’s use the first wall UUID as an example: 8e35c573-3f10-4e86-8bdd-69494291b45c.

Changing the position of the wall

Find this anchor in the Locatable list (line 130). Increase the pose:position:z by 2.

Old value for z: 2.18451189994812
New value for z: 4.18451189994812

The wall will now appear like this:

Illustrating how the wall's position has changed after increasing pose:position:z.

Changing the height of the wall

Find this anchor in the Bounded2D list (line 16). Increase the rect2D:extent:height by 4.

Old value for height: 2.690000057220459
New value for height: 6.690000057220459

Illustrating how the wall's height has changed after increasing rect2D:extent:height.

Note that the height increase results in the wall expanding in both directions. Changing the Y position of the wall would keep it in line with the floor. We will see this in the next section.

Set the ceiling height

To raise the ceiling height, we will use the default scene JSON file scene_anchors_empty_room.json as an example. Note the original appearance of this room.

Illustrating the original appearance of the room prior to raising the ceiling height.

1. Find the ceiling anchor UUID. In the JSON file, go down to RoomLayout (line 266) to find our ceiling UUID: ded852e9-f81a-4a6a-a8b6-1e2f3b6929cb.
2. Find this anchor under the Locatable component section (line 181). Increase the pose:position:y value by 3. Old value: "y": 1.5568510293960571. New value: "y": 4.5568510293960571.
3. Increase the height of the walls to align with the ceiling. Return to RoomLayout, and note the UUIDs for the four walls. In each case, we will want to increase the wall's height and its Y position to match the new location of the ceiling.
4. Get the UUID of the first wall from RoomLayout. The first wall UUID is 8e35c573-3f10-4e86-8bdd-69494291b45c. This step is only necessary when using Unity.
5. Find this anchor under the Bounded2D component section (line 16). Increase the rect2D:extent:height by 3. Old value: "height": 2.690000057220459. New value: "height": 5.690000057220459.
6. Find this anchor under the Locatable component section (line 130). Increase the pose:position:y value by 1.5 (half of the height increase). Old value: "y": 0.21243900060653687. New value: "y": 1.71243900060653687. At this stage, your scene should look like this:

Illustrating how the scene should appear after completing the steps listed above.

7. Repeat steps 4-6 for each other wall. After this is done, the task is complete. Note the new appearance of the scene with a raised ceiling.

At this stage, the ceiling has been raised.

Test out your new Scene in Meta XR Simulator

To enable the Scene API in the Meta XR Simulator, you must first select the Settings option in the left menu. In the Settings window, select Scene.


Click the … button and select the new Scene JSON file. Click the Load button. Click the Exit Simulator button. Your Unity scene will now stop and the simulator window will correctly disappear. Press Play in Unity, and when the simulator window appears, you will see your new scene appearing in the simulator.

Build Your Own Synthetic Environment Server

Overview

The Synthetic Environment Builder lets you use your own synthetic environment for mixed reality simulation in the Meta XR Simulator. It is a UPM package that you can import into your Unity project containing a synthetic environment, and turn that project into a Synthetic Environment Server.

Showing the data forwarding window.

Instructions

Setting Up

Connect your Quest device to your PC through a USB cable. Install the APK in /path/to/MetaXRSimulator/data_forwarding_server on your device. Make sure the device stays on by turning off the proximity sensor from MQDH.

Connect

Launch the data forwarding server application in the headset. The app can be launched either through MQDH or from the in-headset app panel (filter by “unknown sources”). Make sure that the application is running and foregrounded, and leave the headset on the desk facing toward you. In the simulator, click the “connect” button and wait until the status indicator turns to “connected”. Then, move your controllers around and confirm that the simulated controllers are moving as well.

If the connection failed:

In a command line window, execute: adb forward tcp:33796 tcp:33796
Double-check that the headset application is running and foregrounded.

Controller Calibration

Press A+B+X+Y simultaneously to reset controller poses; the current physical controller poses will be used as the base poses for the virtual controllers. Alternatively, use the “Calibrate Controllers” button to reset controller poses after 3 seconds.

Disconnect

Click the “disconnect” button, and the status indicator should turn to “disconnected” immediately. Confirm that control of the simulated controllers is given back to the keyboard and mouse. If disconnection is taking too long, check that the headset application is running and foregrounded.
