
Enhancing Mixed Reality Experiences with Mesh API and Depth API

Savio Menifer · Oct 9, 2024 · 5 min read

With the public release of SDK v60, developers now have access to powerful tools like Mesh API and Depth API for both Unity and Unreal, which can significantly enhance the way virtual and real-world elements blend in mixed reality applications. These APIs allow you to enrich interactions and bring a sense of realness to MR experiences by enabling more accurate layering, masking, and collision detection. But what do we mean by making an app experience feel believable?

Believable MR Experiences: The Core Challenge

When virtual objects behave unnaturally or do not interact convincingly with the real world, it breaks the illusion of mixed reality. For example, if a virtual object is rendered in front of a physical object that it should be behind, it causes a visual “hiccup,” instantly disrupting the user's immersion. The goal of MR development is to ensure that virtual and physical objects interact seamlessly, mirroring the physics and behaviors of the real world.

In early AR and MR experiences, a common problem was that digital content was always rendered in front of real-world objects. This led to unrealistic interactions: a virtual object that should be hidden behind a physical object remained fully visible, reducing immersion. APIs like the Mesh API and Depth API significantly mitigate these issues by providing tools to more accurately place and mask digital content within the physical environment.

Introducing the Mesh API

The Mesh API gives developers access to a geometric representation of the environment, known as the Scene Mesh. Meta Quest 3’s Space Setup feature automatically generates this Scene Mesh by scanning room elements such as walls, floors, and ceilings, which can then be queried in your app using the Scene API.

Compared to Quest 2, Scene Mesh on Quest 3 offers a far more detailed and granular representation of the physical environment. This high-fidelity environment model opens up a world of creative possibilities in MR apps:

1. Generating Accurate Physics: With Scene Mesh, you can create realistic physics-based interactions like bouncing balls, shooting projectiles, or throwing virtual objects that accurately collide with real-world surfaces.

2. Navigation and Obstacle Avoidance: The Scene Mesh can help apps understand where physical obstacles exist, enabling virtual characters or objects to avoid collisions and behave naturally within the user’s real space.

3. Precise Content Placement: Developers can now allow users to attach and place virtual objects with much higher accuracy, enhancing creative freedom and user engagement.
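To make the first of these use cases concrete, here is a minimal, engine-agnostic sketch of how physics against the Scene Mesh works under the hood. In a real app the mesh arrives through the Unity or Unreal integration (typically as a collider), so this is not SDK code; it models the Scene Mesh as a plain list of triangles and tests a thrown object's path against them with the standard Möller–Trumbore ray/triangle intersection.

```python
# Illustrative sketch only: the Scene Mesh is modeled as a list of
# triangles; a projectile's path is a ray tested against each one.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.
    Returns the hit distance t along `direction`, or None on a miss."""
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:              # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return t if t > eps else None  # only hits in front of the origin

def first_hit(origin, direction, scene_triangles):
    """Closest intersection of a ray with the scene mesh, or None."""
    hits = [t for tri in scene_triangles
            if (t := ray_hits_triangle(origin, direction, tri)) is not None]
    return min(hits, default=None)

# A virtual ball dropped from 1 m above a floor triangle hits it at t = 1.0.
floor = [((0, 0, 0), (2, 0, 0), (0, 0, 2))]
t = first_hit((0.2, 1.0, 0.2), (0.0, -1.0, 0.0), floor)
# t -> 1.0
```

In practice the engine's physics system does this for you once the Scene Mesh is exposed as a collider; the sketch just shows why a detailed mesh translates directly into accurate bounces and collisions against real walls and furniture.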


Depth API: Creating Seamless Occlusion

In MR, immersion suffers when virtual objects do not interact with real-world ones in expected ways. Imagine a virtual pet walking behind a couch, but instead of disappearing behind the furniture, it remains fully visible. This instantly breaks the believability of the experience. Here’s where the Depth API comes in.

The Depth API on Meta Quest 3 provides a real-time depth map representing the physical environment as seen from the user’s point of view. This depth data enables dynamic occlusion, which allows virtual objects to be hidden behind physical ones in real-time. Whether it’s a virtual character moving behind a real-world object or a fast-moving projectile, Depth API ensures that objects are correctly masked based on depth.
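The idea behind dynamic occlusion is a simple per-pixel depth comparison. On device, the Depth API supplies a depth texture that is sampled in a shader, so the following is a conceptual sketch rather than the SDK's render path: both the real-world depth map and the virtual object's fragment depths are modeled as small 2-D grids of distances in meters, and a virtual pixel is drawn only where it is closer to the viewer than the real surface.

```python
# Conceptual sketch of depth-based occlusion, not SDK shader code.

def occlusion_mask(real_depth, virtual_depth):
    """Per-pixel visibility: True where the virtual fragment is closer
    to the viewer than the real surface, so it should be drawn."""
    return [[v is not None and v < r
             for r, v in zip(real_row, virt_row)]
            for real_row, virt_row in zip(real_depth, virtual_depth)]

# A couch face 1.5 m away hides a virtual pet standing 2.0 m away,
# while the pet stays visible against a wall 4.0 m away.
real = [[4.0, 1.5],
        [4.0, 1.5]]
pet  = [[2.0, 2.0],
        [None, 2.0]]   # None = no virtual content at that pixel
mask = occlusion_mask(real, pet)
# mask -> [[True, False], [False, False]]
```

The real implementation runs this comparison per fragment on the GPU every frame, which is what lets occlusion stay correct even for fast-moving objects and people walking through the scene.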

Some key use cases for Depth API include:

- Immersive Social and Gaming Experiences: Physical objects and users can move freely in a room without disrupting the surrounding 3D virtual elements.

- Believable Hand Interactions: Users can interact with virtual objects using their hands, which can be accurately occluded when they are in front of virtual objects, creating a more natural interaction.

Combining Mesh API and Depth API for Maximum Effect

To build truly realistic MR applications, it’s essential to use both Mesh API and Depth API together, as they complement each other. While Depth API handles real-time occlusion, Mesh API provides a detailed model of the room for accurate physics and content placement. 

Key Considerations When Using Mesh API and Depth API

Both Mesh API and Depth API are designed to leverage the broader Presence Platform capabilities for creating dynamic interactions between virtual content and users’ physical environments. While Depth API can be used independently, using it alongside Scene API for better understanding of the environment will yield richer and more believable MR experiences.


Conclusion: Elevating the MR Experience

The Mesh API and Depth API, available in SDK v60, give developers the power to create more believable and immersive MR experiences. By providing accurate physical representations and seamless occlusion, these APIs help virtual objects behave more naturally within real-world environments, reducing the chances of breaking user immersion.

As MR continues to evolve, the blending of virtual and physical worlds will be critical to delivering experiences that feel magical and lifelike. With the right tools, developers can push the boundaries of what's possible and make digital interactions indistinguishable from reality.
