Crafting mixed reality experiences for the future
Toronto Metropolitan University, 2022
My master’s research project at Toronto Metropolitan University set the perfect stage for me to expand my product design horizons beyond screen-based devices. I explored augmented and virtual reality independently, only to realize the shortcomings of each technology. This led me to build a mixed reality system that merges aspects of AR and VR into a single experience.
RESEARCH PAPER
TIMELINE
April 2022 - December 2022
TOOLS USED
Unity
Figma
Visual Studio Code
Blender
MY ROLE
Product Design
AR/VR Development
Academic Research
THE PROBLEM
When comparing both technologies, Augmented Reality boasts a wide array of applications across industries such as tourism, fashion, commerce, and education. However, the current medium for experiencing AR, typically smartphones, falls short of providing a truly immersive experience.

On the other hand, Virtual Reality offers complete immersion, but its applications are predominantly centered around gaming and entertainment. While there are some specialized training and educational uses, the fully immersive nature of VR often limits its ability to address real-world problems, since users lose their sense of physical space.

This is where Mixed Reality experiences come into play, offering the potential to blend the strengths of both AR and VR, delivering highly immersive experiences across a wide range of applications.
PROJECT GOALS
Design a mixed-reality system that can offer components of both virtual and augmented reality under a single experience.
Elevate present-day XR retail experiences by offering better immersion, stronger brand recognition, and higher purchase conversions.
IMPLEMENTATION PROCESS
The project was built for the Meta Quest 2 headset, as its passthrough feature allows users to step outside the virtual environment and see a real-time view of their surroundings. Although the headset primarily uses passthrough as a safety feature, I used it as a way to seamlessly integrate real and virtual environments, which eventually became the foundation of my project.
UPDATE
The Quest 2 was one of the most advanced and accessible XR devices during the development of this project. Devices such as the Quest Pro and Apple Vision Pro came to market a few months later with much better MR features.
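Passthrough on the Quest 2 can be switched on and off from a script, which is what makes this blending of real and virtual possible. Below is a minimal sketch of that wiring in Unity, assuming the Oculus Integration SDK's OVRPassthroughLayer component; the PassthroughSetup class, its fields, and the exact camera settings are illustrative rather than the project's actual code.

```csharp
using UnityEngine;

// Minimal passthrough setup sketch (assumes the Oculus Integration SDK and that
// passthrough support is enabled on the OVRManager). An OVRPassthroughLayer set
// to "Underlay" renders the real-world camera feed behind the virtual scene
// wherever the eye camera's background is transparent.
public class PassthroughSetup : MonoBehaviour
{
    [SerializeField] private OVRPassthroughLayer passthroughLayer; // Underlay layer on the OVRCameraRig
    [SerializeField] private Camera centerEyeCamera;               // CenterEyeAnchor camera

    public void SetPassthrough(bool showRealWorld)
    {
        // Showing or hiding the layer is what switches between AR and VR.
        passthroughLayer.enabled = showRealWorld;

        if (showRealWorld)
        {
            // Clear to a fully transparent color so the real-world feed
            // shows through behind the virtual objects (AR mode).
            centerEyeCamera.clearFlags = CameraClearFlags.SolidColor;
            centerEyeCamera.backgroundColor = Color.clear;
        }
        else
        {
            // Fall back to the skybox for the fully virtual environment (VR mode).
            centerEyeCamera.clearFlags = CameraClearFlags.Skybox;
        }
    }
}
```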
In order to develop and deliver the intended MR experience, the project was divided into multiple stages, each one of them serving a core part of the product development process.

These stages are discussed in detail in the following sections:
Brainstorming and Prototyping
After going through a lot of different ideas, I settled on creating an immersive shopping experience for Apple products, simply because of the recent interest Apple had shown in XR (and sure enough, Apple announced the Vision Pro a few months after this :D). Since Apple’s core business is selling physical products like iPhones and MacBooks, an MR experience would offer a much more realistic shopping experience than merely viewing images on screen-based devices.

Once I was content with the idea, I began crafting rough sketches and mockups in Figma to map out the entire user journey. This phase provided valuable insights, extending beyond UI challenges to focus on interactions in XR, seamlessly transitioning between AR and VR, and determining how users would navigate UI elements in a 3D space.

Here's a glimpse of the initial prototype of the experience created in Figma:
Users start in a fully virtual environment.
A user interaction triggers passthrough, and the environment switches from VR to AR.
Users can now explore the product in their real-world environment.
Interacting with other options leads users to view more information about the product.
Passthrough: Toggling between AR and VR
The core of this MRP was to create a system that allows users to seamlessly switch between being completely immersed in a virtual environment (VR) and being in their own environment while seeing virtual objects overlaid (AR). Hence, the majority of the time and effort was dedicated to this stage of the process, where different solutions were built and tested to see which one could most accurately simulate a Mixed Reality experience for the users.

Three solutions were created and tested, leading to the final version:
1. Pinch interaction
Simple and intuitive
Manually triggered by pinching the fingers, so there is no seamless switching on its own
2. Double pinch interaction
More realistic: overlapping the passthrough toggle with an object grab creates the illusion of a seamless transition while still being an active user interaction
Both hands needed to pinch with perfect timing, resulting in a trigger success rate of only 20%
3. Collision interaction
A collision between two objects triggers passthrough: users enter AR mode when they lift the laptop off the table, and returning it to the table seamlessly transitions them back into the fully immersive virtual environment
100% success rate and a much smoother toggle experience
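The winning collision-based trigger boils down to a small script on the table surface: when the product's collider leaves or re-enters the table's trigger volume, passthrough is toggled. Here is a hedged sketch of that idea, reusing the SetPassthrough helper sketched earlier; the class, tag, and field names are illustrative.

```csharp
using UnityEngine;

// Sketch of the collision-based passthrough trigger: lifting the product off the
// table switches to AR (passthrough on), placing it back returns to VR.
// Assumes the table has a trigger collider, the product has a Rigidbody and a
// collider, and PassthroughSetup is the hypothetical helper shown above.
public class TablePassthroughTrigger : MonoBehaviour
{
    [SerializeField] private PassthroughSetup passthrough;
    [SerializeField] private string productTag = "Product"; // tag on the grabbable gadget

    private void OnTriggerExit(Collider other)
    {
        // Product lifted off the table -> reveal the real environment (AR).
        if (other.CompareTag(productTag))
            passthrough.SetPassthrough(true);
    }

    private void OnTriggerEnter(Collider other)
    {
        // Product placed back on the table -> return to the virtual store (VR).
        if (other.CompareTag(productTag))
            passthrough.SetPassthrough(false);
    }
}
```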
3D Environment and User Interface Design
While designing the environment wasn't the primary focus of this MRP, it played a pivotal role in delivering a complete and visually immersive experience for user engagement. The environment structure was created from scratch in Unity, with 3D assets imported from various marketplaces and edited using Blender.
Two distinct user interfaces were created in Unity using the initial Figma designs as the reference point.
Onboarding UI, which provided users with basic information to begin the main experience.
Info card UI, overlaid on individual products so users could learn more about them and progress through the experience.
PROBLEM
Users would roam around the 3D virtual environment only to realize they could no longer see their UI to progress through the experience.
SOLUTION
A camera-follow script was added to the UI elements, keeping the interface positioned in front of the user and facing the camera at all times, so there were no difficulties reading or interacting with it.
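In practice, this kind of camera-follow behaviour is a simple billboard script: each frame, the panel repositions itself a fixed distance in front of the headset and rotates to face it. A minimal sketch is shown below; the class name, offsets, and smoothing values are illustrative, not the project's exact implementation.

```csharp
using UnityEngine;

// Keeps a world-space UI panel in front of the user and facing them, so the
// interface never drifts out of view while they roam the 3D environment.
public class UIFollowCamera : MonoBehaviour
{
    [SerializeField] private Transform headCamera;   // CenterEyeAnchor / main camera
    [SerializeField] private float distance = 1.5f;  // metres in front of the user (illustrative)
    [SerializeField] private float followSpeed = 4f; // smoothing factor

    private void LateUpdate()
    {
        // Target position: a fixed distance along the camera's forward direction.
        Vector3 targetPosition = headCamera.position + headCamera.forward * distance;
        transform.position = Vector3.Lerp(transform.position, targetPosition, followSpeed * Time.deltaTime);

        // Rotate the panel so its front face points toward the user
        // (world-space canvases read correctly when oriented this way).
        transform.rotation = Quaternion.LookRotation(transform.position - headCamera.position);
    }
}
```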
Interaction Design
Based on the literature I studied, hand-tracking interactions emerged as a more intuitive and user-friendly option for both AR and VR scenarios.
1. UI Interactions: Interacting with the UI was made possible with the help of ray-casting. Pointing cursors were added to the UI canvas that would follow users' index finger movements and update the cursor location in real time, similar to a mouse pointer on a computer. Users simply needed to pinch their thumb and index finger together to interact with any UI element, such as buttons.
2. Hand Grab Interactions: To interact with any of the 3D game objects in the virtual scene such as tech gadgets, users can just reach out to them with their hands, grab the product, and then lift the product up. This grabbing interaction, combined with the passthrough toggle script, created the core experience. For added realism, users had to use both hands to lift gadgets, mirroring real-life actions. Additionally, sound feedback was introduced to signal successful object pick-up and drop-off, reinforcing that their interaction was successful.
3. Hand Pose Detection: With the pinch interaction already in use to interact with the UI elements, another method was required to trigger a UI element associated with a specific product in order to learn more about it. After experimenting with various approaches like double pinching and automatic triggers after a set time, I went forward with hand pose detection.

A script was written that detects whenever the user makes a “Thumbs Up” pose and toggles the UI element on. Users can then interact with the UI using the pointing and pinching interactions, and toggle the UI element off using the “Thumbs Down” pose.
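For reference, a thumbs-up check like this can be approximated with a simple heuristic over the tracked hand bones: the four fingers are curled toward the wrist while the thumb points upward. The sketch below uses the Oculus Integration SDK's OVRSkeleton bone names; the ThumbsUpDetector class, thresholds, and event wiring are illustrative assumptions rather than the project's exact script.

```csharp
using System.Linq;
using UnityEngine;
using UnityEngine.Events;

// Simplified sketch of "Thumbs Up" pose detection used to toggle the info card.
// Heuristic: the non-thumb fingertips sit close to the wrist (curled fingers)
// while the thumb tip is clearly above the wrist (thumb extended upward).
public class ThumbsUpDetector : MonoBehaviour
{
    [SerializeField] private OVRSkeleton skeleton;         // hand skeleton from the OVRCameraRig
    [SerializeField] private float curlThreshold = 0.07f;  // metres, fingertip-to-wrist distance (illustrative)
    [SerializeField] private UnityEvent onThumbsUp;        // e.g. show the product info card

    private bool wasThumbsUp;

    private void Update()
    {
        if (skeleton == null || !skeleton.IsDataValid) return;

        Transform wrist = GetBone(OVRSkeleton.BoneId.Hand_WristRoot);
        Transform thumbTip = GetBone(OVRSkeleton.BoneId.Hand_ThumbTip);

        // Fingers are curled when every non-thumb fingertip is close to the wrist.
        bool fingersCurled = new[]
        {
            OVRSkeleton.BoneId.Hand_IndexTip, OVRSkeleton.BoneId.Hand_MiddleTip,
            OVRSkeleton.BoneId.Hand_RingTip, OVRSkeleton.BoneId.Hand_PinkyTip
        }.All(id => Vector3.Distance(GetBone(id).position, wrist.position) < curlThreshold);

        // Thumb counts as "up" when its tip is noticeably above the wrist.
        bool thumbUp = (thumbTip.position.y - wrist.position.y) > 0.05f;

        bool isThumbsUp = fingersCurled && thumbUp;
        if (isThumbsUp && !wasThumbsUp)
            onThumbsUp.Invoke(); // fire once when the pose is first detected

        wasThumbsUp = isThumbsUp;
    }

    private Transform GetBone(OVRSkeleton.BoneId id) =>
        skeleton.Bones.First(b => b.Id == id).Transform;
}
```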
RESULTS
Due to time limitations, the experience ends at the “Buy Now” button, but ideally the full checkout experience would continue in mixed reality. Users could use the seamless VR-to-AR toggle again if they need to look at their credit card details or consult someone nearby before purchasing the product.
CONCLUSION
The project was presented to various academic and industry professionals at my Master’s showcase event and received overwhelming love and appreciation.

The cherry on top was having my project featured in The Creative School at Toronto Metropolitan University’s Top 5 newsletter. Here is a link to the full article.

Moreover, I was particularly thrilled to observe striking similarities between my project and Apple’s approach to the Vision Pro. The ability to seamlessly switch between AR and VR on the headset, as well as features like pinch interactions, affirmed the soundness of the design decisions I had made.

If someone from Apple is reading this, please hire me 😬. Let's collaborate on creating outstanding MR experiences for Vision Pro. Jokes aside, I'm genuinely interested in extending this project further and making an MR version of the Apple Store app for Vision Pro.