
Mixed Reality Gets Real

Jan. 23, 2024
Spatial computing is all about augmented, virtual, and mixed reality. What stage are we at with the technology, and will the latest advances move it into prime time?

This article is part of the TechXchange: Mixed Realities.

What you’ll learn:

  • Why augmented, virtual, and mixed reality (AR/VR/MR/XR) are finally ready for prime time.
  • Remaining challenges to adoption.
  • Where you can use this technology.

 

We’re not yet at the level of Star Trek’s holodeck, Ready Player One’s virtual-reality (VR) environment, or anything that can rival The Matrix. But mixed reality (MR), which includes augmented and virtual reality (AR/VR), along with XR, or eXtended Reality, the umbrella term for all of them, is finally at the stage where these capabilities can move into the mainstream. This isn’t to say it will be as ubiquitous as the smartphone. However, it will start to be the cutting-edge technology applied in areas such as real-estate sales and automobile design (weigh in on mixed reality by taking our poll at the end of the article).

I recently covered the Consumer Electronics Show (CES), where XR was almost, but not quite, as ubiquitous as artificial intelligence (AI). OK, AI was everywhere and XR was just in a lot of places. Anyway, I also recently acquired a Meta Quest 3 headset (Fig. 1) and took it on a plane; more on that later. Some of the feedback here is based on my heads-on (hands-on doesn’t seem quite right) view of XR using my headset.

One of the big announcements was Qualcomm’s Snapdragon XR2 Plus Gen 2 mixed-reality platform. It supports 4.3K resolution per eye and can handle more than a dozen cameras concurrently. Typically, a pair gives the headset a pass-through view of the real world while the others provide tracking information about the wearer's body and the surrounding environment. This chip, which will be used in upcoming headsets, addresses one of my issues: resolution.

The current crop of reasonably priced XR headsets, including the Quest 3, has a resolution on the order of 2K per eye. I think the ideal point will be 8K per eye, but 2K is sufficient for now and 4K will probably suffice for a few years.
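To put those per-eye numbers in context, a rough pixels-per-degree (PPD) calculation helps. This is a minimal sketch, not from the article: the ~100-degree horizontal field of view and the ~60-PPD figure often cited for 20/20 acuity are assumptions, and the per-eye pixel widths are illustrative round numbers.

```python
# Rough angular pixel density for VR headsets: a minimal sketch.
# Assumptions (not from the article): ~100-degree horizontal FOV per eye,
# ~60 pixels per degree (PPD) as the commonly cited 20/20-acuity threshold.
FOV_DEGREES = 100
RETINAL_PPD = 60

def pixels_per_degree(horizontal_pixels: int, fov_degrees: float = FOV_DEGREES) -> float:
    """Average pixels per degree across the horizontal field of view."""
    return horizontal_pixels / fov_degrees

# Illustrative per-eye widths for "2K", "4.3K", and "8K" class panels.
for label, width in [("2K/eye", 2000), ("4.3K/eye", 4300), ("8K/eye", 7680)]:
    ppd = pixels_per_degree(width)
    print(f"{label}: ~{ppd:.0f} PPD ({ppd / RETINAL_PPD:.0%} of the acuity threshold)")
```

By this crude average, a 2K-per-eye panel delivers roughly a third of the oft-cited acuity threshold, which is why 8K per eye is a plausible end point.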

But resolution isn’t the only differentiator. Lower latency, higher frame rates, hand and eye tracking, body tracking, and environment sensing, plus better pass-through for VR headsets, are making major differences in visual feedback.

Spatial computing is the latest buzzword. It encompasses AR/VR/MR/XR as well as apps and content. Digital twins are likely to come up in any conversation about spatial computing, as XR is one way of presenting and manipulating these virtual constructs.

Why XR is Hard to Explain to the Uninitiated

The number of people who have really used MR is small compared to the general population. The cost of headsets, the lack of apps and content, and motion sickness have limited XR adoption. However, lower prices and improvements in all of these areas are changing the game, so to speak.

Unfortunately, that means most people are unfamiliar with XR, and it’s something that needs to be experienced over the long term to really appreciate. Popping on a headset for five minutes is a nice way to introduce the technology, but it takes extended use with a variety of applications to get a feel for what it can do. I’ve talked with many people who have never used it who declare it either great or worthless, yet would never try the technology themselves.

One example is the virtual workplace, where you can have any number of large-screen displays available for use. You can also collaborate with other remote users in this environment. MR headsets can open a window around a keyboard and mouse in case you don’t touch-type without looking at the keyboard.

Interestingly enough, power is the limiting factor with most headsets: Their batteries often run a couple of hours at most. However, an extra battery pack or a wired connection (not recommended for gaming or physical exercise) can provide almost unlimited use. I often don a headset for hours at a time, although I still use my multimonitor system for most work right now. If I didn’t have all of my large-screen monitors, I would probably switch to a headset.

The other aspect that requires longer usage is experiencing a full 3D environment. Having earbuds or a quiet room with built-in audio is the way to provide an immersive experience that’s unattainable with a conventional 2D device like a PC or smartphone. One can literally walk through a museum or an outdoor environment and come away with an experience that’s superior to watching a 2D rendition. It’s not the same as being there, but it’s very close.

One considerable difference is the increased use of hand and body tracking, enabling gesture recognition instead of relying on handheld controllers. There are limitations, because a typical hand controller has multiple buttons and joysticks. Controllers can be more efficient when a lot of controls are in play, such as in gaming, but are less of a requirement for other applications.

Users also need to understand the uses and limitations of each system. AR puts the real world first and overlays the virtual. VR puts the virtual first; if it has a pass-through mode, then it’s really an MR system. Here, the quality of the real-world presentation comes into play, as it will be a subset: essentially VR first, real world second.

XR Targeting Corporations and Movies

One of the companies I have talked with is Lenovo (watch Virtual Reality on the Next Level). Its ThinkReality VRX headset is complemented by a development and deployment framework that targets companies looking to deploy a lot of systems (Fig. 2).

2. Lenovo’s ThinkReality VRX headset and framework target companies that deploy many devices.

Lenovo is working with application companies like Engage, which provides applications and tools to support everything from interactive learning to digital-twin creation and interaction.

One thing I’ve viewed a lot of is 180- and 360-degree movies. These are videos recorded using cameras that capture such wide fields of view. The challenge with them is twofold: resolution and perspective.

Such 3D videos are often labeled by resolution. Viewing a 4K 3D video on a headset with 2K-per-eye resolution doesn’t necessarily deliver a high-res 2K image, though, because the video’s pixels have to cover the entire viewing area, not just what you’re looking at. If you want high-quality imaging, you need something on the order of 16K and up. Of course, higher resolution translates into more bandwidth and storage.
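The arithmetic behind that is simple: an equirectangular 360-degree video spreads its horizontal pixels across a full 360 degrees, but the headset only shows the slice inside its field of view. A minimal sketch, assuming a ~100-degree horizontal FOV and nominal "4K"/"8K"/"16K" widths (neither figure comes from the article):

```python
# How many horizontal pixels of a 360-degree equirectangular video
# actually land inside the headset's field of view.
# Assumption (not from the article): ~100-degree horizontal FOV.
FOV_DEGREES = 100

def pixels_in_view(video_width: int, fov_degrees: float = FOV_DEGREES) -> int:
    """Horizontal video pixels that fall within the visible slice."""
    return round(video_width * fov_degrees / 360)

for label, width in [("4K", 3840), ("8K", 7680), ("16K", 15360)]:
    print(f"{label} 360 video: ~{pixels_in_view(width)} px across the view")
```

Under these assumptions, a nominal 4K source leaves only about 1,070 pixels across the view, well short of a 2K-per-eye panel; it takes roughly a 16K source to exceed it, which matches the "16K on up" rule of thumb.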

Perspective is the other challenge with 3D recordings. You’re essentially locked to the position of the recording device. In many cases, this is a reasonable way to work, but moving your head to a new position, not just pivoting it, moves the entire video rather than changing the viewpoint.

V-Nova and PresenZ take a different approach to 3D movies: They generate all of the possible views within a box that’s about a meter on each side. You can move your head while the scene stays fixed in space, which means you can essentially look around objects. This provides a more natural presentation from the viewer’s perspective and essentially eliminates motion sickness. The technology is new and would require adoption by 3D videographers, but it has lots of potential.

People should not shy away from 3D videos because of these limitations. Just be aware that quality and presentation are more limited by the recording than the headset.

As for 2D videos, XR delivers big-screen advantages even when you’re sitting in the middle coach seat of your favorite airplane. VR tends to give the best experience, and apps like 4XVR and Bigscreen on the Quest 3 emulate a theatrical experience. Just make sure to turn off exterior tracking before trying this on a Quest 3 in a dark airplane.

It’s All About the Apps

I’ve worked with a few VR apps on the Quest 3, but I’ve used four of them quite a bit, and they highlight the advantages and challenges of using MR.

The first is Arkio, a collaborative spatial design tool (Fig. 3). It’s available on a number of platforms, not just the Meta Quest, and can be used to design buildings and other structures. The tool is a cooperative environment, allowing multiple people to be in the same metaverse.

3. Arkio is a spatial design tool for individuals or groups.

I tried it a while ago, and it’s improved significantly over time. Arkio has become more responsive and easier to use. It’s possible to use the tool in a mixed-reality mode, where windows expose the outside world.

The next two are games. Asgard’s Wrath 2 is a single-player role-playing game (RPG) (Fig. 4). You play both a character and a god, providing two different perspectives. It provides a remarkable environment to explore, with massive deserts, mountains, and large structures. Looking up and around shows the vastness of the virtual environment.

4. Asgard’s Wrath 2 is a role-playing game.

The Quest 3 doesn’t use ray tracing, but the graphics are very good—they’re more than enough to be enjoyable and challenging. Audio plays a big part in the immersion. Watching a 2D version doesn’t do it justice.

Dungeons of Eternity is a Dungeons and Dragons-style first-person shooter (Fig. 5). Up to three people can play in a cooperative game, battling monsters and finding treasure. It doesn’t present the grandeur of Asgard’s Wrath 2, but it’s so much fun to play that it doesn’t matter.

5. Dungeons of Eternity is a first-person, Dungeons and Dragons-style game that can include up to three individuals.

These three apps have similar 2D counterparts that I’ve played in the past. However, the difference between the 2D and VR versions is significant and hard to describe if you haven’t worked with both. The VR gaming apps require more movement than their PC or game-console counterparts, which tend to require finger and wrist control, but little else.

I wasn’t going to call out any of the exercise apps, since I got plenty of exercise with the gaming apps. I will say that they’re engaging, and you do get a workout. Most are subscription-based, so you might not save money over a gym membership, but you will save travel time.

One thing to note about the last three apps is that you need some open space. It’s possible to use many VR apps sitting down. However, if you’re going to interact with the virtual environment in a more natural fashion, you need to be able to move around without hitting a wall or other objects.

The Lonely Metaverse

Meta’s Horizon Workrooms is one of a number of apps designed to provide a metaverse-style environment where people can meet. Of course, the app can also be used solo to provide a workplace with multiple displays tied to your PC.

I spent some time in a public room where a few other people were visiting, but there was little or no interaction. Though everyone has an avatar, it’s not like interacting in a video conference. You have positional information, and the audio interaction is fine. Still, avatars don’t provide the visual cues that come from a real face and body.

Eye and head tracking provide more realistic avatar interaction. Additional sensing support will track more of the body, in particular legs and feet. Right now, an avatar’s gait usually isn’t synchronized with a person’s actual movement.

The challenge in getting people to interact in a metaverse is defining who will be interacting and why. Small-group interactions with games like Dungeons of Eternity are easy, but gathering a small group to watch a movie or interact in a large-scale environment like a virtual building or city requires a reason for people to be there. The actual act of getting into the environment is just a click away with an MR headset.

While there’s talk of large-scale metaverse environments, I think small MR groups will be where interaction is most successful. Getting together to look at a car or building design, or to work on another collaborative effort, is already a common interaction that will only increase as costs drop and performance ramps up.

Read more articles in the TechXchange: Mixed Realities.

About the Author

William G. Wong | Senior Content Director - Electronic Design and Microwaves & RF

I am Editor of Electronic Design, focusing on embedded, software, and systems. As Senior Content Director, I also manage Microwaves & RF and work with a great team of editors to provide engineers, programmers, developers, and technical managers with interesting and useful articles and videos on a regular basis. Check out our free newsletters to see the latest content.

You can send press releases for new products for possible coverage on the website. I am also interested in receiving contributed articles for publishing on our website. Use our template and send to me along with a signed release form. 

Check out my blog, AltEmbedded, on Electronic Design, as well as my latest articles on this site, which are listed below.


I earned a Bachelor of Electrical Engineering at the Georgia Institute of Technology and a Master’s in Computer Science from Rutgers University. I still do a bit of programming, using everything from C and C++ to Rust and Ada/SPARK, as well as some PHP programming for Drupal websites. I have posted a few Drupal modules.

I still get hands-on with software and electronic hardware. Some of this can be found in our Kit Close-Up video series. You can also see me in many of our TechXchange Talk videos. I am interested in a range of projects, from robotics to artificial intelligence.

