What Stands Between Us and the Mass Adoption of VR in Our Daily Lives?
One of the challenges for VR to become mainstream technology is hardware and its form factor. Image quality is far from photo-realistic, and devices are bulky and "socially unacceptable" for most people. Improving image quality comes down to rendering 3D digital content at high (8K+) resolution. That, however, requires serious GPU power, which is why today's premium VR headsets are tethered to a powerful gaming PC that does the heavy lifting. Standalone VR headsets are beginning to evolve, but their image quality is still far from perfect.
The other challenge is the lack of quality 3D digital content – it is still relatively expensive to create. This is a classic deadlock: each problem waits for the other to be solved first, and the bottom line is that both have persisted for years.
5G for Cloud Rendering
The dawn of 5G can play a significant role in high-quality 3D rendering on demand. Farm rendering in the cloud has been available for more than 15 years, but before 5G a VR experience could not rely on remote rendering because of the latency (also known as "lag"). With its near-zero latency, 5G will allow 8K+ 3D content to be rendered on demand at powerful render farms and served to end users instantly. This, in turn, will let hardware vendors remove the GPU requirement from the equation and focus on lenses, displays and ergonomics.
Zero Latency is Critical for Streamed Content
If a VR experience does not respond immediately to your body movements, you get the so-called VR sickness (or nausea). A nausea-free experience is generally considered one whose motion-to-photon latency stays under roughly 16 milliseconds. With a specified latency of 5–8 milliseconds at most, a 5G network can comfortably distribute a 100+ frames-per-second streamed VR experience to any viewer device.
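To see why those numbers add up, here is a back-of-the-envelope sketch in Python. The 16 ms comfort threshold and the 8 ms worst-case 5G latency come from the figures above; how the remaining time is split between rendering, encoding and decoding is an illustrative assumption, not a measurement.

```python
# Back-of-the-envelope motion-to-photon budget for cloud-rendered VR.
# The 16 ms comfort threshold and the 8 ms worst-case 5G round trip
# come from the article; everything else is illustrative.

COMFORT_BUDGET_MS = 16.0    # nausea-free motion-to-photon target
NETWORK_5G_WORST_MS = 8.0   # 5G round trip, worst case by specification

def remaining_render_budget(network_ms: float) -> float:
    """Time left for rendering, encoding and decoding one frame."""
    return COMFORT_BUDGET_MS - network_ms

def frame_time_ms(fps: float) -> float:
    """Per-frame time in milliseconds at a given refresh rate."""
    return 1000.0 / fps

print(f"Render + codec budget over 5G: {remaining_render_budget(NETWORK_5G_WORST_MS):.1f} ms")
print(f"Frame time at 100 fps: {frame_time_ms(100):.1f} ms")
```

Even in the worst case, 8 ms remain for the render farm and video codec, and a 100 fps stream only needs a new frame every 10 ms – tight, but inside the comfort budget.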
Image quality can also be greatly improved by applying the so-called foveated rendering technique. The approach uses a precise eye-tracking system to detect the current area of focus and renders the image at high quality only within that area. This greatly reduces GPU load – after all, why render a full 360° 8K image when the human eye sees in high definition only within a roughly 5° foveal circle?
The HTC Vive Pro Eye already provides this functionality, but its bulky form factor remains. We need foveated rendering applied to a device like the Oculus Quest. With its newly announced hand-tracking feature, the Quest removes the need for controllers, lowering the entry-level barrier for adoption.
Breakthrough in Lenses
A recent breakthrough could soon revolutionize almost every optical instrument produced today, including AR & VR headsets. Scientists have developed an electronic lens that works better than the human eye.
The flat lens design uses tiny nano-structures to focus light, which allows it to focus the entire visible light spectrum at a single point. By contrast, traditional lenses need multiple stacked elements to achieve the same feat, which is why they get so bulky. Flat lenses will further help reduce the form factor, bringing us closer to a socially acceptable device that people are not self-conscious about wearing in public.
Quality 3D Content is Still Expensive to Create
The other reason why not everyone has a VR device at home is that 3D models and environments are still too expensive to produce at scale. That expense makes VR app development costly as well. Today, only a few top games sell well on the marketplace. Consumer apps, besides media players, are lacking, and chances are that if you are not a gamer you will see little value in using a VR headset in your daily life.
On the business side, one of our struggles has been finding quality yet mobile-friendly environments for use-case simulation. Most of the pre-built assets on the Unity Asset Store are not of the desired quality, while most assets outside the Unity ecosystem come in 3D Studio Max or similar formats and require a lot of custom 3D-artist work to adapt for Unity.
One innovation that seems to have huge potential is Nvidia's 3D-world generation from training a neural network on 2D-videos.
Using a conditional generative neural network trained on existing videos, the team was able to render new 3D environments. This AI breakthrough will allow developers and artists to create interactive 3D virtual worlds for automotive, gaming or virtual reality by training models on real-world footage, lowering the cost and time it takes to create virtual worlds.
A New Quantum Leap is Coming - But When?
VR is here today, but it is still far from the massively adopted technology we have all been awaiting for the past several years. The good news is that for each of the barriers mentioned above, there are prototypes and research that point to a solution.
5G plus foveated rendering can deliver the photo-realistic quality that will be the breakthrough.
Super-tiny electronic lenses that reduce the device form factor are coming.
Photo-realistic avatars and full-body tracking – like what Facebook has already shown – will fully unlock the social potential of VR.
Real-time environment scans – Facebook demonstrated this with its AI Habitat project – together with AI networks like Nvidia's will allow beautiful 3D worlds to be created with minimal effort.
When this future will arrive is still unclear, but it is a question of when, not if. In the meantime, VR for business is already here, and specific business processes can benefit greatly from VR innovation even though the technology is not yet massively adopted.