Augmented and Virtual Reality
October 18, 2017 | By ImagineAR

How do you choose between Augmented and Virtual Reality?

Many companies come to us for guidance on building an AR or VR solution, but there is a lot of confusion between the two. We are often asked to create an AR solution when VR makes more sense, and vice versa. So what should you use, and when? When you boil it down, each technology has its own strengths, and the type of solution you want to create will dictate whether AR or VR makes sense. Both technologies are still emerging, but VR has an undeniable head start because of its accessibility: VR technology is simply cheaper to acquire today, and for less than $100 you can transform your phone into a VR machine. Still, there are instances where AR is the right path because the experience breaks down in VR. This discussion focuses on head-worn AR and VR solutions and is a synopsis of the kind of thinking we do when guiding our customers.

The type of solution you want to create will dictate whether AR or VR makes sense.

From a development perspective, AR and VR rely on similar technologies (e.g. Unity and Unreal Engine). Despite the shared tooling, the complexity of development is not an apples-to-apples comparison, because the approach to AR is different from VR. VR-based solutions rely on a complete virtualization of your environment: you can be transported anywhere in the universe instantly, but you must supply that environment. AR-based solutions rely on your physical environment for context, adding to it where it makes sense. This is the core question that will drive you to AR or VR: do you need the physical environment for your solution to make sense? If you do not, then VR wins. For example, if you wanted to transport someone to a famous city and let them explore it, either AR or VR could work, but VR would win that battle because VR hardware is cheaper to acquire and you would have a larger audience for your experience. What if you wanted to train someone on how to inspect an airplane? You could do this in VR, but the experience is arguably better in AR. If the trainee can walk the physical airplane and be guided through it in AR, they are likely to retain more of the experience because of the physical context: as they walk the plane, virtual data points and interactive information appear in context on the aircraft. The trainee does not need to learn how to move through the environment as they would in VR, because in AR they are physically walking around the aircraft. In VR, the trainee would first need to understand how to control the environment so their virtual self can move through the experience.
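To make that last difference concrete, here is a minimal sketch in plain Python. The Vector3 and VRLocomotion types are made up for illustration (real projects would use an engine such as Unity or Unreal); the point is simply that in AR the tracked head pose is the trainee's position, while a VR experience has to layer an artificial locomotion mechanic, such as teleporting, on top of the tracked pose.

```python
from dataclasses import dataclass, field

@dataclass
class Vector3:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

    def __add__(self, other: "Vector3") -> "Vector3":
        return Vector3(self.x + other.x, self.y + other.y, self.z + other.z)

# AR: the trainee physically walks, so the tracked head pose IS their
# position in the experience. No extra locomotion system is needed.
def ar_user_position(tracked_head_position: Vector3) -> Vector3:
    return tracked_head_position

# VR: the environment is entirely virtual, so the experience must add a
# locomotion mechanic (teleporting here) on top of the tracked head pose.
@dataclass
class VRLocomotion:
    playspace_offset: Vector3 = field(default_factory=Vector3)

    def teleport(self, delta: Vector3) -> None:
        """Move the virtual play space in response to a controller action."""
        self.playspace_offset = self.playspace_offset + delta

    def user_position(self, tracked_head_position: Vector3) -> Vector3:
        return tracked_head_position + self.playspace_offset

if __name__ == "__main__":
    head = Vector3(0.5, 1.7, 0.2)          # head pose from the device's tracking
    print(ar_user_position(head))          # AR: just walk to the next wing section

    vr = VRLocomotion()
    vr.teleport(Vector3(12.0, 0.0, 0.0))   # VR: teleport down the virtual fuselage
    print(vr.user_position(head))
```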

AR wins when you need the physical environment to make the experience work.

Relying on physical context does come with complexity. It is not trivial to create a quality experience that weaves in the real world. Consider an experience that demonstrates the safety capabilities of a vehicle. In VR, everything, including the vehicle, is digital. In AR, the vehicle is real and everything else is digital. In this example, we want to demonstrate how far the rear-bumper distance sensors can see. This might be depicted as animated waves flowing away from the sensors to some relative distance off the rear bumper. In VR, this animation would be placed off the rear bumper of the digitized car at design time, in the design tools. In AR, the difficulty is that at design time you will likely not know the scale and orientation at which to render the 3D objects. In the real world, the vehicle may be sitting on a platform at an angle, which makes it impossible to position 3D objects relative to physical objects at compile time. For this reason, in AR the same animation needs to be placed off the real car at runtime. This fundamental difference makes effective AR solutions much harder to create, because each 3D asset requires consideration of how it will be placed into the environment. We did this in our HoloLens Showroom app. These considerations can make AR more difficult to implement than VR, so you need to weigh the value the physical environment adds against the effort required to make the experience effective.
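Here is a rough sketch of that runtime placement, in plain Python with a made-up Pose type rather than any particular engine API. In VR the offset below could simply be baked into the scene at design time; in AR it has to be composed with the pose of the physical car that the headset detects at runtime.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    """Position plus a yaw rotation (radians). Enough for this illustration."""
    x: float
    y: float
    z: float
    yaw: float = 0.0

# The animation's placement relative to the car, in car-local (right, up, forward)
# coordinates: 2.5 m behind the car's origin (off the rear bumper), 0.3 m up.
# In VR this value is simply baked into the scene at design time.
SENSOR_WAVE_LOCAL_OFFSET = (0.0, 0.3, -2.5)

def place_relative_to_car(car_pose: Pose, local_offset: tuple) -> Pose:
    """Compose a car-space offset with the car's world pose.

    In AR the car_pose is only known at runtime (the physical vehicle might be
    on a rotated, elevated platform), so this composition cannot happen at
    design/compile time the way it can in VR.
    """
    right, up, forward = local_offset
    cos_y, sin_y = math.cos(car_pose.yaw), math.sin(car_pose.yaw)
    # Rotate the local offset by the car's yaw, then translate by its position.
    world_x = car_pose.x + right * cos_y + forward * sin_y
    world_z = car_pose.z - right * sin_y + forward * cos_y
    world_y = car_pose.y + up
    return Pose(world_x, world_y, world_z, car_pose.yaw)

if __name__ == "__main__":
    # Hypothetical pose of the real showroom car as reported by head tracking:
    detected_car = Pose(x=3.2, y=0.4, z=-1.0, yaw=math.radians(25))
    print(place_relative_to_car(detected_car, SENSOR_WAVE_LOCAL_OFFSET))
```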

There is a lot of disparity in VR control input.

A current challenge with VR is knowing what type of control input will be available to interact with the experience. Control input is whatever you can physically manipulate to affect the VR experience, and the many VR hardware implementations out there vary widely in what they offer. The lowest common denominator is your gaze and a simple tap: VR can determine where you are looking and use a tap to make selections. Some VR implementations can pair over Bluetooth with external devices such as joysticks and simple controllers that accept clicks and swipes, while others integrate the controls right on the headset. Handheld devices that let you see your hand positions within the VR experience enable more advanced control input, and these more complex setups can require additional sensors to orient you and your hands together in a cohesive way. In the airplane training example above, consider that you might not have physical access to an airplane; the airplane needs to be digitized along with the training material. How can an effective training experience be created? Will you have control over the training hardware, or will this be bring-your-own-device (BYOD)? If it is BYOD, then you need to simplify your interactions to fit the limited control input, which will detract from the training experience. If you have complete control over the hardware, then you can build a VR experience curated to the advanced control input you will provide. There are many choices and standards are still emerging, so designing for the least amount of control input required can broaden your audience.
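One way to handle that disparity, sketched here in Python with hypothetical class and device names rather than a specific SDK, is to write the experience against the lowest common denominator (a pointing direction plus a single select action) and treat richer controllers as optional implementations of the same interface.

```python
from abc import ABC, abstractmethod
from typing import Optional, Tuple

Vector3 = Tuple[float, float, float]

class SelectionInput(ABC):
    """The minimum an experience needs: where the user is pointing, plus a select."""

    @abstractmethod
    def pointing_direction(self) -> Vector3: ...

    @abstractmethod
    def select_pressed(self) -> bool: ...

class GazeTapInput(SelectionInput):
    """Lowest common denominator: head gaze plus a simple tap (BYOD-friendly)."""
    def __init__(self, device):
        self.device = device  # hypothetical handle to the headset's tracking/tap events

    def pointing_direction(self) -> Vector3:
        return self.device.head_forward()

    def select_pressed(self) -> bool:
        return self.device.tapped()

class TrackedControllerInput(SelectionInput):
    """Richer hardware you control: a tracked hand controller with a trigger."""
    def __init__(self, controller):
        self.controller = controller  # hypothetical tracked-controller handle

    def pointing_direction(self) -> Vector3:
        return self.controller.forward()

    def select_pressed(self) -> bool:
        return self.controller.trigger_down()

def update_experience(input_source: SelectionInput) -> Optional[str]:
    """The experience only ever sees the abstraction, so it runs on either setup."""
    if input_source.select_pressed():
        return f"select along {input_source.pointing_direction()}"
    return None
```

Because the experience depends only on the abstraction, a BYOD build and a curated-hardware build can share the same interaction code; richer hardware simply swaps in a different input implementation.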

AR may still be the right answer even when the physical environment is not required.

There are cases where, even when physical objects are not at play, AR triumphs over VR. AR is grounded in the physical world, and the sensory input we receive when walking around and moving among 3D objects is arguably better than the completely simulated environment provided by VR. You may have heard reports that VR can make people sick: the sensory input you receive does not quite match what your brain is expecting, and this can cause problems for some people. Handheld control input that renders your hand positions into the environment can help ground a person in the virtual experience. But when you find yourself relying on more and more complex control input to support your VR experience, AR becomes more relevant. AR relies on your body movement to move within the experience rather than swipes and clicks on a controller, so not all control input needs to be virtualized; you rely on the physical movements and interactions you naturally make.

How do you decide?

So how do you decide? It depends. If your solution depends on BYOD, then VR will triumph; the answer will not be so obvious once AR becomes more accessible. If you control the hardware the experience will run on, you need to ask whether the physical environment is required to make the experience effective. If it is, then AR is the solution. If it is not, then it is likely a VR solution, and you then need to ask whether the experience you are creating requires complex control input. If it does not, then VR definitely starts to make sense. If it does, then consider how physical movement in the environment, which is more natural than using control input, can simplify the types of interactions you need to support.
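That reasoning is essentially a small decision tree. The sketch below, in plain Python, just restates the questions above; it is a summary of the reasoning, not a formula that replaces judgment about your specific experience.

```python
def recommend(byod: bool,
              needs_physical_environment: bool,
              needs_complex_input: bool) -> str:
    """Summarize the AR-vs-VR questions above as a simple decision tree."""
    if byod:
        # Today, VR is the only option a broad audience can reasonably be assumed to own.
        return "VR"
    if needs_physical_environment:
        return "AR"
    if not needs_complex_input:
        return "VR"
    # Complex interactions: consider whether natural physical movement in AR
    # can replace some of that control input and simplify the experience.
    return "consider AR"

print(recommend(byod=False, needs_physical_environment=False, needs_complex_input=True))
```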

A scenario we are exploring for AR.

We have been working through a problem that makes more sense to us in AR than VR, even though no physical objects are being leveraged. We are building a HoloLens app that enables people to build molecules. Rendering the molecule in true 3D provides real benefits; remember those organic chemistry classes with all those plastic pieces for building molecules? Our brains are spatially oriented and can understand certain problems much better when dealing with them in 3D. This is a problem that either AR or VR could serve, but AR excels when you have complex interactions combined with normal movement. We need, for example, to understand where to place an atom on a molecule, which is about as complex an interaction as you will find in AR or VR. We want people focused on atom placement, not on moving around the molecule. If this were a VR-based solution, we would need to work out how to move a person effectively AND where to place an atom on the molecule. In AR, we can focus purely on atom placement.
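To illustrate why the interaction stays simple in AR (the person just walks around the hologram, and gaze plus a tap does the rest), here is a sketch in plain Python, with made-up data structures, of picking the atom a person is gazing at using a basic ray-sphere test. Placement then needs only a single select action.

```python
import math
from dataclasses import dataclass
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Atom:
    element: str
    x: float
    y: float
    z: float
    radius: float = 0.2   # rendered radius in meters (made up for this sketch)

def gazed_atom(origin: Vec3, direction: Vec3, atoms: List[Atom]) -> Optional[Atom]:
    """Return the nearest atom hit by the gaze ray (simple ray-sphere test)."""
    ox, oy, oz = origin
    dx, dy, dz = direction                      # assumed to be normalized
    best, best_t = None, math.inf
    for atom in atoms:
        # Vector from the ray origin to the sphere center.
        cx, cy, cz = atom.x - ox, atom.y - oy, atom.z - oz
        t = cx * dx + cy * dy + cz * dz         # projection of that vector onto the ray
        if t < 0:
            continue                            # atom is behind the viewer
        # Squared distance from the sphere center to the ray.
        dist_sq = (cx * cx + cy * cy + cz * cz) - t * t
        if dist_sq > atom.radius ** 2:
            continue                            # gaze ray misses this atom
        hit_t = t - math.sqrt(atom.radius ** 2 - dist_sq)
        if hit_t < best_t:
            best, best_t = atom, hit_t
    return best

if __name__ == "__main__":
    molecule = [Atom("C", 0.0, 1.5, 2.0), Atom("O", 0.3, 1.5, 2.0)]
    # Hypothetical head pose: standing at the origin, looking straight ahead (+z).
    target = gazed_atom(origin=(0.0, 1.5, 0.0), direction=(0.0, 0.0, 1.0), atoms=molecule)
    print(target)   # the carbon atom; a tap would then attach the new atom to it
```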

There is a lot to consider when making this choice. Are you looking to build an AR or VR solution? We can help. Contact us at curious@interknowlogy.com.