Likely User Interface Mistakes in the Coming Years of VR

Summary: Early mobile development was characterized by poor usability and applications ill-suited to mobile devices. The same pattern is likely to recur in VR development.

Many people are curious about how the VR landscape will evolve. The hardware is finally getting to a point where immersive experiences are possible. We can’t say for sure how inputs and ways of interacting with VR devices will evolve, but we can get an idea.

Specifically, let’s look at the mistakes people made designing interfaces for other platforms, and see if we can find common trends that apply to the developing area of virtual reality usability.

Early interfaces on smartphones were bad

In 2007 Apple launched the iPhone and ushered in a new field of user interfaces designed for a screen that fits in your pocket. Nielsen Norman Group ran usability tests in 2009 (two years after the launch of the iPhone), with simple tasks like “Your friend is flying in from Munich at noon, see if her flight is on time”, “Find information about this specific bottle of wine” or “Find a good Indian restaurant nearby”. They found a success rate of 59%; in other words, 41% of users on mobile devices failed to accomplish the task they were trying to do.

The failures fall into three main categories:

It took several years for companies to realize that people would try to do everything on a mobile device that they do on a desktop, and to invest resources in appropriate new user interfaces for mobile devices.

Applying these lessons to VR

So what can early usability problems with phones tell us about virtual reality? At the least, they let us make guesses about things people will try that won’t work.

Content designed for other platforms won’t work

It’s tempting, when you have a hit first-person game on Xbox or PC, to try to just port it to Oculus. I can picture an exec saying, “Hell, why not, the tech is experimental and it wouldn’t cost too much to just port the assets.” Hence the popularity of drivers like VorpX, which attempt to let games built for 2D screens run in a VR headset.

This approach won’t work (well, anyway) for a variety of reasons.

A central server won’t work

The maximum allowable latency in a VR system is about 20 milliseconds before the delay begins affecting the user experience and causing motion sickness. This makes it virtually impossible to send a request to a central server in response to user input and render the result in the next frame.

The nearest CDN node to my home is in San Jose, CA, a distance of 48 miles by car. The round-trip time for a ping to this datacenter is 50 ms, or 2.5 frames in VR. In theory, a round trip could go as low as 0.7 milliseconds over a fiber optic cable running in a straight line from my machine to the datacenter. So if the line latency is very low and the server can respond quickly, maybe there is hope of using the network in the future. Yet for the vast majority of users, networks will not be quick enough in the near future to support server round trips without causing motion sickness.
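
To make that arithmetic concrete, here is a small Python sketch of the same back-of-the-envelope calculation. The 48-mile distance and the 50 ms ping are the measurements above; the only added assumption is that light in fiber travels at roughly 200,000 km/s (about two thirds of its speed in a vacuum).

    # Back-of-the-envelope latency math for a round trip to a nearby datacenter.
    FIBER_KM_PER_MS = 200.0      # assumed fiber propagation speed, km per millisecond
    FRAME_BUDGET_MS = 20.0       # motion-to-photon budget before discomfort sets in

    distance_km = 48 * 1.609     # 48 miles from home to the San Jose datacenter
    best_case_rtt_ms = 2 * distance_km / FIBER_KM_PER_MS
    measured_rtt_ms = 50.0       # observed ping time

    print("best-case round trip: %.2f ms" % best_case_rtt_ms)      # ~0.77 ms
    print("measured round trip: %.0f ms = %.1f VR frames"
          % (measured_rtt_ms, measured_rtt_ms / FRAME_BUDGET_MS))  # 2.5 frames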

Either guns, attacks, and the like are going to need to be slow, Dune-style, or game creators are going to need to get creative; otherwise players are going to suffer from lag in a VR system.

Apps with dropped frames won’t work

On a similar note, there are lots of technical and performance reasons an app may not manage to render a frame every 20 milliseconds. Managers may demand features that compromise the game’s performance, or the team building the game may forget that most people run machines far less powerful than the custom-built rigs in the office.

The same way bloated pages caused slow downloads on mobile devices, bloated games may make users sick, and no one wants to play a game that makes them feel physically ill.
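
One cheap precaution, sketched below in Python, is to track frame times against the 20 ms budget during development so that missed frames show up in a log long before players feel them. The render_frame() function here is a hypothetical stand-in for an engine’s per-frame work, not a real API.

    import time

    FRAME_BUDGET_S = 0.020    # 20 ms per frame, per the latency discussion above

    def render_frame():
        """Hypothetical stand-in for the engine's per-frame simulation and draw."""
        time.sleep(0.005)

    dropped = 0
    total_frames = 500
    for _ in range(total_frames):
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        if elapsed > FRAME_BUDGET_S:
            dropped += 1      # this frame missed the budget and would be felt as judder

    print("dropped %d of %d frames (%.1f%%)"
          % (dropped, total_frames, 100.0 * dropped / total_frames))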

We probably need new types of inputs

In the early going, VR applications may try using a keyboard and mouse for input, or a console controller. These devices have the obvious benefit that they already exist and many people own them. The jury is still out, but the odds are that these types of input won’t work as well when you move the view by moving your head and need new ways of interacting with what you see.
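
As one illustration of what a head-driven alternative might look like, here is a minimal Python sketch of gaze-based selection, in which the head’s forward vector acts as the pointer and a target is chosen by dwelling on it. The GazeTarget class, the 5-degree cone, and the one-second dwell time are illustrative assumptions, not any headset SDK’s actual API.

    import math

    DWELL_SECONDS = 1.0    # how long the user must look at a target to select it
    CONE_DEGREES = 5.0     # how close the gaze must be to the target's direction

    def angle_between_degrees(a, b):
        """Angle between two unit vectors, in degrees."""
        dot = sum(x * y for x, y in zip(a, b))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

    class GazeTarget:
        """Illustrative selectable object, identified by its direction from the viewer."""
        def __init__(self, name, direction):
            self.name = name
            self.direction = direction    # unit vector (x, y, z) in the viewer's space
            self.gaze_started_at = None

        def update(self, head_forward, now):
            """Call once per frame; returns True when the dwell time has elapsed."""
            if angle_between_degrees(head_forward, self.direction) <= CONE_DEGREES:
                if self.gaze_started_at is None:
                    self.gaze_started_at = now
                return now - self.gaze_started_at >= DWELL_SECONDS
            self.gaze_started_at = None
            return False

A menu built this way needs no controller at all: each frame the app passes the tracked head orientation to update(), and an item activates once the user has looked at it for a second.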

How to design a usable application in the face of uncertainty

There are obvious traps and several precautions a game developer would be wise to take: