
This is Twenty Milliseconds, a site documenting what works and what doesn't in virtual reality design.

Here's Why Speakers are an "Evolutionary Dead End" for Virtual Reality

Headphones are preferable to speakers because they can simulate sound in all directions, block out sound from external sources, and work better with top-of-the-line audio algorithms.

The most definitive statement Brian Hook made during his talk at Oculus Connect was that using speakers for virtual reality audio is an “evolutionary dead end”: users should wear headphones while using a headset.

Brian Hook at Oculus Connect

Headphones vs Speakers

Numbers on headphone use versus speaker use are tough to track down, but a survey on the GiantBomb forum reports that 53-87% of users use headphones, and 19-53% of users use speakers (with some switching between them depending on the situation).1

This presents virtual reality software designers with a difficult problem: audio must be localized differently depending on whether the user is wearing headphones or using speakers. If a user wearing headphones turns their head 90 degrees to the left, a sound that was previously entirely in the left ear should now be split evenly between the left and right ears.
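As a rough illustration of that head-tracked re-panning (a toy constant-power pan, not the spatialization the Oculus SDK actually performs; the function name and angle conventions here are made up for the example):

```python
import math

def pan_for_headphones(source_azimuth_deg, head_yaw_deg):
    """Constant-power stereo pan for a sound at source_azimuth_deg
    (0 = straight ahead, +90 = hard left), given the listener's
    head yaw in the same convention. Toy stand-in for real
    HRTF-based spatialization."""
    # Azimuth of the sound relative to where the head is pointing
    relative = math.radians(source_azimuth_deg - head_yaw_deg)
    # Map the relative angle onto a pan value in [-1, +1] (+1 = hard left)
    pan = max(-1.0, min(1.0, math.sin(relative)))
    # Constant-power pan law: gains trace a quarter circle,
    # so left_gain**2 + right_gain**2 == 1 at every pan position
    angle = (pan + 1.0) * math.pi / 4.0
    left_gain = math.sin(angle)
    right_gain = math.cos(angle)
    return left_gain, right_gain

# Sound hard left, head facing forward: nearly all left channel
left, right = pan_for_headphones(90, 0)

# Head turned 90 degrees left, now facing the sound: even split
left2, right2 = pan_for_headphones(90, 90)
```

The per-frame update is the key point: every time the head-tracking data changes, the gains must be recomputed, which is exactly the work that is unnecessary (for left/right balance, at least) with fixed speakers.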

Speaker Positioning Presents Problems

Fortunately, if someone is using speakers, you don’t have to update the left/right balance as the user moves their head. If a sound was coming from the left speaker and the user turns their head 90 degrees to the left, they are now “looking” straight at the sound, and no adjustment is needed. At a glance, this seems easier for developers who want to make a game, since they don’t have to update the sound source as the user’s head moves.

However, most users with speakers do not have surround sound, which is a problem when a sound is supposed to emanate from behind the user. “This is a significant issue, particularly with stereo only,” said Hook. People with computer speakers generally position the left and right speakers on or under a desk, so a sound that is supposed to come from behind the user is tricky to represent. “[It’s] hard to do!” said Hook.

No Isolation

Headphones not only deliver sound directly to your ears; they also block sound from the outside world. If you’re using speakers, the cacophony of friends’ conversations, wailing sirens, running laundry machines, and so on mixes with the sound from the speakers and undermines the sense of presence.

Extra Reflections

Compare the echoing voice of a coach in an empty gymnasium with a friend speaking in a room filled with carpets. Sound is constantly bouncing off of surfaces before it reaches your eardrums, in particular off your outer ears, shoulders, and torso.

The Oculus SDK uses a head-related transfer function (HRTF) to translate a sound at a fixed position in the world into the sound experienced at each of your ears. The HRTF used by Oculus is designed for headphone use and accounts for sound bouncing off of your head, shoulders, torso, and outer ears.
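In signal-processing terms, applying an HRTF amounts to convolving the source signal with a measured head-related impulse response (HRIR) for each ear. A minimal sketch, assuming made-up placeholder HRIR values rather than real measured data:

```python
import numpy as np

def apply_hrtf(mono, hrir_left, hrir_right):
    """Spatialize a mono signal by convolving it with the
    head-related impulse responses (HRIRs) for each ear.
    Toy illustration: real SDKs select and interpolate HRIRs
    as the head moves and run this per audio block in real time."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Placeholder HRIRs (hypothetical values): a direct tap plus a
# delayed, attenuated reflection; the right ear, being farther
# from this source, receives a quieter copy.
hrir_l = np.array([1.0, 0.0, 0.3])
hrir_r = np.array([0.5, 0.0, 0.15])

# A single impulse as the source signal
stereo = apply_hrtf(np.array([1.0, 0.0, 0.0, 0.0]), hrir_l, hrir_r)
```

The “double down” problem Hook describes follows directly from this picture: the headphone HRIRs already encode the reflections off your body, so playing the result through speakers lets the room and your body apply those filters a second time.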

If you use speakers, Oculus renders the sound for the left and right speakers with those reflections already accounted for; then, as the sound travels from the speakers to your ears, it bounces off of those surfaces again, causing a “‘double down’ on head related effects”, according to Hook.

Crescent Bay prototype, with attached headphones.


These factors combine to make speakers a poor environment for virtual reality audio, which is why Oculus announced integrated headphones in its latest Crescent Bay prototype. Other headset developers (and users!) should take note.

I’ll post the full contents of Hook’s talk when it becomes available online.

1. If anyone has better data, please send it my way!