
This is Twenty Milliseconds, a site documenting what works and what doesn't in virtual reality design.

Learnings from Unreal Engine Integration & Demos

Engine Integration Overview

Things had to render in stereo - no native support in Unreal at the time. One projection for each eye. Some diagrams of stereoscopic rendering - the image has to shift left & right for each eye by 1/2 of the IPD (interpupillary distance).

Conversion between the IPD and Unreal units - there's a world-to-meters scale. The same scale is also used for positional tracking.
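As a rough standalone illustration of that math (not Unreal API): with the default world-to-meters scale of 100, a typical 64 mm IPD becomes a 3.2-unit offset per eye.

```cpp
// Standalone sketch (not engine code): converting a physical IPD into a
// per-eye camera offset in world units via the world-to-meters scale.
#include <cstdio>

float EyeOffsetWorldUnits(float IPDMeters, float WorldToMeters)
{
    // Each eye sits half the IPD away from the head's center.
    return 0.5f * IPDMeters * WorldToMeters;
}

int main()
{
    const float IPDMeters     = 0.064f; // ~64 mm, a typical adult IPD
    const float WorldToMeters = 100.0f; // Unreal default: 1 world unit = 1 cm

    // Left eye shifts by -Offset, right eye by +Offset along the view's right axis.
    const float Offset = EyeOffsetWorldUnits(IPDMeters, WorldToMeters);
    std::printf("per-eye offset: %.1f world units\n", Offset); // prints 3.2
    return 0;
}
```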

Easiest approach for implementing stereo rendering was to repeat draw calls for each eye. Doubles the number of draw calls but is super easy to implement. First thing that they tried.
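A standalone sketch of what that brute-force approach amounts to - the types and functions here are illustrative stand-ins, not Unreal's renderer:

```cpp
#include <vector>

// Stand-in types for illustration; the real engine passes far more state.
struct Matrix4 { float m[16]; };
struct Mesh    { /* vertex/index buffers, material, etc. */ };

struct EyeView
{
    Matrix4 View;        // per-eye view matrix (offset by half the IPD)
    Matrix4 Projection;  // per-eye projection
    int     ViewportX;   // left or right half of the render target
};

void DrawMesh(const Mesh& M, const Matrix4& View, const Matrix4& Proj, int ViewportX)
{
    // Issue one draw call with this eye's matrices and viewport.
}

// Naive stereo: loop the whole scene once per eye, doubling the draw calls.
void RenderStereo(const std::vector<Mesh>& Scene, const EyeView Eyes[2])
{
    for (int Eye = 0; Eye < 2; ++Eye)
    {
        for (const Mesh& M : Scene)
        {
            DrawMesh(M, Eyes[Eye].View, Eyes[Eye].Projection, Eyes[Eye].ViewportX);
        }
    }
}
```

The appeal is that the scene traversal doesn't need to know anything about stereo; the cost is exactly double the CPU draw-call overhead.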

A better approach is to do double rendering in the shader - moves the work into the GPU. Much more invasive to the render pipeline.

Distortion - the Rift optics stretch the image a little bit so you get a wider FOV. You need to apply a filter that counter-distorts the rendered image so it looks correct through the distorting optics. Content in the middle of the screen looks slightly better. The SDK takes care of this for you.
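A rough sketch of the idea rather than the SDK's actual math: a radial polynomial scales each texture coordinate based on its distance from the lens center. The coefficients below are made-up placeholders, not real Rift values.

```cpp
#include <cmath>
#include <cstdio>

struct UV { float u, v; };

// Radial distortion: scale each coordinate by a polynomial in r^2, where r is
// the distance from the lens center in normalized units. K1/K2 are placeholders.
UV DistortUV(UV In, float CenterU, float CenterV, float K1, float K2)
{
    const float du = In.u - CenterU;
    const float dv = In.v - CenterV;
    const float r2 = du * du + dv * dv;
    const float scale = 1.0f + K1 * r2 + K2 * r2 * r2;
    return { CenterU + du * scale, CenterV + dv * scale };
}

int main()
{
    // Sample a point near the edge of one eye's image.
    UV Out = DistortUV({0.9f, 0.5f}, 0.5f, 0.5f, 0.22f, 0.24f);
    std::printf("distorted uv: %.3f, %.3f\n", Out.u, Out.v);
    return 0;
}
```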

You can scale the screen percentage - the resolution the scene is rendered at relative to the display - trading visual quality against render time. Higher screen percentages provide a supersampling/antialiasing benefit.

Tradeoff between throughput and latency in frame buffering. r.FinishCurrentFrame forces the video card to finish rendering the current frame before moving on to the next one - less buffering, lower latency, lower throughput.
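Both knobs are console variables; a sketch of setting them from code via UE4's console-variable API (the values are example choices, not recommendations from the talk):

```cpp
// Illustrative only: applying the two render trade-offs mentioned above.
#include "HAL/IConsoleManager.h"

void ApplyVRRenderTweaks()
{
    // Render the scene at a higher resolution before distortion for extra
    // antialiasing headroom, at the cost of GPU time.
    if (IConsoleVariable* ScreenPercentage =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.ScreenPercentage")))
    {
        ScreenPercentage->Set(130.0f);
    }

    // Wait for the GPU to finish each frame before starting the next one:
    // lower latency, less buffering, lower throughput.
    if (IConsoleVariable* FinishCurrentFrame =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.FinishCurrentFrame")))
    {
        FinishCurrentFrame->Set(1);
    }
}
```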

Discussion of time warping as a tool to hide latency - rotating the already-rendered image right before you present it to the user, based on the latest head orientation. Time warping only works over short intervals.
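A minimal sketch of the core idea, independent of any SDK: take the head orientation the frame was rendered with, take the freshest orientation just before present, and re-project the image by the delta between them. The quaternion type here is a stand-in.

```cpp
#include <cstdio>

// Minimal quaternion just for the illustration.
struct Quat
{
    float w, x, y, z;

    Quat Conjugate() const { return { w, -x, -y, -z }; }

    Quat operator*(const Quat& q) const
    {
        return {
            w * q.w - x * q.x - y * q.y - z * q.z,
            w * q.x + x * q.w + y * q.z - z * q.y,
            w * q.y - x * q.z + y * q.w + z * q.x,
            w * q.z + x * q.y - y * q.x + z * q.w
        };
    }
};

// Time warp's corrective rotation: how far the head has turned since the frame
// was rendered. The compositor re-projects the image by this delta before scan-out.
Quat TimeWarpDelta(const Quat& RenderedPose, const Quat& LatestPose)
{
    return LatestPose * RenderedPose.Conjugate();
}

int main()
{
    Quat Rendered = { 1.0f, 0.0f, 0.0f, 0.0f };        // identity
    Quat Latest   = { 0.9998f, 0.0f, 0.0175f, 0.0f };  // ~2 degrees of yaw since render
    Quat Delta    = TimeWarpDelta(Rendered, Latest);
    std::printf("delta: w=%.4f y=%.4f\n", Delta.w, Delta.y);
    return 0;
}
```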

Content of Demos

What are we going to make? Oculus wanted something “epic”, with explosions and a cinematic feel.

Near field stuff looks amazing in stereo vision

Needed to reuse existing assets to be efficient with time. Reused a robot, some explosions & effects. The demo involved the robot shooting at everyone. It ended up not being that great of a decision because they had to adjust it all anyway for VR.

2 people working on the content for 5 weeks.

Couch Knights barely hit 75fps; they needed to rip off the bandaid and get the content rendering at 90fps - a frame budget of roughly 11.1 ms instead of 13.3 ms.

Stereo problem - a 2D sprite looks very 2D when viewed in stereo - needed to make transformations to make it look like a 3D object.

Designing around problems - played the explosions in slow motion, which makes the most of short animations. 6 seconds of animation became 2 minutes.
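The talk doesn't say how the slow motion was implemented; one plausible way in Unreal is global time dilation, and 0.05 is simply the ratio implied by 6 seconds stretching to 2 minutes:

```cpp
// Hypothetical example: slowing the whole world down so a 6-second effect
// plays out over 2 minutes (6 s / 120 s = 0.05 time dilation).
#include "Kismet/GameplayStatics.h"

void EnterSlowMotion(UObject* WorldContextObject)
{
    // Scales delta time for everything in the world, including particles.
    UGameplayStatics::SetGlobalTimeDilation(WorldContextObject, 0.05f);
}

void ExitSlowMotion(UObject* WorldContextObject)
{
    UGameplayStatics::SetGlobalTimeDilation(WorldContextObject, 1.0f);
}
```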

Sprite particles looked like sprite particles. They hold up well for small things & perfectly spherical things; others looked not so good. Ended up using real geometry - a highly tessellated sphere. Thick smoke was pretty difficult; used a tessellated “sausage tube” to make the thick smoke.

Used a second layer in the material so it didn’t look like a single surface.

Attached “dynamic” blob shadows to the characters’ feet so they moved with the character.
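The notes don't say how the blob shadows were built; one plausible UE4 approach is a decal attached to a foot socket. The socket name and decal material below are placeholders, not details from the talk.

```cpp
// Hypothetical sketch: attach a simple blob-shadow decal to a character's foot
// so it follows the character as it moves.
#include "GameFramework/Character.h"
#include "Kismet/GameplayStatics.h"
#include "Components/DecalComponent.h"

void AttachBlobShadow(ACharacter* Character, UMaterialInterface* BlobShadowMaterial)
{
    UGameplayStatics::SpawnDecalAttached(
        BlobShadowMaterial,
        FVector(16.f, 32.f, 32.f),           // thin box projecting onto the floor
        Character->GetMesh(),
        TEXT("foot_l"),                      // placeholder socket name
        FVector::ZeroVector,
        FRotator(-90.f, 0.f, 0.f),           // point the decal straight down
        EAttachLocation::KeepRelativeOffset,
        0.f);                                // 0 = never expires
}
```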

Profiled with realtime stats - overlaid the draw time on the screen. First try ran at 30 frames/second.
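The notes don't name the exact overlays, but Unreal's usual ones are the "stat fps" and "stat unit" console commands (frame, game-thread, draw-thread, and GPU times). A small sketch of enabling them from code:

```cpp
#include "GameFramework/PlayerController.h"

// Turn on the usual on-screen timing overlays. "stat unit" shows frame,
// game-thread, draw (render-thread) and GPU times; "stat fps" shows frame rate.
void ShowFrameTimings(APlayerController* PC)
{
    PC->ConsoleCommand(TEXT("stat fps"));
    PC->ConsoleCommand(TEXT("stat unit"));
}
```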

Particles hogged resources and increased render time. Toggling particles off saved lots of time. If the Draw and GPU times move in parallel when you make changes, you are render bound - you need to improve rendering.

CPU profiler in Unreal - lets you profile the render thread and see what the CPU is doing.
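One common workflow (an assumption, not stated in the talk) is to record a stats capture to a file and open it afterwards in the Session Frontend profiler:

```cpp
#include "GameFramework/PlayerController.h"

// Record a stats capture that can be opened in Unreal's profiler UI later.
// Capturing a few seconds around the slow moment is usually enough.
void CaptureProfile(APlayerController* PC, bool bStart)
{
    PC->ConsoleCommand(bStart ? TEXT("stat startfile") : TEXT("stat stopfile"));
}
```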

Found a really expensive effect that was drawing 2,600 times and updating every frame - just killed it.

Design Lessons

Elemental VR - people don’t move like they’re in a video game - had to cut movement speed to 1/3 of a normal video game’s.
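As an illustration of the scale of that change (not code from the talk): in Unreal it's a one-line tweak to the character movement component, where 1/3 of the default 600 unit/s walk speed is 200.

```cpp
#include "GameFramework/Character.h"
#include "GameFramework/CharacterMovementComponent.h"

// Illustrative only: drop the walk speed to roughly a third of a typical
// game's pace so VR locomotion feels closer to how people actually move.
void SlowDownForVR(ACharacter* Character)
{
    UCharacterMovementComponent* Move = Character->GetCharacterMovement();
    Move->MaxWalkSpeed = 200.f;  // ~1/3 of the default 600 Unreal units/s
}
```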

Slopes made people feel like their “stomach was dropping” - for example on a staircase.

Volcano skiing was really cool - vertical movement can be really fun - unintended use case.

Clean lines didn’t read well on the screen because of poor pixel resolution.

“Magic missile” - bouncy blob of particles that was fun to look at - made players look around.

Spatialized audio really helped.

What do we do when the camera loses track of the player?

“Dust motes” in the air to help sell positional tracking.

Avatar note - people said “Huh, my pants aren’t that color”

Couch Knights start - the original idea was to put the player’s hands behind their back, lock them in a room, and force them to escape.

Toys that came to life and started fighting sounded more fun. The player character in the game is holding a controller, so it matches what you’re doing in real life. Very important to make what you’re physically doing match up to the visuals you have in the game.

Body language is really important - seeing other people’s head tracking in VR is important.

Hand size mattered - avatar’s gender matters.

Added some facial expressions, but it made the uncanny valley worse - once the avatars had facial expressions in some scenarios, people noticed more when they didn’t have them.

Eye tracking was really important. The angle at which the eyes switched had to be finely tuned, otherwise it would look really creepy.

Most people bonded with the knight character instead of the human, because it was abstract.

“Exploring the effect of Abstract Motion in Social Human-Robot Interaction” (paper) - People can bond with abstract objects if they perceive their motion to be lifelike - can be a stick or a broom or whatever.

Nick Whiting (nick.whiting@epicgames.com) (other name showed up too fast to type)