Job got you down? Has your love life lost its spark? Are you sick and tired of real reality? Lucky for you, all of that can change thanks to the modern miracle of virtual reality!
Okay, that sounds a little cheesy, right? Okay fine, it’s a lot cheesy. Virtual reality, however, is much less so and has the potential to revolutionize, well, everything! We’ve been trying to put virtual reality to use in more and more applications over the years, and now it’s poised to go beyond expensive simulators and make its bold entrance into our living rooms, and even our pockets!
In the past, virtual reality has been limited to specific applications: aircraft simulators, driving simulators, golf simulators, combat simulators, and (more recently) specialized medical equipment.
In each scenario the user is placed into an immersive environment, surrounded by scenery, sounds, lighting, and a very limited number of objects with which they can interact.
These simulators, unlike modern video games, are typically powered by many computers — sometimes dozens of them — each responsible for one area of the simulation. A video game is usually powered by just one computer, and that computer is only responsible for showing you what's on your screen. Anything beyond the display is basically ignored and requires almost no processing time. But virtual reality is different.
Unlike immersive games, VR has to be ready for everything that’s around you. When playing a game (whether on a console, computer, or mobile device), turning is generally done with your character’s entire body, bringing a new section of the map into view. In real life, however, we glance around a lot. We take cues from our peripheral vision. We need access to the “map” around us in ways that current games just can’t provide.
Let’s say you’re playing a game, but it’s very slow and laggy. One of the “tricks” to speed it up is to lower the resolution. Doing so cuts down on the number of pixels the graphics engine has to paint, and speeds up your gameplay. Of course that means it’s not going to look as good, but the goal is to find the right balance between the two. When we start talking about immersive configurations, the number of pixels suddenly jumps — a lot! Instead of a simple 12-inch by 18-inch flat screen, we now have to consider a 360-degree panorama, both horizontally and vertically.
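To put rough numbers on that jump, here’s a quick back-of-the-envelope sketch. The monitor resolution and the 30-degree field of view are illustrative assumptions for the sake of the math, not specs from any real headset:

```python
# Rough comparison: pixels on a flat monitor vs. a full spherical panorama
# rendered at the same angular resolution. All figures are assumptions
# chosen for illustration.

MONITOR_W, MONITOR_H = 1920, 1080   # a typical flat screen
MONITOR_FOV_DEG = 30                # assumed horizontal field of view

# Angular resolution the monitor delivers across its field of view.
pixels_per_degree = MONITOR_W / MONITOR_FOV_DEG   # 64 px per degree

# A full panorama covers 360 degrees horizontally and 180 vertically.
pano_w = int(360 * pixels_per_degree)   # 23,040
pano_h = int(180 * pixels_per_degree)   # 11,520

flat_pixels = MONITOR_W * MONITOR_H
pano_pixels = pano_w * pano_h

print(f"Flat screen: {flat_pixels:,} px")
print(f"Full panorama: {pano_pixels:,} px")
print(f"Ratio: {pano_pixels / flat_pixels:.0f}x")   # → Ratio: 128x
```

Even with these modest assumptions, the panorama works out to over a hundred times the pixels of the flat screen — and that’s before you double everything for two eyes.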
Back to dedicated simulators.
These situations can’t afford not to have the periphery already painted and displayed. That’s why they have so many separate computers powering them. You just need to move your head or flick your eyes to see the data on one of the other screens — screens that are stitched together so they look like one continuous display (well, almost).
VR does away with all those other screens, but the requirement for the data on those screens to be ready for you to “glance” at still remains. How fast are those glances, anyway?
Mobile devices refresh somewhere around 60fps. Movies are generally 24fps, though newer films (like The Hobbit) have also been released at 48fps. According to many so-called experts, the human eye can still “see” 60fps, so most computer monitors run at around 75-90fps, though some are much faster.
The more frames displayed per second, the more data the GPU has to process. Jumping from 24 to 48 (or 30 to 60) fps requires a doubling of processing power, and 60fps still isn’t “fast enough” to look “real”.
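The arithmetic here is simple enough to sketch: each frame’s time budget is just 1000 milliseconds divided by the frame rate, so doubling the rate halves the time available to render each frame.

```python
# Per-frame time budget at the frame rates mentioned above.
# Doubling the frame rate halves the time available to render each frame,
# which (all else being equal) demands roughly double the processing power.

budgets_ms = {fps: 1000 / fps for fps in (24, 30, 48, 60, 90)}

for fps, ms in sorted(budgets_ms.items()):
    print(f"{fps:3d} fps -> {ms:5.2f} ms per frame")

# 24 fps leaves ~41.67 ms per frame; 48 fps leaves only ~20.83 ms.
```

At 90fps the GPU has barely 11 milliseconds to produce each frame — and in VR, each of those frames is that enormous panorama.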
The real lag
As if all that weren’t bad enough, now there’s an additional component to take into consideration: sensors.
Various studies have shown that we can detect a latency when it’s greater than 50 milliseconds. As soon as that threshold is surpassed and the user becomes aware of the latency (even on an unconscious level), the illusion of the artificial environment is lost.
We’ve gotten used to all sorts of gizmos and sensors in our smart devices. Some measure the ambient light; others measure elevation, rotation, and even magnetic fields. Inputs from each of these must be read, translated into usable data, then acted upon by the operating system. From there, the source or intensity of a sound may need to be adjusted and the corresponding signal sent to a set of virtual-surround speakers. The picture must pan or tilt, shadows must be cast, lighting adjusted, lens flare applied, and particles mapped. All of this must happen in an instant. In reality, it takes much, much longer.
The prototypes of the Oculus Rift used an off-the-shelf sensor. The company quickly realized it needed something better, something faster. Ultimately it developed its own sensor, the Oculus VR sensor. This marvel of engineering supports sampling rates “up to 1000hz” — that’s a thousand measurements every second. This reduces the time between a user’s head moving and the virtual reality environment being able to begin reacting down to roughly 2 milliseconds. That’s a huge improvement, but it’s only the sensing and detecting component. From there, the OS and app must apply logic, the CPU and GPU have to get to work, and that data must be translated and transmitted to the screens and speakers in the virtual reality headset.
Each step along the way stacks on additional time, until the magic sub-50ms bar is frighteningly close. Add to that data lag, and it’s game over. The illusion is lost, and instead of a believable world, now you’re just a lonely person wearing a ridiculous pair of glasses.
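You can watch the budget evaporate with a toy motion-to-photon tally. The 50ms perception threshold and the roughly 2ms sensor figure come from above; the other stage timings are made-up placeholders, there purely to show how the steps stack:

```python
# A back-of-the-envelope motion-to-photon latency budget.
# The 50 ms threshold and the ~2 ms sensor figure come from the article;
# every other stage timing below is an assumed placeholder.

THRESHOLD_MS = 50.0

stages = {
    "sensor sampling (1000 Hz)": 2.0,    # figure from the article
    "OS / app logic":            8.0,    # assumed
    "CPU + GPU rendering":       16.7,   # one frame at 60 fps
    "display scan-out":          11.0,   # assumed
}

total = sum(stages.values())

for name, ms in stages.items():
    print(f"{name:28s} {ms:5.1f} ms")
print(f"{'total':28s} {total:5.1f} ms  (budget: {THRESHOLD_MS:.0f} ms)")
print(f"headroom: {THRESHOLD_MS - total:.1f} ms")
```

Even with these charitable placeholder numbers, most of the 50 milliseconds is already spent — and we haven’t accounted for data lag, thermal throttling, or a busy operating system.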
Let’s bring this full-circle. Current virtual reality systems achieve their magic by using near-real-time operating systems and very customized hardware.
Regardless of what OS is inside your smartphone, or what SoC is powering it, you simply don’t have anywhere close to “real-time” processing.
When all is said and done, it looks like latency will most certainly doom smartphone-based virtual reality.