The New York Times
FORT LAUDERDALE, FLA. — A 3-D animated creature, affectionately named Gerald, appears to walk in circles while floating in front of an elaborate viewer that resembles something from an optometrist’s office. Though only half a foot high, and with four arms, he looks remarkably lifelike through lenses that transmit what computer scientists and optical engineers describe as a “digital light field” into the eyes of a viewer.
The technology, once downsized into a pair of glasses, is intended to overcome the most significant technical challenges blocking an explosion of virtual reality.
Though the industry could radically transform entertainment, gaming and other forms of computing, it has an Achilles’ heel: Many people become queasy after pulling viewing devices over their eyes and slipping into an immersive world that blurs the line between physical reality and computer-generated imagery.
Oculus VR, the crowdfunded start-up behind the Rift headset, which Facebook acquired for $2 billion in March, has been trying to correct this motion sickness. But it is a steep challenge. Nearly 50 years after the computer scientist Ivan Sutherland pioneered head-mounted computer displays, more than two dozen companies have attempted to commercialize various forms of the technology, called near-eye displays, with little success.
“The near-eye-display market is a graveyard of broken dreams,” said David Luebke, senior director of research at Nvidia, a Silicon Valley computer graphics company.
Whether it comes in bulky goggles that block out the actual world (the virtual reality of Oculus Rift, for instance), or in sleeker glasses that allow users to see their true surroundings blended with computer images (known as augmented reality), the technology is based on showing images to both eyes at once.
Inevitably, though, many users endure “simulator sickness” and other kinds of discomfort, like headaches and fatigue. Some of it stems from a disorienting lag between the rapid turn of one’s head and the time it takes for the computer to catch up and generate a new set of images to reflect the changing scenery.
Another issue is the disconnect between where the images appear to be — picture a cloud in the sky far away — and where they actually are — on small screens only inches from the user’s eyes. Experts call this unsettling dissonance the “vergence-accommodation conflict.”
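The conflict can be put in rough numbers: the eyes rotate to converge on the object's apparent distance, while the lenses must focus on the physical screen. A minimal sketch in Python, with all distances chosen as illustrative assumptions rather than figures from any particular headset:

```python
import math

# Illustrative assumption: a virtual cloud rendered to appear 100 m away,
# shown on headset optics whose focal plane sits 2 m from the eyes.
INTERPUPILLARY_DISTANCE_M = 0.063   # typical adult average
apparent_distance_m = 100.0          # where the eyes converge (vergence)
focal_plane_m = 2.0                  # where the eyes must focus (accommodation)

def vergence_angle_deg(distance_m):
    """Angle between the two eyes' lines of sight for a target at distance_m."""
    return math.degrees(2 * math.atan(INTERPUPILLARY_DISTANCE_M / (2 * distance_m)))

# Optometrists measure focus demand in diopters (1 / distance in meters).
# Vergence says "far away"; accommodation says "two meters" -- this gap
# between the two depth cues is what researchers blame for the discomfort.
conflict_diopters = abs(1 / apparent_distance_m - 1 / focal_plane_m)

print(f"vergence angle at apparent distance: "
      f"{vergence_angle_deg(apparent_distance_m):.3f} deg")
print(f"vergence-accommodation mismatch: {conflict_diopters:.2f} diopters")
```

With natural objects the two distances coincide and the mismatch is zero; on a near-eye stereo display they diverge whenever the rendered object is not at the focal plane.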
The consumer electronics industry has taken note of the problems. One Sony head-mounted stereo 3-D display even comes with a warning: “Watching video images or playing games by this device may affect the health of growing children.”
In a 2007 University of Minnesota study, nine volunteers used a head-mounted display to play the video game Halo, but eight of them complained of motion sickness severe enough to quit after playing for a short period.
“Visual head-mounted display devices are causing a variety of symptoms in patients,” said Dr. Joseph F. Rizzo III, a professor of ophthalmology at Harvard Medical School. “Prolonged use of devices that create symptoms might induce more chronic change.”
Google Glass gets around these issues by using a single display in the corner of one eye, not two. But it cannot produce 3-D images or create the immersive experience that many gamers and other kinds of users crave.
Gerald’s creator, the start-up Magic Leap Inc., is trying a different approach, using a digital light field. Unlike a conventional digital stereo image, which comes from projecting two slightly displaced images with different colors and brightness, Magic Leap says its digital light field encodes more information about a scene to help the brain make sense of what it is looking at, including the direction and intensity of light rays and the distance of objects.
Magic Leap and other researchers in the field say that digital light fields will circumvent visual and neurological problems by providing viewers with depth cues similar to the ones generated by natural objects. That will make it possible to wear augmented-reality viewers for extended periods without discomfort, they say.
Light-field technology is already being used in a new generation of digital cameras that offer the ability to change the point of focus after a picture has been taken. Researchers at Nvidia demonstrated a head-mounted light-field system last year, and scientists at the M.I.T. Media Lab have shown an “autostereoscopic,” or glasses-free, 3-D display based on what they described as a “compressed light field.”
So far, however, one of the principal obstacles facing light-field cameras and displays is that they require five to six times as many pixels to match the resolution of a conventional digital image.
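The overhead is easy to quantify. A back-of-the-envelope sketch in Python, assuming a 1080p-equivalent baseline (an illustrative choice, not a figure from the article) and the five-to-sixfold multiplier cited above:

```python
# Back-of-the-envelope pixel budget for a light-field display.
# The 1080p baseline is an illustrative assumption; the 5-6x multiplier
# is the overhead figure cited by researchers in the field.
baseline_pixels = 1920 * 1080               # a conventional HD frame
low = 5 * baseline_pixels                   # low end of the overhead range
high = 6 * baseline_pixels                  # high end of the overhead range

print(f"conventional frame:      {baseline_pixels / 1e6:.1f} megapixels")
print(f"light-field equivalent:  {low / 1e6:.1f}-{high / 1e6:.1f} megapixels")
```

The extra pixels go toward encoding many ray directions per point of the image rather than a single color value, which is why matching ordinary display sharpness has been such a hurdle.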
Magic Leap claims to have solved the resolution challenge with a proprietary technology that projects an image, which it describes as a “3-D light sculpture,” onto the viewer’s retina. Rony Abovitz, a biomedical engineer who founded Mako Surgical, a successful robotic surgery company, before creating Magic Leap in 2010, said that his system would even offer a resolution approaching that of the human eye.
Virtual- and augmented-reality aficionados foresee a world in which conventional computer screens and televisions are obsolete, and it is possible to project lifelike animations into meetings anywhere. They describe a next generation of technology beyond personal computing and smartphones based on a new set of approaches they call “perceptual computing.”
“Playing games is the dessert,” Mr. Abovitz said. “Our real market is people doing everyday things. Rather than pulling your mobile phone in and out of your pocket, we want to create an all-day flow; whether you’re going to the doctor or a meeting or hanging out, you will all of a sudden be amplified by the collective knowledge that is on the web.”