Anyone who reads the technology news can’t have failed to notice a certain preoccupation among developers over the past couple of years with bringing viewers closer to the action of TV, films and computer games through virtual reality. Every other day, it seems, we hear of yet another allegedly ground-breaking solution in the quest for “immersion”. The next person to claim to have invented a Star-Trek-like Holodeck is going to get a Vulcan neck pinch from me.
Frustratingly, this marketing hype actually seems to be working so well that VR headsets, be they binocular, biocular or monocular (such as Google’s Glass), have become a “must-have” item. Even in traditionally sceptical and risk-averse sectors such as defence, aerospace, energy and education, they are fast becoming de rigueur in training exercises.
Many other technology observers, from swooning journalists to corporate futurologists — too many of whom appear to believe everything they read or see online — are also fuelling the rush to invest in a holodeck.
It seems inevitable that large investments will be made with little return, giving some of us a distinct sense of déjà vu.
We were first exposed to the “wonders” of head-mounted displays in the late 1980s and throughout the 1990s, when we were told that they would revolutionise VR and telepresence. Even then, head-mounted displays were by no means a new concept. A patent filed as long ago as 1960 described the Telesphere Mask, a stereoscopic “television apparatus for individual use” developed by the late, great Mort Heilig — best known for his later Sensorama “kiosk” with its canned stereoscopic films, artificially generated smells and vibrating seat experience.
A year later, Philco, the US electronics company famous for providing the US space agency Nasa with its early Mission Control consoles, announced Headsight, a single cathode ray tube “telepresence” head-mounted display. In 1968, Ivan Sutherland’s Sword of Damocles, a ceiling-linked, mechanically head-tracked stereoscopic device, enabled users to look around a simple 3D graphic as if it were “floating” in the room in front of them.
But it wasn’t until the late 1980s that commercially available products such as VPL’s EyePhone, LEEP’s CyberFace, Virtual Research’s Flight Helmet and the unbelievably unwearable Virtuality Visette, with its patented “Ergolock” head restrainer, captured the attention of the press, thus heralding a decade of false promise, high expenditure and end user disappointment.
Big names like Nintendo, Olympus, Philips and Sony all came, experimented and retreated, either disgruntled at the poor domestic market uptake or concerned about the possibility of litigation over so-called cybersickness.
Sony’s withdrawal of its early Glasstron product range came about, allegedly, as a result of health and safety worries, yet the company is heavily involved in this latest hypefest with the rather expensive HMZ “Personal Viewer” series and, more recently, the Project Morpheus headset for the PS4 games system.
I’ve used most of the devices produced since the 1980s and, indeed, have even been involved in the sale of many since the 1990s. Today, my own “Headset Hall of Shame” lecture consists of four PowerPoint slides with thumbnail images of most (but certainly not all) of the HMD devices ever made.
And that’s why I think the cult status given to the latest generation of VR headsets simply beggars belief. We’ve had decades of development with no real breakthrough. And, with the possible exception of the defence sector, there has been little evidence that the actual needs and limitations of the end user have been taken into consideration in the design of these devices.
Since taking the world by storm with a $2.4m cash injection on Kickstarter, the Oculus Rift has become the leading example of how over-hyped marketing and inflated statements by celebrity gaming personalities can influence a generation of potential adopters. We managed to obtain three Rifts for student projects and academic research and it soon became obvious that we, too, had inherited cult status, judging by the gasps and stares that greeted their every appearance at open days. Yet, almost without exception (the exception being a very small handful of hard-line gamers), those who donned the Rift were either unwilling or unable to continue using it beyond two minutes or, in many cases, far less.
The Rift is certainly more comfortable than its 1990s ancestors but its image resolution is still limited, it still blurs pixels and it still offers an inadequate field of view.
Users continue to report disorientation and eyestrain, making it hard to imagine how the device can possibly be recommended as an “essential” interface for gamers or anyone else.
In March, Oculus was acquired by Facebook for a staggering US$2bn. Many of the people involved in the development of head-mounted devices believe this to be a step in the wrong direction for a variety of reasons — some moral, some technical — but the floodgates have been opened, whichever way you look at it. The frenetic race to beat Oculus at its own game, even with the features of its promised DK2 and consumer editions, is now on.
Technologies come and go, but the human user remains the one constant factor. No matter how good the specifications become, it will be some time before a display technology is developed that satisfies the majority of the end-user population and it may never happen at all.
Research (summarised in an MoD-sponsored human factors document) has shown that as many as 56% of individuals between the ages of 18 and 38 have one or more problems which can compromise their binocular vision. Individuals with stereoscopic or binocular vision defects cope by exploiting monocular depth and distance cues, such as motion parallax, light and shadows, focus, geometric overlap (interposition), aerial perspective, relative size and size/shape constancies. Even if it becomes possible to screen out users with binocular deficits, this may still not be sufficient to prevent usability and “cyber sickness” problems with head-mounted displays.
One of the well-known human factors issues with 3D displays is the mismatch between visual accommodation and convergence. When observing a real-world scene, your eyes both converge on objects in the scene and re-focus to keep the imagery sharply registered on the retinas as your view changes. But when viewing a 3D virtual environment via a display, the two responses decouple: the eyes converge on the virtual object, yet focus remains more or less constant, because the plane of the screen (or, indeed, the structure onto which the screen is mounted) is fixed.
This mismatch can rapidly promote visual fatigue, discomfort and disorientation — three of the key precursors to cybersickness.
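To give a feel for the scale of the problem, the mismatch can be expressed in dioptres (the reciprocal of distance in metres): vergence demand follows the rendered distance of the virtual object, while accommodation demand stays pinned to the display’s fixed focal plane. The short Python sketch below is a purely hypothetical worked example; the 1.5 m focal plane and the object distances are assumptions chosen for illustration, not figures taken from any particular headset.

```python
# A minimal, hypothetical sketch of the vergence-accommodation mismatch.
# The 1.5 m focal plane and the object distances are assumptions for
# illustration only; they are not measurements of any real device.

FOCAL_PLANE_M = 1.5  # assumed fixed focal distance of the display optics


def demand_in_dioptres(distance_m: float) -> float:
    """Optical demand in dioptres is the reciprocal of distance in metres."""
    return 1.0 / distance_m


for virtual_distance_m in (0.5, 1.0, 2.0, 10.0):
    vergence = demand_in_dioptres(virtual_distance_m)   # where the eyes converge
    accommodation = demand_in_dioptres(FOCAL_PLANE_M)   # where they must focus
    mismatch = abs(vergence - accommodation)
    print(f"object at {virtual_distance_m:4.1f} m: "
          f"vergence {vergence:.2f} D, accommodation {accommodation:.2f} D, "
          f"mismatch {mismatch:.2f} D")
```

For an object rendered half a metre away, this gives a mismatch of roughly 1.3 dioptres, while comfort limits reported in the stereoscopic display literature are typically quoted as a fraction of a dioptre, which goes some way to explaining why near-field content causes so much trouble.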
The concepts of virtual reality and total immersion are as powerful today as they were when they first appeared in the late 1980s. Of that I have little doubt, and the interactive visualisation and training domains have much to gain from the real-time interactive quality of today’s VR software toolkits.
But meaningful content design really needs to happen before VR headsets can come anywhere near meeting the high bar being set for them at the moment. Despite ridiculous claims to the contrary, we are nowhere near the day when we can walk into the equivalent of Star Trek’s Holodeck and experience a truly multisensory computer-generated reality without having to endure the cumbersome and often malaise-causing technologies we are being encouraged to buy today.
- Robert Stone is chair in interactive multimedia systems at the University of Birmingham
- This article was originally published on The Conversation