VR: The time is now
Immersive virtual reality is not exactly a new technology. Futurists have been talking about it for decades, and engineers and tinkerers have been building headsets for nearly as long. But only recently has VR begun to grip the world in a way that makes many long-time VR enthusiasts think, “maybe this time it will stick.” The differences between recent iterations of VR hardware and software and older versions are plentiful. Computing power has improved, as has screen technology. Movement tracking is far more precise, all but eliminating motion sickness, and many headsets are now wireless, allowing freedom of movement. On top of that, instantaneous sharing of resources makes it easier than ever to build a VR environment tailored to your or your company’s unique goals. This nexus of events may be what is finally bringing VR to the mainstream.
Groups across many industries are beginning to note the power VR holds to educate and train personnel in low-risk situations before letting them loose in the real world. It’s far better to immerse a firefighter trainee in a virtual simulation of a collapsing, flame-engulfed high-rise, for example, before physically putting them in a real-world training situation. The trainee experiences what it’s like to be surrounded by flames, without the actual risk of injury. Similar potential exists for training heavy equipment operators, pilots, and other high-risk careers. And it’s not just groups looking to keep trainees physically out of harm’s way who are adopting VR: big-box retailers like Walmart use it to train employees before they venture into the sometimes unpredictable arena of customer service.
How can we keep improving VR’s effectiveness?
As VR tightens its grip on the world, one important question that continues to pop up is, “How can VR immersion be maximized to improve the efficacy of the experience?” Or, in simpler terms, “How can VR be improved to increase performance in the real world?” One way is by giving people a visualization of limbs connected to their body. In a recent study at the University of Alberta, Dr. Craig Chapman and I ran a simple VR experiment in which we tasked participants with moving a virtual box of pasta into and out of a set of shelves. In both conditions, participants held plastic controllers in their hands and pressed the trigger to initiate an interaction with the box. In one condition, participants saw a replica of the plastic controllers they held; in the other, they saw a set of dynamic, human-like limbs that extended into their virtual torso. It should be noted that the interaction mechanism was exactly the same in both conditions; the only difference was the visual limb representation.
Having “limbs” changes the way you work in VR
Our first major finding from this experiment was that participants moved differently between the two conditions. When seeing a replica of the controllers they held, participants simply stuck the controller straight into the box. But when seeing the set of dynamic limbs, participants rotated and positioned the controller in their hand to force the virtual hand into a plausible grasping pattern around the box. They had received no instruction to do this, and they could have stuck the virtual hand straight into the box, as they did in the other condition. This shows not only that the visual system is used as a mechanism to correct motor plans, but also that people are willing to undergo actual, real-world biomechanical discomfort to preserve the visual impression that their limb is “doing what it should be doing.” How your body looks in VR affects how you actually move.
What’s more, we collected questionnaire responses from participants following each condition, in which they rated their agreement with statements about the Ownership, Location, and Agency of their virtual body. These are the three major components of Embodiment, the feeling we all have that our body is our own. We found, unsurprisingly, that participants reported significantly higher levels of Ownership towards the virtual limbs than towards the virtual controllers, meaning they felt the virtual limbs were actually part of their body! How your body looks in VR impacts how you feel about your body.
The most impactful finding from this study is that we found a strong correlation between movement and body ownership ratings. Participants whose movements differed more between the two conditions also reported greater differences in body ownership between them. In other words, people who felt more embodiment towards the virtual limbs also moved more differently, wanting to make those virtual limbs look and act like real limbs! This is an intriguing finding, teasing a connection between feelings of embodiment and actual real-world movement differences.
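For readers curious what this kind of analysis looks like in practice, here is a minimal sketch of computing a Pearson correlation between per-participant movement differences and ownership-rating differences. The numbers below are made up for illustration; they are not the study’s actual data.

```python
import numpy as np

# Hypothetical per-participant scores (illustrative only, NOT the study's data):
# movement_diff  - how differently each participant moved between the
#                  controller-replica and human-limb conditions (arbitrary units)
# ownership_diff - difference in ownership ratings between the two conditions
movement_diff = np.array([0.2, 0.5, 0.9, 1.1, 1.4, 1.8, 2.1, 2.5])
ownership_diff = np.array([0.1, 0.4, 0.6, 1.0, 1.2, 1.7, 1.9, 2.4])

# Pearson correlation: values near +1 mean participants who moved more
# differently also reported bigger changes in body ownership
r = np.corrcoef(movement_diff, ownership_diff)[0, 1]
print(f"r = {r:.2f}")
```

With data like this, the correlation coefficient lands close to +1, which is the pattern the study describes: bigger movement changes go hand in hand with bigger ownership changes.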
Takeaway: Learners are more connected to the experience if they have virtual limbs
What is the takeaway from this? If you’re building virtual applications, you should aim to give users, at the very least, a human-like set of limbs. Doing so will increase their ownership over their virtual body and potentially deepen their immersion in the virtual world, improving the efficacy of whatever the application is being used for. Whether it’s a simulation for surgical training, heavy equipment operation, firefighting, or education, providing a virtual body for VR users increases their embodiment, pushes them to move more naturally, and likely improves the efficacy of the virtual tool on their real-world performance.
Stay in the Know
Want to stay up-to-date with what is going on in the world of immersive training? Subscribe to the Motive Blog.