
Feasibility of a walking virtual reality system for rehabilitation: objective and subjective parameters

Abstract

Background

Even though virtual reality (VR) is increasingly used in rehabilitation, the implementation of walking navigation in VR still poses a technological challenge for current motion tracking systems. Different metaphors simulate locomotion without involving real gait kinematics, which can affect presence, orientation, spatial memory and cognition, and even performance. All these factors can dissuade their use in rehabilitation. We hypothesized that a marker-based head tracking solution would allow walking in VR with a high sense of presence and without causing sickness. The objectives of this study were to determine the accuracy, the jitter, and the lag of the tracking system, and to compare the sickness and presence it elicited with those of a CAVE system.

Methods

The accuracy and the jitter around the working area at three different heights, and the lag of the head tracking system, were analyzed. In addition, 47 healthy subjects completed a search task that involved navigation in both the walking VR system and the CAVE system. Navigation was enabled by natural locomotion in the walking VR system and through a specific device in the CAVE system. An HMD was used as the display in the walking VR system. After interacting with each system, subjects rated their sickness on a seven-point scale and their presence with the Slater-Usoh-Steed Questionnaire and a modified version of the Presence Questionnaire.

Results

Best performance was registered at the greater heights, where the accuracy error was less than 0.6 cm and the jitter was about 6 mm. The lag of the system was 120 ms. Participants reported that both systems caused similarly low levels of sickness (about 2.4 over 7). However, ratings showed that the walking VR system elicited a higher sense of presence than the CAVE system in both the Slater-Usoh-Steed Questionnaire (17.6 ± 0.3 vs 14.6 ± 0.6, over 21) and the modified Presence Questionnaire (107.4 ± 2.0 vs 93.5 ± 3.2, over 147).

Conclusions

The marker-based solution provided head tracking that was accurate, robust, and fast enough to allow navigation in the VR system by walking, without causing relevant sickness and while promoting a higher sense of presence than the CAVE system. It thus enables natural walking in full-scale environments, which can enhance the ecological validity of VR-based rehabilitation applications.

Background

Virtual reality (VR) is an increasingly common term that refers to the real-time, computer-generated simulation of a three-dimensional environment that replaces the natural sources of stimulation of the real world with artificial stimulation in different sensory channels. The extent to which an individual is unable to acknowledge that an experience is computer-generated, known as the level of presence [1, 2], and other subjective perceptions elicited in VR are determined by user and media characteristics [3, 4]. User characteristics involve demographic aspects, motor or cognitive limitations, personality, and mood. Media characteristics involve not only the technological properties of the presentation of sensory information and the facilitation of the interaction, but also the content of the virtual environment (VE). The technology-mediated simulation of enriched environments and the provision of controlled sensory stimulation enable immersion in potentially hazardous yet ecologically valid environments. These characteristics can be especially interesting for rehabilitation because they may make it possible to go beyond the boundaries of the clinical setting and to provide customized training to each participant [5, 6].

Interaction within a VE has been facilitated through specific body movements [7, 8], finger touches [9, 10], or specific devices [5]. Even though different solutions have been successfully used to track upper [11] and lower limb movements [4, 8, 12], the implementation of walking navigation in VR still poses a technological challenge for current motion tracking systems, which are only capable of tracking a limited working area. Different metaphors have been proposed to simulate walking in VR applications. VR setups that involve computer screens or cave automatic virtual environment (CAVE) systems, a cube composed of display screens that completely surround the viewer [13], commonly use flysticks, gamepads, or 3D mice [5, 14]. These devices translate hand movements into displacements in the VE without involving real gait kinematics [14], which limits presence, affects performance, and can even dissuade their use in rehabilitation. Gait training systems commonly incorporate treadmills while the VE is displayed on projectors [15–17] or TV screens [18, 19], showing real-world video recordings [16] or virtual scenarios [17–19]. However, treadmill-based setups only allow straight walking and, despite their apparent equivalence, the kinematics of treadmill and overground walking have been reported to differ [20–22]. All of these factors must be taken into account because the way that sensory stimulation is provided and interaction is facilitated regulates the capability of delivering an illusion of reality to the senses of a human participant [23], modulates the movement kinematics [24], and, in the end, determines the ecological validity of the simulation [25].

Real walking in VR has recently been enabled by an infrared camera-based motion tracking solution [26, 27], similar to those used in gait analysis laboratories [28]. Several cameras, arranged around the working area, estimate the position of a constellation of markers attached to a head mounted display (HMD), which provides visual feedback of the VR according to the head location and orientation. Since the working area covered by each camera is limited, a dedicated space with a considerable number of cameras is needed, which increases the cost and the complexity of the system. A hybrid tracking solution that estimates the relative movement of both feet with inertial sensors and corrects the drift error from GPS data has been presented for outdoor walking navigation, using an HMD as a display [29]. However, that system provides visual stimulation without taking the position of the head into account, which can affect presence and balance and even produce sickness [30], and it estimates displacements from inertial sensors, which can produce relevant inaccuracies [31].

Fiducial markers are artificial landmarks added to the real world that enable accurate pose detection for applications ranging from augmented reality to robot navigation [32]. Marker-based algorithms obtain the camera pose from correspondences between specific features of the markers in the real world and their camera projections, with high speed and accuracy [33]. We hypothesized that a marker-based head tracking solution would be accurate and robust enough to provide consistent head tracking in the VE, which, in turn, would enable navigation through walking in a room-size environment without causing sickness. Moreover, we conjectured that these factors would promote a stronger sense of presence than that elicited by laboratory-grade CAVE systems, which have been shown to provide higher levels of presence than HMDs [14, 34]. Therefore, the objectives of this study were, first, to determine the accuracy, the jitter, and the lag of a marker-based head tracking system and, second, to compare the sickness and presence elicited by a walking VR system with those elicited by a CAVE system.

Methods

A room-size walking VR system using marker-based head tracking was designed and implemented to test the two hypotheses, which were investigated in two separate studies. The aim of the first study was to determine the objective parameters of the tracking system, defined by the accuracy, the jitter, and the lag. The aim of the second study was to compare the sickness and the subjective sense of presence elicited by the walking VR system and by a CAVE system.

Instrumentation

Walking VR system

The experimental system presented in this paper consisted of 1) an HMD, the Oculus DK2 (Oculus VR, Irvine, CA); 2) an RGB camera, the PlayStation®Eye Camera (Sony® Corporation, Tokyo, Japan) with an additional lens (4.3 mm, 70° FOV); 3) a pattern of markers fixed to the ceiling at a height of 2.65 m; and 4) a laptop (Fig. 1). The hardware components of the laptop included an 8-core Intel® Core™ i7 Haswell @ 2.50 GHz, 8 GB of RAM, and an NVIDIA® GeForce® GTX 860M with 2 GB of GDDR5.

Fig. 1 Setup of the walking virtual reality system: a) head mounted display; b) RGB camera; c) pattern of fiducial markers; and d) snapshot of the virtual environment being displayed to the user

The camera was fixed to the top of the HMD, pointing upwards. The additional lens was mounted on the camera to improve image quality. The camera was configured to capture standard video at a frame rate of 75 Hz and a 640 × 480 pixel resolution. The pattern of markers of the experimental setting consisted of 17 × 26 square fiducial markers, 18 × 18 cm each, separated by 4 cm, thus covering an area of 3.78 × 5.76 m on the ceiling of the experimental room. However, wider areas can be covered by using more markers (up to 1024) or by increasing their size. Four spotlights, one in each corner of the room, were indirectly oriented towards the ceiling to create homogeneous lighting conditions. The pattern of markers was generated using the ArUco library [33], a minimal library for augmented reality applications based on OpenCV. The library allowed the camera pose (position and rotation) to be estimated from the correspondences between known points in the pattern of markers and their camera projections. Even though this library provides information on both position (x, y, and z coordinates) and orientation (yaw, pitch, and roll), only the positional tracking was used. The orientation of the head, and thus the orientation of the camera, was provided by the HMD.
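To illustrate the tracking principle described above, the following minimal Python sketch estimates the camera (and therefore head) position from a ceiling marker board. It uses OpenCV's ArUco module (the classic cv2.aruco API of opencv-contrib-python up to version 4.6) as a stand-in for the standalone ArUco library used in the actual system; the marker dictionary and the camera intrinsics are illustrative placeholders, not the values of the experimental setup.

```python
import cv2
import numpy as np

# Illustrative dictionary and board layout: 17 x 26 markers, 18 cm side,
# 4 cm separation (dimensions in metres), as described above.
dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_ARUCO_ORIGINAL)
board = cv2.aruco.GridBoard_create(17, 26, 0.18, 0.04, dictionary)

# Placeholder intrinsics; in practice these come from calibrating the camera + lens.
camera_matrix = np.array([[540.0, 0.0, 320.0],
                          [0.0, 540.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

cap = cv2.VideoCapture(0)  # upward-facing RGB camera on top of the HMD
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is not None and len(ids) >= 4:  # enough markers for a robust board fit
        rvec, tvec = np.zeros((3, 1)), np.zeros((3, 1))
        used, rvec, tvec = cv2.aruco.estimatePoseBoard(
            corners, ids, board, camera_matrix, dist_coeffs, rvec, tvec)
        if used > 0:
            # The board pose maps board (room) coordinates to camera coordinates;
            # inverting it gives the camera position in the room reference frame.
            R, _ = cv2.Rodrigues(rvec)
            head_position = (-R.T @ tvec).ravel()  # x, y, z in metres
cap.release()
```

As in the system described above, only the position estimate would be forwarded to the rendering engine; the head orientation would come from the HMD itself.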

It is important to highlight that even though the Oculus DK2 includes an external infrared camera to provide positional tracking, it is only capable of tracking short displacements of the head and users must be in front of the camera. Since the requirement of the system was to allow walking, this tracking capability was discarded.

CAVE system

The CAVE system consisted of 1) a conventional four-wall CAVE configuration (three vertical walls and the floor); 2) four F35 AS3D projectors (Barco N.V., Poperinge, Belgium), one for each wall; 3) four infrared Trackpad cameras (Advanced Realtime Tracking GmbH, Munich, Germany) fixed to the upper frame of the vertical walls and pointing to the center of the CAVE; 4) a pair of CrystalEyes 3 3D glasses (StereoGraphics, San Rafael, USA) with a constellation of infrared reflective markers; 5) a Flystick3 interaction device (Advanced Realtime Tracking GmbH, Munich, Germany); and 6) five high-end graphics computers, one master and four slaves, connected through a high-speed network (Fig. 2). The hardware components of the computers included an Intel® Xeon® CPU E5-2620 @ 2.00 GHz, 16 GB of RAM, and an NVIDIA® Quadro® 5000.

Fig. 2 Setup of the CAVE system: a) 3D glasses; b) interaction device; and c) infrared tracking cameras

The walls were 350 cm wide and 204 cm long. The projectors back-projected stereoscopic images onto each wall with a resolution of 1868 × 1200 pixels at 120 Hz. The infrared cameras tracked the 3D glasses, which provided information on the position and orientation of the head. The Flystick3 was used to navigate within the VE using a small joystick on the top of the device.

Virtual environment

The environment represented an aisle of a grocery store, defined by two shelves with different kinds of sodas. The shelves were 4 m long and 2 m tall. The separation between them was 1.5 m. Each shelf consisted of six racks with 72 different items. The price of each soda was indicated on the shelf (Fig. 3). Users were not represented by an avatar.

Fig. 3 Virtual environment: a virtual aisle of a grocery store with 72 different kinds of sodas was used in the experiment

The VE was programmed using Unity 3D (Unity Technologies ApS, San Francisco, CA).

Study 1: Objective parameters

Procedure

To estimate the accuracy and the jitter of the head tracking, a 6 × 10 grid of 50 cm × 50 cm squares was marked on the floor under the pattern of markers, covering an area of 15 m². The RGB camera was successively fixed at each of the intersection points of the grid (77 in total) using a specifically designed mount that prevented movement and allowed height adjustment. The position and the jitter at each point were estimated over 5 s at three different heights: 3 cm (on the floor), 1.3 m (the approximate height of a subject's head in the sitting position), and 1.7 m (the approximate height of a subject's head in the standing position). Since the accuracy and the jitter at a given height depend on the number of markers detected, this number was also registered at the center of the grid at each height.

To estimate the relationship between the speed of the head and the number of markers detected, an experimenter equipped with the modified HMD walked in a straight line across the walking VR system at three different speeds. The starting and finishing lines were marked on the floor. The speed of the head and the number of markers detected in each frame were recorded.

Finally, to estimate the lag of the system, the camera was attached to a stick, which the experimenter swung abruptly and stopped 20 times. Another PlayStation®Eye camera simultaneously recorded the movements of the experimenter and the display of the HMD at 75 Hz.

Data analysis

Accuracy (e) was estimated as the mean difference between the position of the camera measured in the real world and its estimated position. The jitter (j) was defined as the standard deviation of the estimated position over the time interval, as follows:

$$ e=\frac{1}{N}\sum_{i=1}^{N}\left|X_i-\tilde{X}_i\right| \qquad\qquad j=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\tilde{X}_i^{2}-\overline{X}^{2}} $$

where \(N\) is the number of measurements provided by the tracking system in 5 s; \(X_i\) is the real position of the camera; \(\tilde{X}_i\) is the estimated position; and \(\overline{X}\) is the mean estimated position during these 5 s. Regarding the jitter, the mean and the standard deviation for each coordinate were calculated [4].
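As a minimal illustration of these two estimators, the following numpy sketch computes the per-coordinate accuracy error and jitter from a series of position samples; the array names, shapes, and example values are assumptions for illustration, not study data.

```python
import numpy as np

def accuracy_and_jitter(estimated, real):
    """Per-coordinate accuracy error and jitter of the tracker.

    estimated : (N, 3) array of positions reported by the tracker over ~5 s
    real      : (3,)   true position of the camera at the grid point
    """
    estimated = np.asarray(estimated, dtype=float)
    real = np.asarray(real, dtype=float)
    e = np.mean(np.abs(estimated - real), axis=0)          # accuracy error
    j = np.sqrt(np.mean(estimated ** 2, axis=0)
                - np.mean(estimated, axis=0) ** 2)         # jitter (standard deviation)
    return e, j

# Example with synthetic samples around a known point (illustrative values)
samples = np.array([[1.002, 0.499, 1.301],
                    [0.998, 0.501, 1.299],
                    [1.001, 0.500, 1.300]])
e, j = accuracy_and_jitter(samples, real=[1.0, 0.5, 1.3])
```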

The lag was estimated as the delay between the frame in which the experimenter stopped the movement and the frame in which the display of the HMD showed the end of the movement in the VE.
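A minimal sketch of this lag computation follows, assuming the frame indices at which the real movement and the displayed movement stop have already been identified in the reference recording (the function name and frame numbers are illustrative).

```python
CAMERA_FPS = 75.0  # frame rate of the reference PlayStation Eye camera

def lag_ms(stop_frame_real, stop_frame_display, fps=CAMERA_FPS):
    """End-to-end latency in milliseconds from two frame indices in the same video."""
    return (stop_frame_display - stop_frame_real) / fps * 1000.0

# A nine-frame delay at 75 Hz corresponds to 120 ms, the value reported in the Results.
print(lag_ms(stop_frame_real=130, stop_frame_display=139))  # 120.0
```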

Study 2: Subjective responses

Participants

Healthy subjects older than 18 years without motor or cognitive limitations were recruited for this study. Forty-seven participants (26 men and 21 women) were finally involved. Subjects had a mean age of 28.1 ± 5.3 years, had 22.1 ± 4.4 years of education, and rated their experience with videogames as 5.8 ± 3.3 over 10. All of the participants provided informed consent to participate in the study.

Procedure

Two experimenters were in charge of conducting the sessions, equipping the participants, and providing safety, guidance, and comfort. Participants wore the 3D glasses in the CAVE and the modified HMD in the walking VR system. Navigation was performed with the Flystick3 in the CAVE (small displacements within this environment, although possible, were not allowed) and by natural ambulation in the walking VR system. In the latter system, an experimenter carried the laptop and handled the wires that connected the HMD and the camera to the computer to avoid tangles.

Before the experiment with each system, subjects were briefly introduced to the technology and were allowed to interact with it for 5 minutes. For this exploratory session, the same VE without items (the same aisle with empty racks) was used. After that, subjects were located in the center of the CAVE system or at one end of the walking VR system, and the experiment started. Participants, who were initially located at one end of the aisle in the VE, were required to find the prices of five items. An experimenter asked participants for the price of each item consecutively, and subjects had to explore the VE to find the item and report its price. If the price was correct, the experimenter gave the description of the next item to be found. If the price was incorrect, the experimenter repeated the description of the current item. Participants had a maximum of 5 minutes to find all of the items. Subjects performed the task in both VR systems in counterbalanced order. Ten-minute breaks were allowed between systems.

After each condition, participants rated the sickness or vertigo experienced during the virtual exposure on a seven-point rating scale and were required to complete two questionnaires about presence. Assessment included the original Slater-Usoh-Steed Questionnaire [35] and a modified version of the Presence Questionnaire [36]. The Slater-Usoh-Steed Questionnaire consists of three items rated on a seven-point scale that assess the sense of being in the virtual environment, the extent to which the VE becomes real, and the extent to which the VE is thought of as a place visited. Scores on this questionnaire range from 3 to 21. The modified version of the Presence Questionnaire consisted of 21 items rated on a seven-point scale that assessed presence taking into account the influence of visual aspects, interaction, consistency with the real world, and subjective factors (Table 1). Scores on this questionnaire range from 21 to 147.
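As a quick sanity check of the attainable score ranges, the following sketch derives them from the number of items and the seven-point scale described above (the function name is illustrative).

```python
def score_range(n_items, minimum=1, maximum=7):
    """Minimum and maximum total score for a questionnaire of n_items rated 1-7."""
    return n_items * minimum, n_items * maximum

print(score_range(3))   # Slater-Usoh-Steed Questionnaire -> (3, 21)
print(score_range(21))  # modified Presence Questionnaire -> (21, 147)
```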

Table 1 Modified presence questionnaire

Data analysis

Scores on the questionnaires were compared with independent-sample t-tests. The α level was set at 0.05 for all analyses (two-sided). All analyses were computed with SPSS for Windows®, version 22 (IBM®, Armonk, NY, USA). The investigators performing the data analysis were blinded.
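The comparison can be sketched as follows, using SciPy's independent-sample t-test in place of SPSS; the score arrays are illustrative placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical Slater-Usoh-Steed scores for each system (illustrative values only)
sus_walking_vr = np.array([18, 17, 19, 16, 18, 17, 20])
sus_cave       = np.array([15, 14, 16, 13, 15, 16, 14])

t_stat, p_value = stats.ttest_ind(sus_walking_vr, sus_cave)  # two-sided by default
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, significant: {p_value < 0.05}")
```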

Results

Study 1: Objective parameters

Both the accuracy and the jitter proved to be dependent on the height (Table 2). Higher accuracy (lower error) and lower jitter were registered at 1.3 and 1.7 m. In contrast, worse results were obtained on the floor. Interestingly, these parameters did not depend on the other spatial coordinates; values remained almost invariant for a given height.

Table 2 Objective parameters

Likewise, the number of markers detected was also dependent on the height (Table 2) but, in contrast, proved to be largely independent of the walking speed (Fig. 4).

Fig. 4 Number of markers detected every 10 cm at three different walking speeds: 0.38 m/s (red), 1.01 m/s (blue), and 1.79 m/s (black)

Finally, the lag of the system was shown to be 120 ms (nine frames at 75 Hz) in all the repetitions.

Study 2: Subjective responses

All of the participants finished the experiment and reported that the experience with the systems did not cause relevant levels of sickness. The walking VR system caused slightly higher sickness (2.4 ± 0.6) than the CAVE (2.2 ± 0.7), but the difference between the ratings was not statistically significant (p = 0.641).

In contrast, results showed significant differences in the sense of presence elicited by the two systems (Table 3). Participants reported having experienced higher presence in the walking VR system than in the CAVE in both the Slater-Usoh-Steed Questionnaire (17.6 ± 0.4 vs 14.6 ± 0.6) and the Presence Questionnaire (107.8 ± 2.0 vs 93.5 ± 3.2). A more in-depth analysis showed that both systems were reported to have similar visual characteristics but significantly different interaction mechanisms, which elicited the greatest difference between systems. Participants also reported that the experience elicited different subjective factors and a different perception of consistency with the real world.

Table 3 Subjective responses

Discussion

This paper presents an experimental VR system that enables head tracking through a fiducial marker-based solution that is accurate, robust, and fast enough to allow natural navigation in the VE by walking in the real world without causing relevant sickness or vertigo. The system has been shown to elicit a significantly higher sense of presence than a CAVE system.

Fiducial marker tracking has been extensively used in augmented reality applications to estimate the camera pose with high accuracy and low computational cost, thus allowing virtual elements to be added with the correct location and orientation in real time [33, 37, 38]. Different studies have determined the accuracy of the position estimation from a single marker; reported errors vary from 11 cm [39] to 1.4 cm at 1 m [40]. The use of a more reliable tracking library in combination with multiple markers, which define an overdetermined system of equations, could explain the higher accuracy achieved by our solution, even at larger distances [33]. In accordance with previous reports [39, 40], not only the accuracy but also the jitter were distance dependent. The worst results were obtained on the floor, while estimations at heights simulating the sitting and standing positions were very similar. It is important to highlight that jitter values at those heights were less than 1 mm, which can be considered negligible given that the jitter was imperceptible while wearing the HMD. Far from being still, the head is continuously stabilized in space to provide a steady reference, not only during standing but also during walking [41], which could have made it impossible to differentiate the natural head sway from the sway caused by the jitter of the tracking system. Our results could evidence a trade-off between the distance to the markers and the number of markers taken into account in the calculation. Interestingly, the number of markers detected proved to be almost independent of the speed of the head. Even at speeds faster than what is considered a comfortable walking speed (0.9–1.3 m/s) [42], a minimum of four markers were detected at every moment. With regard to the lag of the system, experimental results showed values only slightly higher than the minimum duration that, according to Bloch's law [43], visual stimuli should last to be physiologically detected independently of their intensity. It is also important to highlight that the lag of the system was smaller than that reported to cause sickness [44, 45]. These results should also be emphasized because delayed visual feedback has been shown to affect performance, but not markedly for delays of 120 ms [46].

The low jitter and lag values can be especially relevant because visual manipulation in VR can cause a sensory mismatch that has been shown to produce postural instability. The visual-vestibular conflict produced by rotating the world around the head has been shown to increase spatial disorientation [47]. Furthermore, the direction and velocity of the visual flow in the VE modulate the postural reorienting responses in the real world [48], which, in turn, have been shown to depend on visual stimuli that are specific to the display device [49]. In addition, sensory mismatch has been reported to be an important cause of sickness in VR [44, 50, 51]. The absence of serious reports of sickness or vertigo in our study could also highlight the performance of the tracking system.

In contrast to our study, previous reports have shown that CAVE systems facilitate a greater sense of presence than HMDs [14, 34, 52, 53]. The technological advances of current HMDs and the natural navigation in the walking VR system could have promoted the higher sense of presence in the walking VR system, which could explain the differences regarding interaction and consistency. Also supporting this, participants reported that visual aspects were similar in both systems but that navigation caused the strongest differences. Interestingly, the match between proprioceptive information from human body movements and computer-generated sensory stimulation has been reported to modulate presence in VEs [14]. Our results are also in accordance with the role of immersion in presence, which is expected to contribute to increasing this sense [3, 54–56].

All of these findings can be especially interesting for VR-based rehabilitation applications that involve navigation, because the solution presented here makes it possible to replace navigation metaphors based on upper-limb movements [57] or joysticks [5] with natural walking in the real world, which can have special implications for spatial and visual memory, orientation, and spatial cognition [58]. It is important to highlight that the tracking area can be resized to fit wider working areas while preserving the same accuracy and jitter characteristics, which could enable street-crossing or shopping simulations at full scale, thus enhancing the ecological validity of the simulation [25].

Limitations of this study must be taken into account. First, the characteristics of the sample, healthy young adults, could limit the extrapolation of the results to other populations. Second, the visual estimation of the lag may be subject to errors; in addition, the different sources of the lag were not differentiated. Third, no avatar was used to represent the participants in the VE, which has been shown to modulate subjective factors in VR [59–61]. Fourth, although significant, the clinical relevance of the differences detected between the two systems is unknown. Even though different attempts have been made to determine objective correlates of the sense of presence [62, 63], the subjectivity of this sense makes its assessment a challenge for researchers. Finally, the weight of the wearable devices and the particular characteristics of the visual display could alter the gait kinematics while walking in the VR system. Further research should address these issues.

However, the high performance of the head tracking, which provided accurate and robust localization of the head with imperceptible lag, and the visual stimulation provided by a last-generation HMD allowed natural locomotion in VR without causing remarkable sickness and while promoting a higher sense of presence. These characteristics, together with the modularity of the system, enable natural walking in full-scale VEs, which can enhance the ecological validity of VR-based rehabilitation applications.

Conclusions

This paper presents an experimental VR system that enables head tracking through a fiducial marker-based solution that is accurate, robust, and fast enough to allow navigation in the VE by walking in the real world, without causing relevant sickness or vertigo and while promoting a higher sense of presence than a CAVE system. These characteristics, together with the modularity of the system, enable natural walking in full-scale VEs, which can enhance the ecological validity of VR-based rehabilitation applications.

Abbreviations

CAVE, cave automatic virtual environment; HMD, head mounted display; VE, virtual environment; VR, virtual reality

References

  1. Lee KM. Presence, explicated. Communication Theory. 2004;14(1):27–50.
  2. Riva G. Is presence a technology issue? Some insights from cognitive sciences. Virtual Reality. 2009;13(3):159–69.
  3. Banos RM, et al. Immersion and emotion: their impact on the sense of presence. Cyberpsychol Behav. 2004;7(6):734–41.
  4. Llorens R, et al. Tracking systems for virtual rehabilitation: objective performance vs. subjective experience. A practical scenario. Sensors (Basel). 2015;15(3):6586–606.
  5. Navarro MD, et al. Validation of a low-cost virtual reality system for training street-crossing. A comparative study in healthy, neglected and non-neglected stroke individuals. Neuropsychol Rehabil. 2013;23(4):597–618.
  6. Parsons TD. Virtual reality for enhanced ecological validity and experimental control in the clinical, affective and social neurosciences. Front Hum Neurosci. 2015;9:660.
  7. Cameirao MS, et al. Neurorehabilitation using the virtual reality based Rehabilitation Gaming System: methodology, design, psychometrics, usability and validation. J Neuroeng Rehabil. 2010;7:48.
  8. Llorens R, et al. Improvement in balance using a virtual reality-based stepping exercise: a randomized controlled trial involving individuals with chronic stroke. Clin Rehabil. 2015;29(3):261–8.
  9. Llorens R, et al. Videogame-based group therapy to improve self-awareness and social skills after traumatic brain injury. J Neuroeng Rehabil. 2015;12:37.
  10. Fong KN, et al. Usability of a virtual reality environment simulating an automated teller machine for assessing and training persons with acquired brain injury. J Neuroeng Rehabil. 2010;7:19.
  11. Levin MF, Weiss PL, Keshner EA. Emergence of virtual reality as a tool for upper limb rehabilitation: incorporation of motor control and motor learning principles. Phys Ther. 2015;95(3):415–25.
  12. Llorens R, et al. Effectiveness, usability, and cost-benefit of a virtual reality-based telerehabilitation program for balance recovery after stroke: a randomized controlled trial. Arch Phys Med Rehabil. 2015;96(3):418–25.e2.
  13. Cruz-Neira C, et al. Scientists in wonderland: a report on visualization applications in the CAVE virtual reality environment. In: Proceedings of the IEEE 1993 Symposium on Research Frontiers in Virtual Reality. 1993.
  14. Juan MC, Perez D. Comparison of the levels of presence and anxiety in an acrophobic environment viewed via HMD or CAVE. Presence. 2009;18(3):232–48.
  15. Yang YR, et al. Virtual reality-based training improves community ambulation in individuals with stroke: a randomized controlled trial. Gait Posture. 2008;28(2):201–6.
  16. Cho KH, Lee WH. Virtual walking training program using a real-world video recording for patients with chronic stroke: a pilot study. Am J Phys Med Rehabil. 2013;92(5):371–84.
  17. Darter BJ, Wilken JM. Gait training with virtual reality-based real-time feedback: improving gait performance following transfemoral amputation. Phys Ther. 2011;91(9):1385–94.
  18. Yang S, et al. Improving balance skills in patients who had stroke through virtual reality treadmill training. Am J Phys Med Rehabil. 2011;90(12):969–78.
  19. Walker ML, et al. Virtual reality-enhanced partial body weight-supported treadmill training poststroke: feasibility and effectiveness in 6 subjects. Arch Phys Med Rehabil. 2010;91(1):115–22.
  20. Riley PO, et al. A kinematic and kinetic comparison of overground and treadmill walking in healthy subjects. Gait Posture. 2007;26(1):17–24.
  21. Alton F, et al. A kinematic comparison of overground and treadmill walking. Clin Biomech. 1998;13(6):434–40.
  22. Lee SJ, Hidler J. Biomechanics of overground vs. treadmill walking in healthy individuals. J Appl Physiol. 2008;104(3).
  23. Slater M. Measuring presence: a response to the Witmer and Singer presence questionnaire. Presence. 1999;8(5):560–5.
  24. Viau A, et al. Reaching in reality and virtual reality: a comparison of movement kinematics in healthy subjects and in adults with hemiparesis. J Neuroeng Rehabil. 2004;1(1):11.
  25. Parsons TD, et al. The potential of function-led virtual environments for ecologically valid measures of executive function in experimental and clinical neuropsychology. Neuropsychol Rehabil. 2015;11:1–31. doi:10.1080/09602011.2015.1109524.
  26. Aravind G, Lamontagne A. Perceptual and locomotor factors affect obstacle avoidance in persons with visuospatial neglect. J Neuroeng Rehabil. 2014;11:38.
  27. Darekar A, Lamontagne A, Fung J. Dynamic clearance measure to evaluate locomotor and perceptuo-motor strategies used for obstacle circumvention in a virtual environment. Hum Mov Sci. 2015;40:359–71.
  28. Whittle MW. Chapter 4 - Methods of gait analysis. In: Whittle MW, editor. Gait analysis. Edinburgh: Butterworth-Heinemann; 2007. p. 137–75.
  29. Hodgson E, et al. WeaVR: a self-contained and wearable immersive virtual environment simulation system. Behav Res Methods. 2015;47(1):296–307.
  30. Akizuki H, et al. Effects of immersion in virtual reality on postural control. Neurosci Lett. 2005;379(1):23–6.
  31. Thies SB, et al. Comparison of linear accelerations from three measurement systems during "reach & grasp". Med Eng Phys. 2007;29(9):967–72.
  32. Fiala M. Designing highly reliable fiducial markers. IEEE Trans Pattern Anal Mach Intell. 2010;32(7):1317–24.
  33. Garrido-Jurado S, et al. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognition. 2014;47(6):2280–92.
  34. Kim K, et al. Effects of virtual environment platforms on emotional responses. Comput Methods Programs Biomed. 2014;113(3):882–93.
  35. Slater M, Steed A. A virtual presence counter. Presence. 2000;9(5):413–34.
  36. Witmer BG, Singer MJ. Measuring presence in virtual environments: a presence questionnaire. Presence Teleop Virt. 1998;7(3):225–40.
  37. Martín-Gutiérrez J, et al. Design and validation of an augmented book for spatial abilities development in engineering students. Comput Graph. 2010;34(1):77–91.
  38. Lopez-Mir F, et al. Design and validation of an augmented reality system for laparoscopic surgery in a real environment. Biomed Res Int. 2013;2013:758491.
  39. Abawi DF, Bienwald J, Dorner R. Accuracy in optical tracking with fiducial markers: an accuracy function for ARToolKit. In: Third IEEE and ACM International Symposium on Mixed and Augmented Reality, ISMAR 2004. 2004.
  40. Malbezin P, Piekarski W, Thomas BH. Measuring ARToolKit accuracy in long distance tracking experiments. In: The First IEEE International Workshop on the Augmented Reality Toolkit. 2002.
  41. Paquette C, Paquet N, Fung J. Aging affects coordination of rapid head motions with trunk and pelvis movements during standing and walking. Gait Posture. 2006;24(1):62–9.
  42. Graham JE, et al. Walking speed threshold for classifying walking independence in hospitalized older adults. Phys Ther. 2010;90(11):1591–7.
  43. Gorea A. A refresher of the original Bloch's law paper (Bloch, July 1885). i-Perception. 2015;6(4).
  44. Moss JD, Muth ER. Characteristics of head-mounted displays and their effects on simulator sickness. Hum Factors. 2011;53(3):308–19.
  45. Draper MH, et al. Effects of image scale and system time delay on simulator sickness within head-coupled virtual environments. Hum Factors. 2001;43(1):129–46.
  46. Fujisaki W. Effects of delayed visual feedback on grooved pegboard test performance. Front Psychol. 2012;3:61.
  47. Keshner EA, et al. Augmenting sensory-motor conflict promotes adaptation of postural behaviors in a virtual environment. Conf Proc IEEE Eng Med Biol Soc. 2011;2011:1379–82.
  48. Slaboda JC, Keshner EA. Reorientation to vertical modulated by combined support surface tilt and virtual visual flow in healthy elders and adults with stroke. J Neurol. 2012;259(12):2664–72.
  49. Tossavainen T. Comparison of CAVE and HMD for visual stimulation in postural control research. Stud Health Technol Inform. 2004;98:385–7.
  50. Akiduki H, et al. Visual-vestibular conflict induced by virtual reality in humans. Neurosci Lett. 2003;340(3):197–200.
  51. Duh HBL, et al. Effects of field of view on balance in an immersive environment. In: Proceedings IEEE Virtual Reality 2001. 2001.
  52. Krijn M, et al. Treatment of acrophobia in virtual reality: the role of immersion and presence. Behav Res Ther. 2004;42(2):229–39.
  53. Mania K, Chalmers A. The effects of levels of immersion on memory and presence in virtual environments: a reality centered approach. Cyberpsychol Behav. 2001;4(2):247–64.
  54. Gorini A, et al. The role of immersion and narrative in mediated presence: the virtual hospital experience. Cyberpsychol Behav Soc Netw. 2011;14(3):99–105.
  55. Fromberger P, et al. Virtual viewing time: the relationship between presence and sexual interest in androphilic and gynephilic men. PLoS One. 2015;10(5):e0127156.
  56. Slater M, et al. Visual realism enhances realistic response in an immersive virtual environment. IEEE Comput Graph Appl. 2009;29(3):76–84.
  57. Nir-Hadad SY, et al. A virtual shopping task for the assessment of executive functions: validity for people with stroke. Neuropsychol Rehabil. 2015;11:1–26. doi:10.1080/09602011.2015.1109523.
  58. Vasilyeva M, Lourenco SF. Development of spatial cognition. Wiley Interdiscip Rev Cogn Sci. 2012;3(3):349–62.
  59. Banakou D, Groten R, Slater M. Illusory ownership of a virtual child body causes overestimation of object sizes and implicit attitude changes. Proc Natl Acad Sci U S A. 2013;110(31):12846–51.
  60. Yee N, Bailenson JN, Ducheneaut N. The Proteus effect: implications of transformed digital self-representation on online and offline behavior. Commun Res. 2009;36(2):285–312.
  61. Baylor AL. Promoting motivation with virtual agents and avatars: role of visual presence and appearance. Philos Trans R Soc Lond B Biol Sci. 2009;364(1535):3559–65.
  62. Clemente M, et al. Assessment of the influence of navigation control and screen size on the sense of presence in virtual reality using EEG. Expert Syst Appl. 2014;41(4, Part 2):1584–92.
  63. Clemente M, et al. An fMRI study to analyze neural correlates of presence during virtual reality experiences. Interacting with Computers. 2013.


Acknowledgements

The authors wish to thank the staff of LabHuman for their support in this project, especially José Miguel Martínez and José Roda for their assistance.

Funding

This study was funded in part by Ministerio de Economía y Competitividad of Spain (Project NeuroVR, TIN2013-44741-R and Project REACT, TIN2014-61975-EXP), by Ministerio de Educación y Ciencia of Spain (Project Consolider-C, SEJ2006-14301/PSIC), and by Universitat Politècnica de València (Grant PAID-10-14).

Authors' contributions

All the authors designed the study and interpreted the results. In addition, AB and JL contributed to the data acquisition, and RL, AB, and JL designed the hardware and software components of the system. All the authors have revised the manuscript and have given their final approval for publication.

Competing interests

The authors declare no potential conflicts of interest, including relevant financial interests, activities, and relationships.

Consent for publication

The participants provided informed consent to publish both data and images.

Ethics approval and consent to participate

The study was approved by the Institutional Review Board of Universitat Politècnica de València. All the participants provided written consent to participate in the study.

Author information

Corresponding author

Correspondence to Roberto Llorens.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Borrego, A., Latorre, J., Llorens, R. et al. Feasibility of a walking virtual reality system for rehabilitation: objective and subjective parameters. J NeuroEngineering Rehabil 13, 68 (2016). https://doi.org/10.1186/s12984-016-0174-1
