Although there is much potential for the use of immersive virtual reality environments, there are problems which could limit their ultimate usability for assessment and rehabilitation. Some users have experienced side-effects during and after exposure to virtual reality environments. The symptoms experienced by these users are similar to those which have been reported by users of simulators with wide field-of-view displays, and have been collectively referred to as simulator sickness.

Problems of disorientation and nausea in simulators and virtual reality environments have been ascribed to sensory conflict, or the discrepancy between visual and vestibular information about body orientation and motion (Kennedy et al, 1995). Lags between head movements in an immersive virtual reality environment and the corresponding movement of the displayed images are one source of conflict between visual and vestibular perception of self-motion. Lags between the sensing of head and limb position and the movements of images have also been shown to have a direct effect on the performance of tracking, manipulation and reading tasks in simulators and virtual reality systems.

The above problems could limit the usability of virtual reality environments for assessment and rehabilitation.

Many of the undesirable effects of exposures to virtual reality environments can be ascribed to limitations in the temporal performance of the system hardware. These limitations result in lags between movements of the head or hands and corresponding movements of displayed images. Susceptibility to side-effects can be affected by age, ethnicity, experience, gender, state of health and postural stability, as well as the characteristics of the display, the virtual environment and the tasks (Pausch et al, 1992; Kolasinski, 1995).

1. Effects of lags and other factors on operator performance

1.1 Effects of lags on performance

1.2 Factors influencing the effects of lags on performance

2. Side effects of exposure to VEs

2.1 Disorientation, nausea and cue conflict in VEs

2.2 Factors influencing side effects of VEs


1. Effects of lags and other factors on operator performance


1.1 Effects of lags on performance

Experimental studies have investigated the effects of lags in head-coupled systems on tasks involving tracking virtual targets with the head, tracking and manipulating virtual targets with the hands, simulated vehicle control, and target search and recognition. Findings are summarised in the following four sections.


Target capture and tracking using the head

So and Griffin (1995b) investigated the effects of lags on the time taken to capture a stationary target using a head-slaved aiming reticle. Subjects were required to place the aiming reticle inside a target circle as quickly and accurately as possible, and to keep it inside the circle for at least 350 ms. Capture time was significantly increased by imposing an additional lag of 67 ms on a head-coupled system with a basic system lag of 40 ms.

In an experiment involving the tracking of a continuously moving target, tracking errors were significantly increased by imposing an additional display lag of only 40 ms on a 40 ms basic system lag (So and Griffin, 1992, 1995a, 1995c). The correlation between the head motion and the target motion also decreased with increasing lag, particularly at higher motion frequencies (So and Griffin, 1995c). So and Griffin (1995b) showed that it is possible to simulate the effects of lags on continuous tracking with a head-coupled system by adding an additional signal to the target motion, equivalent to the additional image movements induced by the system lag. These results suggest that it is the position errors of the images on the display, due to the combination of head movements and system lag (Bryson and Fisher, 1990), that are primarily responsible for the measured decrements in tracking performance.


Capture and manipulation of virtual objects by the hands

In the experiments by So and Griffin (1992, 1995a, 1995b) the head was used as a controlling device for following visual targets. Time lags also degrade the performance of tasks in virtual environments which involve manipulation and control by the hands. Pure lags of 100 ms or more have been shown to degrade performance of a simulated tele-operated manual control task and a pick-and-place task, which were both controlled by a pair of joysticks (Liu et al, 1991, 1993; Tharp et al, 1992). The tracking task involved keeping a ball inside a box, which moved along an unpredictable three-dimensional trajectory. The pick-and-place task involved placing four balls into appropriately sized boxes. There was an approximately linear increase with increasing delay in both tracking error and pick-and-place completion time.

The tasks in the experiments described by Liu et al (1991, 1993) were viewed on a head-slaved head-mounted display and the lags affected both the visual display and the control task. The tasks simulated a tele-robotic system in which head orientation and joystick positions were relayed to a remote site. The head orientation (which was subject to a communication delay) controlled the direction of a camera, which relayed an image of a robot manipulator back to the operator via the head-mounted display. The movements of the manipulator were slaved to the joystick commands (which were also subject to a communication delay).

Liu et al (1991, 1993) also investigated the effect of display update rates, in the range 0.1 to 100 frames per second, on the tracking task described above. This experiment closely resembles an immersive virtual reality environment involving a manipulation task, in which the lags are caused by position sensing and rendering delays as described in Section 2.1. With three inexperienced subjects, the root-mean-square tracking error was significantly increased by update rates below 10 frames per second. However, two experienced subjects were able to maintain performance of the tracking task with update rates down to 2 frames per second, suggesting that more experienced subjects are better able to adapt to degraded conditions.

Richard et al (1996) investigated the effect of display update rates between 1 and 28 frames per second on the manipulation of virtual objects by 160 subjects. Hand position was measured using a data glove. The task was displayed either monoscopically on a CRT monitor, or stereoscopically on an LCD head-mounted display. The task was not immersive, since the position and orientation of the images were not adjusted according to head position. The task involved tracking and grabbing a target ball, which moved in three dimensions within a virtual room. There were no significant differences in task completion times for frame rates between 14 and 28 frames per second on either display. Performance was progressively degraded at lower frame rates, although the degradation was greater with the monoscopic display.

With update rates of 14 frames per second and higher, performance was consistent across ten consecutive trials. With lower update rates the capture times were initially high, but decreased with experience. The difference in performance between the first and the last trial was smaller with the stereoscopic display. The authors suggested that high frame rates and the stereoscopic display required less adaptation, since these conditions are closer to natural human interaction. With frame rates of 14 and 28 frames per second the subjects moved their hands continuously in space, but with frame rates of less than 7 frames per second their tracking movements became saccadic with both displays.


Simulated vehicle control

Frank et al (1988) found that lags of 170 ms or greater in either the visual system or the motion system of a driving simulator significantly degraded vehicle control, although the visual delays were found to be more detrimental to performance than the motion delays. The simulator used a wide field-of-view fixed display. The lags determined the time between making a controlling action and corresponding movements of the vehicle on the display, so the lags primarily affected the control loop.


Target search and recognition

Lewis et al (1987) have reported the effect of lags on a character search and recognition task in the yaw axis using a head-coupled display. Random alphanumeric characters were generated by a computer system at positions from -70° to +70° from the boresight along the horizontal axis. The computer-generated image was slaved to the line of sight of a head-mounted display with a 17° horizontal field of view. Seven exponential display lags (with time constants between 3 ms and 318 ms) were imposed between movements of the head and the displayed image. Consistent increases in reading time were observed with exponential lags greater than 40 ms.


1.2 Factors influencing the effects of lags on performance

The results of the experimental studies show that the effects of lags on the performance of tasks using head-coupled displays depend on factors including the frequency and magnitude of head movements, the field of view of the display, and the depth cues provided by the display. The findings are summarised in the following three sections.


Head movements

Lags in immersive virtual reality systems with space-stabilised displays will induce image position errors which are proportional to the velocity of head movement (Bryson and Fisher, 1990). When following targets with the head, the head movement velocity will be largely determined by the angular velocity of the target. So and Griffin (1995a) showed that, with system delays between 40 ms and 200 ms, errors in the tracking of continuously moving targets increased in proportion to target velocity. In this case, the effect of faster head movements will have been confounded by the increased difficulty of the task due to the faster target motions.
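The proportionality noted by Bryson and Fisher (1990) can be sketched as a one-line estimate. The example values below are illustrative, not measurements taken from the studies discussed here:

```python
def image_position_error_deg(head_velocity_deg_s: float, lag_s: float) -> float:
    """Angular error of a space-stabilised image during a head movement:
    the displayed image trails the head by roughly velocity * delay."""
    return head_velocity_deg_s * lag_s

# Illustrative values: a 100 deg/s head turn viewed through a 107 ms total lag
# (the 40 ms basic lag plus the 67 ms additional lag used by So and Griffin, 1995b).
error = image_position_error_deg(100.0, 0.107)  # 10.7 degrees of error
```

This simple relationship is why both faster head movements and longer lags increase the displayed position errors.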

Grunwald et al (1991) found that in a simulated helicopter flying task, involving flying down a winding canyon, tracking errors were increased by 116% when subjects were required to move their heads to capture secondary targets. The system used in this experiment had an update rate of 15 frames per second, which would have resulted in a visual delay of about 67 ms. The authors suggested that the increased errors were due to larger display lags during head movements, but performance may also have been affected by the secondary task.
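The conversion used above (15 frames per second giving a delay of about 67 ms) treats one full frame interval as a first-order estimate of the visual delay. A sketch, ignoring any additional sensing and rendering overhead:

```python
def visual_delay_ms(update_rate_hz: float) -> float:
    # First-order estimate: one full frame interval elapses before the
    # display can reflect a new head position (other overheads ignored).
    return 1000.0 / update_rate_hz

print(round(visual_delay_ms(15)))  # 67 ms, as estimated for Grunwald et al (1991)
print(round(visual_delay_ms(2)))   # 500 ms, the lowest rate tolerated by
                                   # experienced subjects in Liu et al (1991, 1993)
```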

Liu et al (1991a) have reported that the performance of a manual control task with a head-slaved display is strongly dependent on the frequency at which subjects must move their heads. In the experiments reported by Liu et al (1991, 1993) and Tharp et al (1992) the limited field of view of the head-mounted display (22°) constrained the subjects to turn their heads to keep the task in view on the display. However, it was reported that as the system delay was increased, subjects increasingly inhibited their head movements because of the de-coupling between head position and the position of the displayed image.

The results of the above experiments show that the performance of tasks in virtual reality systems with space-stabilised displays will become more difficult as the frequency and magnitude of head movements increase. The effect of head movements on task difficulty will increase with increasing system lag. As the system lag increases, users will increasingly adopt strategies which minimise head movements. Assessment of the likely effects of image lags in a particular head-coupled system therefore requires a knowledge of the pattern of head movements required by users.


Field of view

The field of view of a head-mounted display can affect the frequency and velocity of the head movements made by an observer, and may consequently affect the extent of the image displacements caused by lags. Wells and Venturino (1990) reported a significant interaction between field of view (between 20° and 120°) and the number of displayed targets in their effects on the number of hits and the subjects' response times in a target acquisition task. There was no effect of field of view on performance with three targets, but performance was significantly degraded by fields of view of 60° or less when nine targets were displayed. Subjects were observed to move their heads less with the larger fields of view, since more targets were simultaneously visible on the display, allowing them to be monitored more effectively using only eye movements. Head movements made with the larger fields of view were observed to be faster than those made with smaller fields of view.

Grunwald et al (1991) and Woodruff et al (1986) compared the accuracy of simulated flying tasks between wide conventional displays and helmet-mounted displays (slaved to the line of sight) with narrower fields of view. The accuracy of flying down a narrow canyon (Grunwald et al, 1991) was reported to be greater with a 23° HMD than with a 102° fixed display, unless the pilots were forced to make head movements by a secondary target capture task. However, Woodruff et al (1986) reported smaller horizontal position errors in a simulated aerial refuelling task with a 300° projection display than with a 40° field-of-view HMD.

Wider fields of view reduce the requirement for users to make head movements to keep a task in view, and may therefore reduce image position errors caused by system lags.


2. Side effects of exposures to VEs

Immersion in virtual reality environments has been shown to result in side-effects similar to those which have been reported during and after exposures to simulators with wide field-of-view displays (Cobb et al, 1995; Regan, 1995; Regan and Price, 1994; Kennedy et al, 1995). These side-effects have been collectively referred to as "simulator sickness" (Kennedy et al, 1989).

Simulator sickness is characterised by three classes of symptoms:

(a) ocular problems such as eyestrain, blurred vision and fatigue (Mon-Williams et al, 1993; Regan and Price, 1993b; Rushton et al, 1994),

(b) disorientation (Kennedy et al, 1993), and

(c) nausea (Cobb et al, 1995; Kennedy et al, 1995; Kolasinski, 1995).

After-effects, such as visual flashbacks and balance disturbances have occasionally occurred up to 12 hours after exposure (Kennedy et al, 1993; Kennedy et al, 1995).

Symptoms of motion sickness and postural disturbances have been reported to be higher in simulators employing space-stabilised head-mounted displays (i.e. immersive virtual reality systems) than in simulators with dome-based projection systems or fixed CRT displays (Kennedy et al, 1995).


2.1 Disorientation, nausea and cue conflict in virtual reality environments

Problems of disorientation and nausea in simulators and virtual reality systems are similar to the symptoms of motion sickness and both are generally considered to be caused by conflicts between the information received from two or more sensory systems (Reason and Brand, 1975). Sensory conflict alone is not sufficient to explain the development of simulator sickness, since it takes no account of the adaptive capabilities of the human body. To take account of habituation, the concept of sensory conflict has been extended into a theory of sensory rearrangement (Reason and Brand, 1975) which states that: "all situations which provoke motion sickness are characterised by a condition of sensory rearrangement in which the motion signals transmitted by the eyes, the vestibular system and the non-vestibular proprioceptors are at variance either with one another or with what is expected from previous experience".

There are commonly assumed to be two main categories of sensory conflict which can cause motion sickness: inter-modality (between the eyes and the vestibular receptors) and intra-modality (between the semi-circular canals and the otoliths in the vestibular system). Lags between the user's head movements and the corresponding movement of the displayed image are a source of conflict between visual and vestibular perception of self-motion.

Self-motion (i.e. motion of the body) can be perceived by visual, vestibular and somatosensory mechanisms (Howard, 1986). In the absence of visual cues (i.e. in the dark or when blind-folded) the orientation and motion of the head is detected by the vestibular apparatus (Barnes, 1980). Young and Oman (1969) have described the dynamics of the vestibular sensation of angular head movements in terms of a mathematical model which describes the relationship between subjective velocity and the angular velocity of the head.
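These vestibular dynamics can be illustrated with a minimal numerical sketch. This is not the full Young and Oman model (which includes an adaptation term); the semi-circular canals respond roughly as a high-pass filter of angular velocity, so the sensation of a sustained constant rotation decays towards zero. The time constant used below is illustrative only:

```python
def sensed_yaw_velocity(omega_head, dt, tau=16.0):
    """First-order high-pass sketch of canal dynamics: sensed velocity
    tracks changes in head velocity but decays during constant rotation.
    tau (seconds) is an illustrative time constant, not a fitted parameter."""
    sensed, y, prev = [], 0.0, 0.0
    alpha = tau / (tau + dt)
    for w in omega_head:
        y = alpha * (y + w - prev)  # discrete first-order high-pass filter
        prev = w
        sensed.append(y)
    return sensed

# A sustained 60 deg/s rotation: the sensation fades over tens of seconds.
trace = sensed_yaw_velocity([60.0] * 1000, dt=0.1)
```

The decay towards zero during constant rotation is one reason why vestibular cues alone cannot signal slow, sustained self-motion, and why the visual system contributes the low-frequency component discussed below.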

In the absence of head motions, illusions of self-motion can be produced by moving images on a display. Illusions of self-rotation, caused by rotating scenes, are referred to as circular vection. Illusions of translational self-motion, caused by scenes moving in a flat plane, are referred to as linear vection. Illusions of self-motion can also be induced by somatosensory mechanisms, such as when walking in the dark, on a rotating platform, in the opposite direction to the platform motion (Boff and Lincoln, 1988).

When visual and vestibular motion cues are both present, there is an important non-linear interaction whereby rapidly occurring conflicts between visual and vestibular cues, especially those involving direction disparities, result in a precipitous decline in visually-induced vection and a temporary domination by the vestibular response (Young et al, 1973). Zacharias and Young (1981) have suggested that rotational self-motion is estimated by combining complementary visual and vestibular cues: low frequency visual cues are used to augment high frequency vestibular cues giving the capability of sensing head motion over a wide range of frequencies.

A series of experiments were reported by Zacharias and Young (1981) in which subjects were required to null their own sense of self-motion in the yaw axis while seated in a modified small aircraft training simulator. Both the motion of the trainer and the motion of the visual field (a pattern of vertical stripes subtending a horizontal angle of 60 degrees) were manipulated. The results were used to determine the parameters of a model of visual-vestibular interaction with parallel channels for visual and vestibular pathways. The relative gains of the two channels were adjusted according to the level of agreement between the visual and vestibular cues.

In the model by Zacharias and Young (1981), when the conflict between visual and vestibular signals is high, the relative weighting given to the vestibular input increases. The cue conflict is calculated from the difference between the vestibular system's estimate of angular velocity, ω_ves, and an appropriately filtered visual signal, ω_vis. This visual signal represents an internal model of what the semi-circular canal signals would be if the visual field velocity were representative of head motion in a stationary field. The effect of this cue-conflict adjustment of the weighting function, which makes the model non-linear, is to emphasise visual signals when the vestibular signals are either highly variable or unlikely to present meaningful information about the relatively slow changes in true body velocity. When sudden changes in visual field velocity are not in agreement with vestibular signals, the visually induced motion is largely ignored in favour of the vestibular cues. Conflicts between vestibular and visual signals can occur when:

  1. there is visual stimulation in the absence of vestibular stimulation (i.e. vection);
  2. there is a delay between vestibular sensations of motion and corresponding movements of a visual scene;
  3. the motions of a visual scene are distorted compared with motions of the head.
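The conflict-dependent weighting can be illustrated with a toy sketch. This is not the published model (which passes the visual signal through an internal canal model before comparison); the exponential weighting and its gain are illustrative assumptions:

```python
import math

def fuse_yaw_rate(omega_ves: float, omega_vis: float, k: float = 1.0) -> float:
    """Toy cue-fusion rule: the visual channel's weight shrinks as the
    visual-vestibular discrepancy grows, so sudden conflicting visual
    motion is largely ignored in favour of the vestibular cue.
    k is an illustrative conflict gain, not a fitted parameter."""
    conflict = abs(omega_ves - omega_vis)
    w_vis = math.exp(-k * conflict)  # agreement -> trust the visual channel
    return w_vis * omega_vis + (1.0 - w_vis) * omega_ves

fuse_yaw_rate(10.0, 10.0)  # cues agree: the estimate follows both channels
fuse_yaw_rate(0.0, 50.0)   # sudden visual jump: the estimate stays near the
                           # vestibular signal, i.e. the vection is suppressed
```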

Vection has been shown to be highly nauseogenic (Hettinger and Riccio, 1992), and to induce inappropriate postural adjustments in experimental subjects (Lestienne et al, 1977).

2.2 Factors influencing side-effects of virtual reality environments

Factors which have been identified as contributors to simulator sickness in virtual reality systems are shown in the following Table (Frank et al, 1983; Kennedy et al, 1989; Kolasinski, 1995; Pausch et al, 1992). These are divided into characteristics of the user, the system and the user's task. Few systematic studies have been carried out to determine the effects of the characteristics of virtual reality systems on the symptoms of simulator sickness. Hence much of the evidence for the effects of these factors comes from studies of visually-induced motion sickness and motion-induced sickness (i.e. sickness caused by actual vehicle motions), as well as the effects of exposures to simulators.

Table 1. Factors contributing to simulator sickness in virtual environments
User characteristics

physical characteristics

  • age
  • gender
  • ethnic origin
  • postural stability
  • state of health


experience

  • with virtual reality system
  • with corresponding real-world task

perceptual characteristics

  • flicker fusion frequency
  • mental rotation ability
  • perceptual style
System characteristics


display characteristics

  • contrast
  • flicker
  • luminance level
  • phosphor lag
  • refresh rate
  • resolution

system lags

  • time lag
  • update rate
Task characteristics

movement through virtual environment

  • control of movement
  • speed of movement

visual image

  • field of view
  • scene content
  • vection
  • viewing region
  • visual flow

interaction with task

  • duration
  • head movements
  • sitting vs. standing


User characteristics

Physical characteristics: Age has been shown to affect susceptibility to motion-induced sickness. Susceptibility is greatest between the ages of 2 and 12 years, tends to decrease rapidly from 12 to 21 years, and then decreases more slowly through the remainder of life (Reason and Brand, 1975).

Females tend to be more susceptible to motion sickness than males. The difference might be due to anatomical differences or to an effect of hormones (Griffin, 1990). In a study of sea-sickness on one ship, vomiting occurred among 14.1% of female passengers but only 8.5% of male passengers (Lawther and Griffin, 1986).

Ethnic origin may affect susceptibility to visually-induced motion sickness. Stern et al (1993) have presented experimental evidence to show that Chinese women may be more susceptible than European-American or African-American women to visually-induced motion sickness. A rotating optokinetic drum was used to provoke motion sickness. The Chinese subjects showed significantly greater disturbances in gastric activity and reported significantly more severe motion sickness symptoms. It is unclear whether this effect is due to environmental or genetic factors.

Postural stability has been shown to be affected by exposure to virtual reality environments and simulators (Kennedy et al, 1993, 1995). Kolasinski (1995) has presented evidence to show that less stable individuals may be more susceptible to simulator sickness. Pre-simulator postural stability measurements were compared with post-simulator sickness data in Navy helicopter pilots. Postural stability was found to be associated with symptoms of nausea and disorientation but not with ocular disturbances.

The state of health of an individual may affect susceptibility to simulator sickness. It has been recommended that individuals should not be exposed to virtual reality environments when suffering from health problems including flu, ear infection, hangover or sleep loss, or when taking certain medications that may affect visual or vestibular function (Frank et al, 1983; Kennedy et al, 1987, 1993; McCauley and Sharkey, 1992). Regan and Ramsey (1994) have shown that drugs such as hyoscine hydrobromide can be effective in reducing symptoms of nausea (as well as stomach awareness and eyestrain) during immersion in virtual reality.

Experience: The incidence of nausea and postural problems has been shown to decrease with increased prior experience in both simulators (Crowley, 1987) and immersive virtual reality environments (Regan, 1995). Frank et al (1983) have suggested that although adaptation reduces symptoms during immersion, re-adaptation to the normal environment could lead to a greater incidence of post-immersion symptoms. Kennedy et al (1989) have also suggested that adaptation cannot be advocated as the technological answer to the problem of sickness in simulators, since adaptation is a form of learning involving the acquisition of incorrect or maladaptive responses which could place aircraft or individuals at risk.

Pilots with more flight experience may be generally more prone to simulator sickness (Kennedy et al, 1987). This may be due to their greater experience of flight conditions, leading to greater sensitivity to discrepancies between actual and simulated flight, or because they have a smaller degree of control when acting as instructors in simulators (Pausch et al, 1992).

Perceptual characteristics: Perceptual characteristics which have been suggested to affect susceptibility to simulator sickness include perceptual style, or field independence (Kennedy, 1975; Kolasinski, 1995), mental rotation ability (Parker and Harm, 1992), and level of concentration (Kolasinski, 1995).


System characteristics

Characteristics of the display: The luminance, contrast and resolution of the display should be matched to the task to be performed in order to achieve optimum performance (Pausch et al, 1992). Low spatial resolution can lead to problems of temporal aliasing, similar to those caused by low frame rates (Edgar and Bex, 1995).

Flicker in the display has been cited as a contributor to simulator sickness (Frank et al, 1983; Kolasinski, 1995; Pausch et al, 1992). Flicker is also distracting and contributes to eye fatigue (Pausch et al, 1992). The point at which flicker becomes visually perceptible (i.e. the flicker fusion frequency threshold) is dependent on the refresh rate, luminance and field-of-view. As the level of luminance increases, the refresh rate must also increase to prevent flicker. Increasing the field of view increases the probability that an observer will perceive flicker since the peripheral visual system is more sensitive to flicker than the fovea. There is a wide range of sensitivities to flicker between individuals, and also a daily variation within individuals (the threshold for detection of flicker tends to reduce at night) (Boff and Lincoln, 1988).

Optical factors, which contribute to oculomotor symptoms reported during exposure to virtual environments, have been discussed by Mon-Williams et al (1993), Regan and Price (1993) and Rushton et al (1994).

System lags: Wloka has suggested that lags of less than 300 ms are necessary to maintain the illusion of immersion in a virtual reality environment, since with longer lags subjects start to dissociate their movements from the associated image motions (Wloka, 1992; Held and Durlach, 1991). It is unclear whether the authors attribute these effects to pure lags or to the system update rates. However, lags of this order, and update rates of the order of 3 frames per second, have both been shown to have large effects on performance and on subjects' movement strategies. The total system lag in the virtual reality system used in the experimental studies reported by Regan (1995) and Regan and Price (1994) was reported to be 300 ms (Regan and Price, 1993c).

There is an urgent need for further research to systematically investigate the effect of a range of system lags on the incidence of simulator sickness symptoms. The interaction between system lag and head movement velocity is likely to be important, since errors in the motion of displayed images are proportional to both the total lag and the head velocity.

Pausch et al (1992) cite data from Westra and Lintern (1985) showing that lags may affect subjective impressions of a simulator more than they affect performance. Simulated helicopter landings were compared with visual lags of 117 ms and 217 ms. The lag had only a small effect on objective performance measures, but pilots believed that the lag had a larger effect than was indicated by the performance measures.

Richard et al (1996) suggested that the frame rate (i.e. the maximum rate at which new virtual scenes are presented to the user) is an important source of perceptual distortions. Low frame rates make objects appear to move in saccades (discrete spatial jumps), so the visual system has to bridge the gaps between perceived positions using spatio-temporal filtering. The resulting sampled motion may produce artefacts, particularly when combined with high image velocities: the coherence of the image motion may be lost, the perceived motion direction may appear to reverse, motion may appear jerky, and multiple images may appear to trail behind the target (Edgar and Bex, 1995). These phenomena are collectively referred to as temporal aliasing. Edgar and Bex (1995) discuss methods for optimising displays with low update rates to minimise this problem.


Characteristics of the task

Movement through the virtual environment: The degree of control over motion has been shown to affect simulator sickness. The incidence of simulator sickness among air-crew has been reported to be lower in pilots (who are most likely to generate the control inputs) than in co-pilots or other crew members (Pausch et al, 1992).

The speed of movement through a virtual environment determines global visual flow (i.e. the rate at which objects flow through the visual scene). The rate of visual flow influences vection and has been shown to be related to the incidence of simulator sickness (McCauley and Sharkey, 1992). Other motion conditions that have been observed to exacerbate sickness in simulators include tasks involving high rates of linear or rotational acceleration, unusual manoeuvres such as flying backwards and freezing or resetting the simulation during exposures (McCauley and Sharkey, 1992).

Regan and Price (1993c) have suggested that the method of movement through the virtual world may affect the level of side-effects. Experiments to investigate side-effects in immersive virtual environments which have been reported by Regan (1995), Regan and Price (1993c, 1994) and Cobb et al (1995) have utilised a 3D mouse to generate movement. This is likely to generate conflict between visual, vestibular and somatosensory senses of body movement. A more natural movement might be provided by coupling movement through a virtual environment to walking on a treadmill (Regan and Price, 1993c).

Visual image: A wider field of view may enhance performance in a simulator, but it also increases the risk of simulator sickness (Kennedy et al, 1989; Pausch et al, 1992), although the effect of field of view is often confounded with other factors (Kennedy et al, 1989). Stern et al (1990) have shown that restricting the width of the visual field to 15 degrees significantly reduces both circular vection and the symptoms of motion sickness induced by a rotating surround with vertical stripes (an optokinetic drum). Fixation on a central point in the visual field also reduces the circular vection induced by rotating stripes observed with peripheral vision, and greatly reduces motion sickness symptoms (Stern et al, 1990). Circular vection increases with increasing stimulus velocity up to about 90 degrees per second (Boff and Lincoln, 1988); further increases in stimulus velocity may inhibit the illusion. Vection is not dependent on acuity or luminance (down to scotopic levels) (Leibowitz et al, 1979).

Linear vection can be induced by a radially expanding pattern of texture points. Anderson and Braunstein (1985) showed that linear vection could be induced by a moving display of radially expanding dots subtending a visual angle as small as 7.5° in the central visual field. They suggested that the type of motion and the texture in the display may be as important as the field of view in inducing vection.

The incidence of simulator sickness has been shown to be related to the rate of global visual flow, or the rate at which objects flow through the visual scene (McCauley and Sharkey, 1992). The direction of self-motion can be derived from the motion pattern of texture points in the visual field (Warren, 1976; Zacharias et al, 1985). The optical flow field appears to expand from a focal point, which indicates the direction of motion. For curved motion the expanding flow field tends to bend sideways, and the focal point is no longer defined.

Grunwald et al (1991) have shown how parasitic image shifts, due to lags in a flight simulator with a head-coupled head-mounted display, distort the visual flow field. In straight and level flight, the parasitic image motions which occur during head movements cause the expanding visual pattern to appear to bend, creating the illusion of a curved flight path. The bending effect is proportional to the ratio between the magnitude of the image shifts and the apparent velocity along the line of sight. The apparent velocity depends on the velocity-to-height ratio, so the angular errors induced by the bending effect increase with decreased velocity and increased altitude.
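The dependence described by Grunwald et al (1991) can be sketched numerically. The formula below is an illustrative approximation, not the authors' derivation; the function name and example values are assumptions made for the sketch:

```python
import math

def flow_bend_deg(image_shift_deg_s: float, speed_m_s: float, height_m: float) -> float:
    """Illustrative estimate of flow-field bending: the parasitic angular
    image motion is compared with the nominal angular flow rate, which is
    set by the velocity-to-height ratio. Lower speed or greater height
    gives a slower nominal flow, hence a larger apparent bend."""
    nominal_flow_deg_s = math.degrees(speed_m_s / height_m)
    return math.degrees(math.atan(image_shift_deg_s / nominal_flow_deg_s))

# Halving speed (or doubling height) increases the apparent bend:
flow_bend_deg(2.0, 10.0, 100.0)  # baseline
flow_bend_deg(2.0, 5.0, 100.0)   # slower flight -> larger apparent curvature
```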

Linear vection has been observed to influence postural adjustments made by subjects in the fore-and-aft direction. Lestienne et al (1977) observed inclinations of subjects in the same direction as the movement of the visual scene, with a latency of 1 to 2.5 s, and an after-effect on the cessation of motion. The amplitude of the postural adjustments was proportional to the image velocity.

Interaction with the task: Exposures of less than 10 minutes to immersive virtual reality environments have been shown to result in significant incidences of nausea, disorientation and ocular problems (Regan and Price, 1993c). Longer exposures to virtual environments can result in an increased incidence of sickness and require longer adaptation periods (McCauley and Sharkey, 1992). The severity of motion-induced sickness symptoms has been shown to increase with the duration of exposure to the provocation, for durations of up to at least 6 hours (Lawther and Griffin, 1986). Kennedy et al (1993) reported that longer exposures to simulated flight increased the intensity and duration of postural disruption.

The extent of image position errors, and of conflicts between visual and vestibular motion cues, depends on the interaction between head motions and the motions of visual images on the display. Head movements in simulators have been reported to be very provocative (Lackner, 1990, reported by Pausch et al, 1992). However, Regan and Price (1993c) found no significant effect of the type of head movement on reported levels of simulator sickness over a ten-minute period of immersion in a virtual environment. Sickness incidence was compared between two ten-minute exposures to an immersive virtual reality environment. One exposure involved pronounced head movements and rapid interaction with the system; during the other, subjects were able to control their head movements and their speed of interaction to suit themselves. There was some evidence that the pronounced head movements initially caused higher levels of symptoms, but that subjects had adapted to the conditions by the end of the exposures. No measurements were made of head movements, so the effect of the instructions given to the subjects on the velocity and duration of head movements is unclear. The system lag was reported to be 300 ms, so even slow head movements might have been expected to produce significant spatio-temporal distortions. There is an urgent need for research that systematically investigates how system lag and head movement velocity interact to influence the incidence of side-effects.
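The scale of these spatio-temporal distortions can be illustrated with a back-of-envelope calculation. The sketch below uses an assumed first-order approximation, not a model from the cited studies: during a head movement, the displayed image trails the head by roughly the product of head angular velocity and system lag.

```python
# Back-of-envelope sketch (assumed first-order approximation, not taken
# from the cited studies): the peak angular mismatch between the head and
# the displayed image is approximated as angular velocity multiplied by
# the end-to-end system lag.

def image_position_error(head_velocity_deg_s, lag_s):
    """Approximate peak head-image angular mismatch, in degrees."""
    return head_velocity_deg_s * lag_s

# With the 300 ms lag reported by Regan and Price (1993c), even a slow
# head turn of 30 deg/s produces a mismatch of about 9 degrees, and a
# rapid turn of 100 deg/s a mismatch of about 30 degrees.
slow_turn = image_position_error(30.0, 0.3)
rapid_turn = image_position_error(100.0, 0.3)
```

This suggests why, at a 300 ms lag, significant distortions would be expected even under the "controlled head movement" instructions.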

The levels of symptoms reported by seated subjects after immersion in a virtual reality environment have been slightly higher than those reported by standing subjects (Regan and Price, 1993c). However, the differences were not statistically significant after ten-minute exposures.
