Self-motion and Presence in the Perceptual Optimization of a Multisensory Virtual Reality Environment
Licentiate thesis, 2005

Determining the perceptually optimal resolution of multisensory rendering may help foster the development of cost-effective, highly immersive multimodal displays for mediated environments (e.g. virtual and augmented reality). The required sensory depth of stimulation can be quantified using human-centered methodologies, where end-user experiences serve as a basis for uni- and cross-modal optimization of the sensory inputs. In the psychophysical studies presented in this thesis, self-reported presence and illusory self-motion (vection) indicated the salience of auditory and multisensory cues in the design of perceptually optimized motion simulators. The contribution of auditory cues to illusory self-motion had been largely neglected until very recently, and Papers A and B present studies on purely auditory-induced vection (AIV). Paper A shows that rotating auditory scenes synthesized using individualized Head-Related Transfer Functions (HRTFs) are more instrumental for presence than scenes rendered with generic binaural synthesis. The study on translational AIV in Paper B shows that an inconsistent auditory scene can significantly decrease self-motion responses. Papers C and D demonstrate that, as expected, bi-sensory stimulation increases presence and self-motion ratings. In Paper C, additional vibrotactile stimulation increased translational AIV and presence ratings, especially for stimuli containing the auditory-tactile engine metaphor. Paper D extended the results of Paper A to rotational AIV, showing that the spatial resolution of rotating auditory scenes can be greatly reduced when they are combined with visual input. This thesis shows that sound plays an important role in illusory self-motion perception and should be used carefully in multimodal motion simulators. The presented findings suggest that a minimal set of acoustic cues can be sufficient to elicit a self-motion sensation, especially if other modalities are involved. However, the perceptual consistency of the created auditory and multimodal scenes must be ensured in the design of the next generation of motion simulators.
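To make the binaural-synthesis method concrete, the minimal sketch below renders a rotating auditory scene by convolving a mono noise source with azimuth-dependent head-related impulse responses. It is illustrative only and is not the rendering pipeline used in the thesis: the toy_hrir() helper, the sample rate, block size, and rotation speed are all assumptions, and the toy filter merely approximates interaural time and level differences rather than using measured (let alone individualized) HRTF data.

```python
# Minimal sketch: binaural rendering of a rotating auditory scene (illustrative only).
# A real renderer would replace toy_hrir() with measured, ideally individualized,
# HRTF filters and would interpolate/crossfade between them to avoid clicks.
import numpy as np
from scipy.signal import fftconvolve

FS = 44100           # sample rate in Hz (assumed)
BLOCK = 2048         # samples per azimuth update (assumed)
ROTATION_HZ = 0.5    # virtual scene rotation speed around the listener (assumed)

def toy_hrir(azimuth_deg, length=128):
    """Crude stand-in for an HRIR pair: one delayed, scaled impulse per ear,
    approximating the interaural time and level differences for this azimuth."""
    az = np.radians(azimuth_deg)
    itd_samples = int(round(0.0007 * np.sin(az) * FS))    # up to ~0.7 ms interaural delay
    left_gain = 10 ** (-3.0 * np.sin(az) / 20)            # ~6 dB max interaural level difference
    right_gain = 10 ** (3.0 * np.sin(az) / 20)
    left, right = np.zeros(length), np.zeros(length)
    left[max(itd_samples, 0)] = left_gain                 # left ear lags for right-side sources
    right[max(-itd_samples, 0)] = right_gain
    return left, right

def render_rotating_scene(duration_s=5.0):
    """Return an (n_samples, 2) array: block-wise HRIR convolution of a
    broadband noise source whose azimuth advances continuously over time."""
    n_blocks = int(duration_s * FS / BLOCK)
    rng = np.random.default_rng(0)
    out_left, out_right = [], []
    for b in range(n_blocks):
        t = b * BLOCK / FS
        azimuth = (360.0 * ROTATION_HZ * t) % 360.0       # scene rotates around the listener
        mono = rng.standard_normal(BLOCK) * 0.1           # mono broadband source signal
        h_l, h_r = toy_hrir(azimuth)
        out_left.append(fftconvolve(mono, h_l)[:BLOCK])   # convolution tails truncated for brevity
        out_right.append(fftconvolve(mono, h_r)[:BLOCK])
    return np.stack([np.concatenate(out_left), np.concatenate(out_right)], axis=1)

if __name__ == "__main__":
    stereo = render_rotating_scene()
    print("rendered", stereo.shape[0], "samples per channel")
```

Switching the filter once per block and discarding convolution tails keeps the sketch short but would introduce audible discontinuities; overlap-add processing with filter crossfading, and measured HRTF sets, would be needed for a perceptually plausible rotating scene.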


Author

Alexander Väljamäe

Chalmers, Signals and Systems, Communication, Antennas and Optical Networks

Subject categories

Other Electrical Engineering and Electronics

R - Department of Signals and Systems, Chalmers University of Technology: R037/2005
