Authors: Cheng-Hung Lo, Chih-Hsing Chu, Kurt Debattista, Alan Chalmers
Publish Date: 2009/07/23
Volume: 26, Issue: 2, Pages: 97-107
Abstract
Depth-related visual effects are a key feature of many virtual environments. In stereo-based systems, the depth effect can be produced by delivering frames of disparate image pairs, while in monocular environments the viewer has to extract this depth information from a single image by examining details such as perspective and shadows. This paper investigates, via a number of psychophysical experiments, whether we can reduce computational effort and still achieve perceptually high-quality rendering for stereo imagery. We examined selectively rendering the image pairs by exploiting the fusing capability and depth perception underlying human stereo vision. In ray-tracing-based global illumination systems, a higher image resolution introduces more computation to the rendering process, since many more rays need to be traced. We first investigated whether we could utilise the human binocular fusing ability to significantly reduce the resolution of one image of each pair and yet retain a high perceptual quality under stereo viewing conditions. Secondly, we evaluated subjects' performance on a specific visual task that required accurate depth perception. We found that subjects required far fewer rendered depth cues in the stereo viewing environment to perform the task well, and avoiding rendering these detailed cues saved significant computational time. In fact, it was possible to achieve better task performance in the stereo viewing condition at a combined rendering time for the image pairs less than that required for the single monocular image. The outcome of this study suggests that we can produce more efficient stereo images for depth-related visual tasks through selective rendering that exploits inherent features of human stereo vision.
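The resolution-based saving described in the abstract can be illustrated with simple arithmetic: in a ray tracer that casts at least one primary ray per pixel, cost scales with the pixel count, so rendering one eye of the stereo pair at a reduced resolution shrinks the pair's combined ray budget. A minimal sketch, using hypothetical resolutions and assuming one primary ray per pixel (the paper's actual resolutions and sample counts are not given here):

```python
def primary_rays(width, height, rays_per_pixel=1):
    """Primary-ray count for one image; ray-tracing cost scales with this."""
    return width * height * rays_per_pixel

# Full-resolution stereo pair: both eyes at 800x600 (hypothetical figures).
full_pair = 2 * primary_rays(800, 600)

# Selective pair: one eye at full resolution, the other halved per axis
# (a quarter of the pixels), relying on binocular fusion to mask the loss.
selective_pair = primary_rays(800, 600) + primary_rays(400, 300)

saving = 1 - selective_pair / full_pair
print(f"combined rays: {selective_pair} vs {full_pair} ({saving:.0%} saved)")
```

With these illustrative numbers the selective pair needs 600,000 primary rays instead of 960,000, a 37.5% reduction before any secondary-ray savings are counted.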