Paper Search Console


Journal Title

Title of Journal: The Visual Computer

Abbreviation: Vis Comput

Publisher

Springer-Verlag


DOI

10.1016/0923-2508(93)90013-R


ISSN

1432-2315


Selective rendering for efficient ray traced stereo

Authors: Cheng-Hung Lo, Chih-Hsing Chu, Kurt Debattista, Alan Chalmers
Publish Date: 2009/07/23
Volume: 26, Issue: 2, Pages: 97-107

Abstract

Depth-related visual effects are a key feature of many virtual environments. In stereo-based systems, the depth effect can be produced by delivering frames of disparate image pairs, while in monocular environments the viewer has to extract this depth information from a single image by examining details such as perspective and shadows. This paper investigates, via a number of psychophysical experiments, whether we can reduce computational effort and still achieve perceptually high-quality rendering for stereo imagery. We examined selectively rendering the image pairs by exploiting the fusing capability and depth perception underlying human stereo vision. In ray-tracing-based global illumination systems, a higher image resolution introduces more computation to the rendering process, since many more rays need to be traced. We first investigated whether we could utilise the human binocular fusing ability and significantly reduce the resolution of one of the image pairs, and yet retain a high perceptual quality under stereo viewing conditions. Secondly, we evaluated subjects’ performance on a specific visual task that required accurate depth perception. We found that subjects required far fewer rendered depth cues in the stereo viewing environment to perform the task well. Avoiding rendering these detailed cues saved significant computational time. In fact, it was possible to achieve better task performance in the stereo viewing condition at a combined rendering time for the image pairs less than that required for the single monocular image. The outcome of this study suggests that we can produce more efficient stereo images for depth-related visual tasks by selective rendering and by exploiting inherent features of human stereo vision.
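The abstract's core efficiency argument is that primary-ray cost scales with pixel count, so rendering one eye of the stereo pair at reduced resolution cuts the ray budget substantially. A minimal back-of-the-envelope sketch of that saving (the resolutions and sample count below are hypothetical illustrations, not the paper's actual experimental settings):

```python
def primary_rays(width, height, spp=1):
    """Primary rays needed to ray trace one image: one ray per pixel per sample."""
    return width * height * spp

# Symmetric stereo: both eyes rendered at full resolution.
full_pair = 2 * primary_rays(1024, 768)

# Asymmetric stereo: one eye at full resolution, the other halved per
# axis, i.e. a quarter of the pixels and hence a quarter of the rays.
asymmetric_pair = primary_rays(1024, 768) + primary_rays(512, 384)

savings = 1 - asymmetric_pair / full_pair
print(f"ray budget saved: {savings:.1%}")  # 37.5%
```

Because the reduced eye costs only a quarter of the full one, the pair as a whole needs 62.5% of the symmetric budget; this is the quantitative headroom the paper's psychophysical experiments probe, asking whether binocular fusion hides the quality loss.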




Other Papers In This Journal:

  1. Interactive GPU-based adaptive cartoon-style rendering
  2. A bag-of-semantics model for image clustering
  3. Pose analysis using spectral geometry
  4. A SIMD-efficient 14 instruction shader program for high-throughput microtriangle rasterization
  5. Erratum to: Dynamic BFECC Characteristic Mapping method for fluid simulations
  6. Cyberworlds: architecture and modeling by an incrementally modular abstraction hierarchy
  7. Achieving developability of a polygonal surface by minimum deformation: a study of global and local optimization approaches
  8. Discriminative Hough context model for object detection
  9. Sampling-sensitive multiresolution hierarchy for irregular meshes
  10. Device-based decision-making for adaptation of three-dimensional content
  11. An improved image analogy method based on adaptive CUDA-accelerated neighborhood matching framework
  12. Enriching a motion database by analogous combination of partial human motions
  13. Real-time EEG-based emotion monitoring using stable features
  14. Optimization-based key frame extraction for motion capture animation
  15. Guiding flows for controlling crowds
  16. Geocube – GPU accelerated real-time rendering of transparency and translucency
  17. Illustrative uncertainty visualization of DTI fiber pathways
  18. An immersive multi-agent system for interactive applications
  19. A dynamic balanced flow for filtering point-sampled geometry
  20. Single-strips for fast interactive rendering
  21. Automatic blur-kernel-size estimation for motion deblurring
  22. Perceptually meaningful image editing
