Methods for Generating Volume Rendered Views from Registered Camera Images and Three-Dimensional Anatomical Data Sets
This dissertation presents methods for generating high-quality volume rendered views from registered camera images and three-dimensional anatomical data sets. Camera images provide high-resolution color and texture information about a patient's surface, while anatomical data sets supply structural information and reveal internal anatomy that cannot be seen directly. The objective of this research is to provide real-time visualization that links surface views of a patient's anatomy with the internal structures behind them, so that the correlation between the two image modalities can be easily understood. To this end, a hybrid method combining 3D surface mesh parameterization and volume rendering is introduced, which preserves image detail independently of the volume's resolution. The visualization is presented from the viewpoint of a virtual camera, enabling the user to view the merged data from arbitrary viewpoints and to perceive the depth of structures through motion parallax and the context of the surrounding anatomy. This approach improves on previous methods by retaining volumetric information, so interior anatomy can still be visualized, and it employs techniques tailored to modern graphics hardware, making it fast and robust for a variety of medical applications. The proposed methods can be integrated into most medical systems that register camera images with 3D patient data and can be flexibly augmented with additional visualization techniques. Results show that multiple camera views can be successfully mapped onto volumes to produce high-quality volume renderings in real time.
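To illustrate the kind of hybrid rendering the abstract describes, the sketch below composites a single ray through a volume and, at the first sample crossing an assumed isosurface, substitutes a color fetched from a registered "camera image" via a UV lookup. This is a minimal, hypothetical stand-in for the dissertation's surface-mesh parameterization and GPU ray casting; the threshold, opacity mapping, and data layout are illustrative assumptions, not the author's implementation.

```python
# Hypothetical isosurface threshold separating the patient's surface from air.
ISO = 0.5

def texture_lookup(image, u, v):
    """Nearest-neighbour fetch from a 2D RGB 'camera image' (rows of RGB tuples)."""
    h, w = len(image), len(image[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return image[y][x]

def composite_ray(samples, image):
    """Front-to-back compositing along one ray.

    samples: list of (density, (r, g, b), (u, v) or None), ordered front to back.
    The first sample at or above ISO with a valid UV takes its color from the
    camera image, painting surface detail onto the volume rendering.
    Returns (composited RGB color, accumulated alpha).
    """
    color = [0.0, 0.0, 0.0]
    alpha = 0.0
    surface_hit = False
    for density, rgb, uv in samples:
        if density <= 0.0:
            continue  # skip empty space
        if not surface_hit and density >= ISO and uv is not None:
            rgb = texture_lookup(image, *uv)  # camera detail replaces volume color
            surface_hit = True
        a = min(density, 1.0)            # toy opacity transfer function
        weight = (1.0 - alpha) * a       # front-to-back alpha compositing
        for i in range(3):
            color[i] += weight * rgb[i]
        alpha += weight
        if alpha >= 0.99:                # early ray termination
            break
    return tuple(color), alpha
```

In a real system this loop would run per pixel in a GPU fragment or compute shader, with the camera image bound as a texture and the UV coordinates produced by the mesh parameterization; interior anatomy remains visible because samples behind the surface are still composited rather than discarded.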