Generalized Temporal Focus + Context Framework for Improved Medical Data Exploration (Open Access)
Radiologists and surgeons have different visualization demands due to the diversity of work performed by each group of specialists. Many techniques have been developed to visualize 2D slices and 3D volumes for dataset exploration, surgical planning, and rehearsal of intervention procedures; however, they do not address these individual visualization needs. We introduce a generalized temporal focus + context framework that provides a novel way of combining the visualization needs of specialists working individually or collaboratively. The framework can thus be used to describe and compare existing visualization techniques and to identify novel combinations of rendering techniques that can be applied to multimodal medical datasets, creating visualizations beneficial to both radiologists and surgeons in their everyday work.

The generalized focus + context framework defines various regions, each of which can render a different combination of visualization modalities. Examining how data is rendered inside each region yields improved 2D and 3D visualizations that benefit both radiologists and surgeons. We use the focus region defined by a magic lens to render 2D slices and a 3D sub-volume. The interactive rendering of sub-volumes augmented by slices provides a way to explore the inner structure of target objects. A user study showed that users obtained a more precise structural understanding of complex structures when allowed to explore sub-sections of the whole focus region. We apply a new hierarchy of context areas, defined by the temporal positions of the focus region, to improve explicit spatial perception. Previous lens positions define a new context region, named focus-driven context, that can be used to render past focus or as a painting brush to create a region in which a separate co-registered dataset is displayed.
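The region hierarchy described above can be sketched per sample point. This is a minimal CPU illustration, not the dissertation's GPU implementation: the spherical lens geometry and the function and region names are assumptions made for the example.

```python
import math

def classify_sample(p, lens_center, lens_radius, past_centers):
    """Classify a sample point against a spherical magic lens.

    Returns 'focus' inside the current lens, 'focus-driven context'
    inside any previous lens position, and 'context' otherwise.
    (Hypothetical geometry; the actual framework renders on the GPU.)
    """
    def inside(center):
        return math.dist(p, center) <= lens_radius

    if inside(lens_center):
        return "focus"
    if any(inside(c) for c in past_centers):
        return "focus-driven context"
    return "context"
```

Each returned label would then select a rendering modality for that sample, e.g. a 3D sub-volume in the focus region and a co-registered dataset in the focus-driven context.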
A user study showed that, when using focus-driven context, users are faster and more accurate in inferring the spatial relationship between multiple objects than when using a small lens. The generalized framework also provides a new visualization that addresses surgeons' need to explore possible trajectories to target structures. We create a visualization with an arbitrarily shaped focus region defined by the surgeon's lens movement. A user study asking users to virtually sculpt a volume until they reached a target showed that this visualization is significantly faster and more accurate than an opaque sculpting tool. Rendering no context at all, on the other hand, yields an endoscopic visualization with improved visibility of occluded areas. All of these techniques can be used by multiple users within the same interactive session; the framework therefore satisfies the visualization needs of different types of medical specialists during team meetings.

A fast GPU-based raycasting algorithm supports the interactivity of the visualizations. A combination of depth peeling and a novel way of computing parity is used when an arbitrarily shaped focus region is rendered. All raycasting techniques are combined such that the system maintains interactive speeds even when multiple rendering modalities are used simultaneously.

In conclusion, the described work addresses the lack of visualizations designed for the work of both surgeons and radiologists during dataset exploration and surgical planning. The unifying focus + context framework is intended to show how such an approach can be efficiently incorporated into the clinical workflow.
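The parity idea behind the arbitrarily shaped focus region can be illustrated along a single ray. This is a sketch under the standard even-odd rule and is only an assumption about how the test could look; the dissertation's actual method combines depth peeling with its own parity computation on the GPU.

```python
import bisect

def inside_focus(sample_depth, boundary_depths):
    """Even-odd (parity) test along one viewing ray.

    boundary_depths: sorted depths at which the ray crosses the focus
    region's boundary surface (the kind of per-layer depths depth
    peeling produces). A sample lies inside the region iff an odd
    number of crossings occur in front of it.
    """
    crossings_before = bisect.bisect_left(boundary_depths, sample_depth)
    return crossings_before % 2 == 1
```

For a ray entering the region at depth 1.0 and exiting at 3.0, samples between those depths have one crossing in front of them (odd parity) and are classified as focus; samples before 1.0 or beyond 3.0 have even parity and fall back to context rendering.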