

FIGURE 18.9 Merged CT and MRI image data of a patient with a sphenoid tumor.

FIGURE 18.10 Binary masks created from segmentation of an original data slice: (A) the patient's skin; (B) the tumor.


FIGURE 18.11 Volumetric rendering of the composite (merged) data with the tumor highlighted and tinting performed directly in the volume.

Volume representations are derived directly from the imaging data, whereas isosurfaces are derived through a series of image-processing steps. We have chosen to maintain the original integrity of the volume data and use volume representations, for several reasons. First, volume rendering was originally computationally expensive, and, for expediency, surfaces were generated to create representations; many computers had hardware acceleration for surface rendering. More recently, however, 2D and 3D texture memory techniques have evolved and specialized hardware has become more readily available, making volume rendering cost-effective [3,4,6,16–18]. Second, surface rendering allows only a passive exploration of the surface information. While sectioning techniques have been introduced, the overhead of texturing the original information onto the newly created cut is computationally expensive and does not lend itself to interactive rates. Volume representation maintains the original intensity information, and techniques for sectioning and removal are straightforward. Finally, an accurate surface of a complex region such as the skull base can easily require an overwhelming number of polygons [19]; by using volume representations, we avoid this expensive step of processing the original data to extract surfaces. Volume rendering thus expedites the use of patient-specific data sets and has been used in such high-demand environments as surgical simulations being developed for resident training [20].


18.4 INTERACTIVITY/INTERFACE

Preprocessing and data collection provide a means to an end: the ability of the users to interact with the data. We have extended this beyond the desktop to the Internet to allow multiple, asynchronous users to interact with rendered image data. In this section we will discuss our approach to the user interface as well as our current work in exporting the visualization capability to the Internet. As discussed previously, our goal is to provide real-time interaction with the image data in an intuitive environment.

Hendin and others have presented methods for volume rendering over the Internet [21]; however, these approaches suffer restrictions that include interactive latency, limited viewing orientations, and reduced quality of data representation. Peifer and others have presented a patient-centric, networked relational database for patient-controlled data collection [22], but this work focuses on physiological monitoring, and its data sizes and rates are relatively low. Silverstein et al. presented a Web-based system for segmentation of computed tomography images and display via VRML [23]. Those authors emphasize the need for access to massive high-speed computation to deliver the resulting images on a desktop computer.

Previous authors have constructed a props-based interface for interactive visualization in neurosurgery [24]. This work presented a 3D isosurface of the brain, with the resulting sections from the original images placed in a separate window. We previously explored this approach but found that our users preferred the section integrated into the main window [1]. In addition, our users were concerned with issues of scale, i.e., how the prop related to the data, and with the physical limitations of the props. We therefore provided an integrated view that lets users focus their attention on the data, while the interface gives them full control over orientation and selection without the use of props.

We have developed a prototype system through unique collaborations among networking and computing specialists, advanced applications developers, and end users (clinicians, researchers, and educators). The result is an initial, realistic implementation that scales to a wide range of applications, including healthcare, scientific research, and distance education.

The system provides 3D reconstructions of the volumetric data with the best interactivity possible, given the computing and networking environment. The key issues for delivering representations from high-resolution volumetric data sets are integration/functionality, representational quality, and performance.

Previously we reported on exploiting 3D texture hardware to provide interactive performance over the Internet [6]. Volume rendering using 3D texture mapping hardware treats the volume as a 3D texture [17,18]. A set of image-aligned parallel polygons intersects the volume, and the graphics hardware processes (textures, rasterizes, and composites) the polygons into the final image.
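To make the technique concrete, the following is a minimal, self-contained sketch of 3D-texture-based slice compositing in the spirit of [17,18]. It is our illustration, not the authors' VRP-based renderer: it assumes legacy OpenGL (1.2 or later, so that glTexImage3D is available) with GLUT, and substitutes a synthetic spherical density for real patient data. The volume is rotated through the texture matrix so that the slice polygons themselves stay aligned with the image plane, and the slices are composited back to front with alpha blending.

```c
/* Sketch: 3D-texture-based volume rendering with image-aligned slices.
   Illustrative only; assumes OpenGL 1.2+ with GLUT. */
#define GL_GLEXT_PROTOTYPES   /* expose glTexImage3D on some platforms */
#include <GL/glut.h>
#include <math.h>

#define N 64                          /* synthetic volume resolution */
static GLubyte vol[N][N][N][2];       /* luminance + alpha per voxel */
static float angle = 0.0f;

/* Fill the volume with a soft sphere; stands in for CT/MRI intensities. */
static void buildVolume(void)
{
    int x, y, z;
    for (z = 0; z < N; z++)
        for (y = 0; y < N; y++)
            for (x = 0; x < N; x++) {
                float dx = (x - N / 2) / (N / 2.0f);
                float dy = (y - N / 2) / (N / 2.0f);
                float dz = (z - N / 2) / (N / 2.0f);
                float r = sqrtf(dx * dx + dy * dy + dz * dz);
                GLubyte v = r < 0.8f ? (GLubyte)(255.0f * (0.8f - r)) : 0;
                vol[z][y][x][0] = v;      /* intensity */
                vol[z][y][x][1] = v / 4;  /* crude opacity transfer */
            }
}

static void display(void)
{
    const int slices = 128;
    int i;
    glClear(GL_COLOR_BUFFER_BIT);

    /* Rotate texture coordinates instead of geometry, keeping the slice
       polygons aligned with the image plane. */
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(0.5f, 0.5f, 0.5f);
    glRotatef(angle, 0.3f, 1.0f, 0.0f);
    glTranslatef(-0.5f, -0.5f, -0.5f);
    glMatrixMode(GL_MODELVIEW);

    glEnable(GL_TEXTURE_3D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    /* Composite image-aligned slices back to front ("over" operator). */
    for (i = slices - 1; i >= 0; i--) {
        float t = i / (slices - 1.0f);   /* 0..1 through the volume */
        float z = t * 2.0f - 1.0f;       /* far slices drawn first */
        glBegin(GL_QUADS);
        glTexCoord3f(0, 0, t); glVertex3f(-1, -1, z);
        glTexCoord3f(1, 0, t); glVertex3f( 1, -1, z);
        glTexCoord3f(1, 1, t); glVertex3f( 1,  1, z);
        glTexCoord3f(0, 1, t); glVertex3f(-1,  1, z);
        glEnd();
    }
    glutSwapBuffers();
}

static void idle(void) { angle += 0.2f; glutPostRedisplay(); }

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutCreateWindow("3D texture volume rendering sketch");
    buildVolume();
    glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE_ALPHA, N, N, N, 0,
                 GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, vol);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    glutMainLoop();
    return 0;
}
```

The slice count trades quality for speed; the hardware's trilinear filtering in the 3D texture unit does the resampling, which is what makes this approach interactive.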


New techniques render irregular volume grids by computing image-aligned slices: polygon meshes constructed by intersecting the irregular grid cells [3,4]. The graphics hardware composites these meshes into the final image. We have continued to develop applications that exploit 3D texture hardware (see below), including direct applications for resident training [20].
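The geometric core of that slice construction can be illustrated in a few lines. The sketch below is ours, not the published algorithm of [3,4]: it intersects a single tetrahedral cell with a slicing plane by interpolating along each edge whose endpoints straddle the plane; a full implementation repeats this over every cell the plane crosses and lets the graphics hardware texture and composite the resulting meshes.

```c
/* Sketch: intersect one tetrahedron with a slicing plane and emit the
   resulting polygon (3 or 4 vertices). Names and structure are ours. */
typedef struct { double x, y, z; } Vec3;

/* Signed distance of p from the plane n.p = d (n need not be unit). */
static double side(Vec3 p, Vec3 n, double d)
{
    return n.x * p.x + n.y * p.y + n.z * p.z - d;
}

/* Writes up to 4 polygon vertices into out[] and returns their count. */
int sliceTet(const Vec3 v[4], Vec3 n, double d, Vec3 out[4])
{
    static const int edges[6][2] =
        { {0,1}, {0,2}, {0,3}, {1,2}, {1,3}, {2,3} };
    int count = 0, e;
    for (e = 0; e < 6; e++) {
        Vec3 a = v[edges[e][0]], b = v[edges[e][1]];
        double sa = side(a, n, d), sb = side(b, n, d);
        if ((sa < 0 && sb >= 0) || (sa >= 0 && sb < 0)) {
            double t = sa / (sa - sb);     /* where the edge crosses */
            out[count].x = a.x + t * (b.x - a.x);
            out[count].y = a.y + t * (b.y - a.y);
            out[count].z = a.z + t * (b.z - a.z);
            if (++count == 4) break;       /* a tet yields at most 4 points */
        }
    }
    return count;  /* caller orders the points and textures the polygon */
}
```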

The current system configuration contains two parts: the rendering server and the device client. The rendering server is an interactive volume renderer that runs on high-end Silicon Graphics (Mountain View, CA) computers such as the Onyx2 or Octane. The rendering software uses OpenGL and texture memory to achieve interactive volume-rendering rates. Our software was built upon the 3D texture-based volume rendering software called VRP (Volume Rendering Primer), available from Silicon Graphics, Inc. This code also renders surface geometry (called embedded geometry), defined in an Inventor scenegraph file format, simultaneously with the volume data.

Device clients, each connected to an external input device, control the server and serve as an intuitive interface to it. A device client communicates with the rendering server via the TCP/IP networking protocol, so a user can control a rendering server from anywhere on the Internet. One example of a device client is the Spaceball, a manual device that allows the user to orient the data with six degrees of freedom (DOF). A second interface is the Microscribe, a five-DOF device (Figure 18.12). If these elegant

FIGURE 18.12 User during interactive session.


FIGURE 18.13 (A) Orientation of the volume; (B) arbitrary slicing of the volume; (C) arbitrary removal of data ("cutting") from the volume. All operations are performed in real time.

interfaces are not available, graphical user interface (GUI)–based device simulators provide the user with basic functionality. The system currently supports the following operations:

Orientation. The user can arbitrarily rotate and position a 3D reconstruction of the data (Figure 18.13A).

Arbitrary Slicing. The user can arbitrarily position a plane to reveal underlying structures (Figure 18.13B); a brief implementation sketch follows this list.

Removal. The user can arbitrarily remove single volume elements for viewing underlying structures (Figure 18.13C). (This function can be changed to provide the addition of volume elements for other applications.)

Tinting. The user can mark specific regions, thus drawing as if on a three-dimensional whiteboard (Figure 18.11).

Morphometrics. The system provides measurement of the underlying structures in 3D.
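For readers curious how the arbitrary slicing operation might be realized in an OpenGL renderer of this kind, one common mechanism is a hardware user clip plane applied while the slice polygons are drawn. The helper below is a hypothetical sketch of ours, not the system's actual code.

```c
/* Sketch: an arbitrary cutting plane via an OpenGL user clip plane.
   The plane is nx*x + ny*y + nz*z + d = 0; fragments where the
   expression is negative are clipped away, exposing the volume
   interior. Hypothetical helper, not the system's implementation. */
#include <GL/gl.h>

void setCutPlane(double nx, double ny, double nz, double d)
{
    GLdouble plane[4] = { nx, ny, nz, d };
    glClipPlane(GL_CLIP_PLANE0, plane); /* bound in eye coordinates   */
    glEnable(GL_CLIP_PLANE0);           /* applies to later drawing   */
}

void clearCutPlane(void)
{
    glDisable(GL_CLIP_PLANE0);          /* restore the uncut view     */
}
```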

In addition to allowing one user to control the orientation of the data set and the clipping plane of a rendering server, the system allows other users to take control of a server, so that several participants can interact within the shared volumetric environment. We intend to expand the tinting function to provide hyperlinks to multimedia, including text, audio, image, and movie data, thus allowing access to more extensive patient information. These multimedia objects will also allow the system to support asynchronous collaborations.
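To make the client–server exchange concrete, the sketch below shows a minimal device client that opens a TCP connection and streams orientation updates to a rendering server. The wire format (a plain-text line of three rotation angles) and the port number are our illustrative assumptions; the actual protocol used by the system is not described here.

```c
/* Sketch: a device client streaming orientation updates to a rendering
   server over TCP/IP (POSIX sockets). The "rx ry rz\n" text format and
   port 5000 are illustrative assumptions, not the system's protocol. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(int argc, char **argv)
{
    struct sockaddr_in server;
    char msg[64];
    float rx = 0.0f, ry = 0.0f, rz = 0.0f;
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port = htons(5000);     /* assumed server port */
    server.sin_addr.s_addr = inet_addr(argc > 1 ? argv[1] : "127.0.0.1");

    if (connect(sock, (struct sockaddr *)&server, sizeof(server)) < 0) {
        perror("connect");
        return 1;
    }

    /* In a real client these values would come from the input device
       (e.g., a 6-DOF controller); here we simply sweep one axis. */
    for (;;) {
        ry += 1.0f;
        snprintf(msg, sizeof(msg), "%.2f %.2f %.2f\n", rx, ry, rz);
        if (send(sock, msg, strlen(msg), 0) < 0) break;
        usleep(33000);                 /* ~30 updates per second */
    }
    close(sock);
    return 0;
}
```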

18.5 FUTURE DIRECTIONS

The prototype system demonstrates a shared virtual environment that allows multiple users to interactively manipulate large volumetric data sets across long distances using intuitive interfaces (I2-98, OSC-98, CUMREC98). At Internet II in April 1998, network latencies observed between Washington, DC, and the Interface Lab at OSC in Columbus, Ohio, were approximately 1 second; albeit noticeable, these latencies were tolerable. We have run additional collaborative studies with the Cleveland Clinic Foundation and, most recently, a telemedicine demonstration with StrongAngel among OSC, East Carolina University, and Hawaii.

We are implementing the capability for users to ask the system to identify a structure, such as a tumor, while surrounding structures become transparent (Figure 18.14). Users can then orient themselves to the structure and return to the normal view. This utility will assist residents in understanding the relationship of a tumor to the surrounding anatomy and will facilitate selection of the optimal surgical approach, resection, and reconstruction.

We have presented our designs for interactive tumor visualization. As image acquisitions continue to improve in spatial resolution, these systems will integrate multimodal acquisitions into succinct, comprehensible formats for direct use by the health care provider. Improvements in hardware-assisted rendering techniques will make cost-efficient desktop systems feasible. Segmentation will continue to pose technical challenges and remains the key barrier to the common use of these systems with more patient-specific data. However, many of the segmentation issues will be obviated by improved spatial acquisitions, contrast enhancement, multiprotocol acquisitions, and ever-increasing computing and network power.

FIGURE 18.14 (A) Volume-segmented sphenoid tumor and surrounding vasculature (internal carotid and basilar arteries). (B) Composite image of volume-rendered merged data with transparency of the skin and brain, showing the relationship of the tumor to the skull base and vasculature, performed in real time.



As hardware and software tools progress, these systems will incorporate more interactions that simulate surgical techniques, thus extending their utility from resident training to presurgical planning on patient-specific data.

ACKNOWLEDGMENTS

This research has been funded under grants from the following sources: The Ohio State University Office of Research; the Department of Otolaryngology, The Ohio State University; the Ohio Supercomputer Center; the National Institutes of Health/National Library of Medicine; and the National Institute for Deafness and Communicative Disorders.

The authors gratefully acknowledge the assistance of Dennis Sessanna and Jason Bryan, whose technical expertise has made this work possible, and of Mrs. Pamela Walters and Ms. Barbara Woodall for help with preparation of the manuscript.

REFERENCES

1. Bier-Laning C, Wiet GJ, Stredney D. Evaluation of a high performance computing technique for cranial base tumor visualization. In: Fourth International Conference on Head and Neck Cancer, Toronto, 1996.

2. Wiet GJ, et al. Cranial base tumor visualization through high performance computing. In: Weghorst SJ, Sieburg H, Morgan K, eds. Medicine Meets Virtual Reality. Amsterdam: IOS Press, 1996:43–59.

3. Yagel R, et al. Hardware assisted volume rendering of unstructured grids. In: Proceedings of the 1996 Symposium on Volume Visualization, San Francisco, 1996.

4. Yagel R, et al. Cranial base tumor visualization through multimodal imaging: integration and interactive display. In: Fourth Scientific Meeting of the International Society for Magnetic Resonance in Medicine, New York, 1996.

5. Wiet GJ, et al. Using advanced simulation technology for cranial base tumor evaluation. In: Kuppersmith R, ed. The Otolaryngologic Clinics of North America. Philadelphia: WB Saunders Company, 1998:369–381.

6. Stredney D, et al. Interactive volume visualization for synchronous and asynchronous remote collaboration. In: Westwood J, et al., eds. Medicine Meets Virtual Reality. Amsterdam: IOS Press, 1999:344–350.

7. Stredney D, et al. Interactive medical data on demand: a high performance image-based approach across heterogeneous environments. In: Westwood J, et al., eds. Medicine Meets Virtual Reality. Amsterdam: IOS Press, 2000:327–333.

8. Kurucay S, et al. A segment interleaved motion compensated acquisition in the steady state (SIMCAST) technique for high resolution imaging of the inner ear. JMRI 1997;(Nov/Dec):1060–1068.

9. Goldsmith SJ. Receptor imaging: competitive or complementary to antibody imaging? In: Seminars in Nuclear Medicine. Philadelphia: WB Saunders Company, 1997:85–93.

10. Hoh CK, et al. PET in oncology: will it replace the other modalities? In: Seminars in Nuclear Medicine. Philadelphia: WB Saunders Company, 1997:94–106.

11. Erasmus JJ, Patz EF. Positron emission tomography imaging in the thorax. Clin Chest Med 1999; 20:715–724.

12. Vlaadinderbroek MT, DeBoer JA. Magnetic Resonance Imaging. Berlin/Heidelberg: Springer, 1996:167–214.

13. Schmalbrock P, et al. Measurement of internal auditory canal tumor volumes with contrast enhanced T1-weighted and steady state T2-weighted gradient echo imaging. AJNR 1999; 20:1207–1213.

14. Davis R, et al. Three-dimensional high-resolution volume rendering of computed tomography data: applications to otolaryngology–head and neck surgery. Laryngoscope 1991; 101:573–582.

15. Shareef N, Wang D, Yagel R. Segmentation of medical data using locally excitatory globally inhibitory oscillator networks. In: The World Congress on Neural Networks, San Diego, CA, 1996.

16. Westover L. Splatting: A Parallel, Feed-Forward Volume Rendering Algorithm. Ph.D. thesis. Chapel Hill, NC: University of North Carolina, 1991.

17. Cabral B, Cam N, Foran J. Accelerated volume rendering and tomographic reconstruction using texture mapping hardware. In: Symposium on Volume Visualization, Washington, DC, 1994.

18. Van Gelder A, Kim K. Direct volume rendering via 3D texture mapping hardware. In: Proceedings of the 1996 Volume Rendering Symposium, 1996.

19. Stredney D, et al. A comparative analysis of integrating visual representations with haptic displays. In: Westwood J, et al., eds. Medicine Meets Virtual Reality. Amsterdam: IOS Press, 1998:20–26.

20. Wiet GJ, et al. Virtual temporal bone dissection simulation. In: Westwood J, et al., eds. Medicine Meets Virtual Reality. Amsterdam: IOS Press, 2000:378–384.

21. Hendin O, John NW, Shocet O. Medical volume rendering over the WWW using VRML and Java. In: Westwood J, et al., eds. Medicine Meets Virtual Reality. Amsterdam: IOS Press, 1998:34–40.

22. Peifer J, Sudduth B. A patient-centric approach to telemedicine database development. In: Westwood J, et al., eds. Medicine Meets Virtual Reality. Amsterdam: IOS Press, 1998:67–73.

23. Silverstein J, et al. Web-based segmentation and display of three-dimensional radiologic image data. In: Westwood J, et al., eds. Medicine Meets Virtual Reality. Amsterdam: IOS Press, 1998:53–59.

24. Hinckley K, et al. The props-based interface for neurosurgical visualization. In: Morgan KS, et al., eds. Medicine Meets Virtual Reality. Amsterdam: IOS Press, 1997:552–562.

19

Head and Neck Virtual Endoscopy

William B. Armstrong, M.D., and Thong H. Nguyen, M.D.

University of California, Irvine, Orange, and Veterans Affairs Medical Center, Long Beach, California

19.1 INTRODUCTION

Virtual endoscopy (VE) can be broadly defined as the reconstruction and rendering of two-dimensional (2D) data to create a realistic depiction of the inner walls of a hollow organ or tubular structure. The term is not technically accurate, since VE is neither a true "endoscopic" procedure nor a "virtual" experience, which is more accurately defined as immersing the observer within a three-dimensional (3D) real-time interactive environment. In its current form, VE allows the observer to review a predetermined flight path through the imaged organ system [1,2]. However, the term "virtual endoscopy" has become so commonly used that it is now the de facto term for radiographic techniques that provide an undistorted intraluminal view emulating the view seen by an endoscopist.

Development of VE is the result of the convergence of advances in surgical endoscopy, radiology, and computer science and technology. Each of these professions has contributed to the nascent field of VE. The early nineteenth century witnessed the development of laryngoscopy, starting with Bozzini's attempts to use extracorporeal light to view luminal cavities of the body in 1809 [3]. Subsequently, the mirror laryngoscope, head mirror, and external sources of illumination were developed. From these primitive devices, developed in the early to mid-nineteenth century, major advances in lighting, optics, flexible fiberoptics, and documentation techniques have made endoscopic procedures indispensable to the modern practice of otolaryngology. Endoscopic applications have expanded from their origins in the upper respiratory and digestive tract to the nasal cavity and sinuses and, more recently, to imaging of the middle ear [4–7].

In parallel with advances in direct and indirect visual endoscopy, radiology has undergone a no less impressive revolution. From the time of the first radiographs, attempts have been made to develop cross-sectional imaging techniques. Plain x-ray tomography was one of the early techniques to provide some anatomical and pathological detail, but its images were generally blurry, and its low resolution limited its usefulness. The development of computed tomography (CT) by Hounsfield and Cormack (co-recipients of the 1979 Nobel Prize in Medicine) revolutionized the ability to visualize fine anatomical details previously inaccessible without surgical intervention. Although the mathematical foundations for image reconstruction that underlie planar reconstruction were published by Radon in 1917 [8], it was not until the 1970s that the medical application of his work was realized in the development of the CT scanner. Since that time, the quality of CT scans has steadily improved as scanning speed and resolution have increased, while the cost per scan has decreased. As a result, CT has become a standard and indispensable imaging tool.

Magnetic resonance imaging (MRI) has undergone a similar revolution, using magnetic and radiofrequency energy to discriminate tissues based on differences in hydrogen content and recovery from induced magnetic field perturbations. Both CT and MRI studies produce a large number of cross-sectional images. A significant drawback of these studies is that the reader must mentally form a 3D reconstruction from the individual cross-sectional slices. This skill is not possessed equally by all readers and is especially difficult to apply to complex anatomical structures or pathology.

Techniques for 3D reconstruction were adapted for medical use from technology developed for nonmedical applications, including flight simulation, terrain guidance, computer science, and defense applications, in an attempt to improve visualization of complex anatomical structures and pathological abnormalities [9,10]. These 3D-reconstruction algorithms have been used in the head and neck region to provide renderings of complex craniofacial defects [11–15], visualization of the larynx [16–18], data for the manufacture of prosthetic implants for craniofacial defects, and guidance for surgical planning [19,20].

David Vining and others have taken endoscopic imaging and radiology down a new path by manipulating the noninvasive radiographic information provided by axial CT data to simulate the endoluminal view provided by fiberoptic endoscopy, thereby producing novel radiographic "endoscopic" images [21,22]. VE is a noninvasive radiographic technique that allows visualization of intraluminal surfaces by 3D perspective renderings of air/tissue or fluid/tissue interfaces