- •Biological and Medical Physics, Biomedical Engineering
- •Medical Image Processing
- •Preface
- •Contents
- •Contributors
- •1.1 Medical Image Processing
- •1.2 Techniques
- •1.3 Applications
- •1.4 The Contribution of This Book
- •References
- •2.1 Introduction
- •2.2 MATLAB and DIPimage
- •2.2.1 The Basics
- •2.2.2 Interactive Examination of an Image
- •2.2.3 Filtering and Measuring
- •2.2.4 Scripting
- •2.3 Cervical Cancer and the Pap Smear
- •2.4 An Interactive, Partial History of Automated Cervical Cytology
- •2.5 The Future of Automated Cytology
- •2.6 Conclusions
- •References
- •3.1 The Need for Seed-Driven Segmentation
- •3.1.1 Image Analysis and Computer Vision
- •3.1.2 Objects Are Semantically Consistent
- •3.1.3 A Separation of Powers
- •3.1.4 Desirable Properties of Seeded Segmentation Methods
- •3.2 A Review of Segmentation Techniques
- •3.2.1 Pixel Selection
- •3.2.2 Contour Tracking
- •3.2.3 Statistical Methods
- •3.2.4 Continuous Optimization Methods
- •3.2.4.1 Active Contours
- •3.2.4.2 Level Sets
- •3.2.4.3 Geodesic Active Contours
- •3.2.5 Graph-Based Methods
- •3.2.5.1 Graph Cuts
- •3.2.5.2 Random Walkers
- •3.2.5.3 Watershed
- •3.2.6 Generic Models for Segmentation
- •3.2.6.1 Continuous Models
- •3.2.6.2 Hierarchical Models
- •3.2.6.3 Combinations
- •3.3 A Unifying Framework for Discrete Seeded Segmentation
- •3.3.1 Discrete Optimization
- •3.3.2 A Unifying Framework
- •3.3.3 Power Watershed
- •3.4 Globally Optimum Continuous Segmentation Methods
- •3.4.1 Dealing with Noise and Artifacts
- •3.4.2 Globally Optimal Geodesic Active Contour
- •3.4.3 Maximal Continuous Flows and Total Variation
- •3.5 Comparison and Discussion
- •3.6 Conclusion and Future Work
- •References
- •4.1 Introduction
- •4.2 Deformable Models
- •4.2.1 Point-Based Snake
- •4.2.1.1 User Constraint Energy
- •4.2.1.2 Snake Optimization Method
- •4.2.2 Parametric Deformable Models
- •4.2.3 Geometric Deformable Models (Active Contours)
- •4.2.3.1 Curve Evolution
- •4.2.3.2 Level Set Concept
- •4.2.3.3 Geodesic Active Contour
- •4.2.3.4 Chan–Vese Deformable Model
- •4.3 Comparison of Deformable Models
- •4.4 Applications
- •4.4.1 Bone Surface Extraction from Ultrasound
- •4.4.2 Spinal Cord Segmentation
- •4.4.2.1 Spinal Cord Measurements
- •4.4.2.2 Segmentation Using Geodesic Active Contour
- •4.5 Conclusion
- •References
- •5.1 Introduction
- •5.2 Imaging Body Fat
- •5.3 Image Artifacts and Their Impact on Segmentation
- •5.3.1 Partial Volume Effect
- •5.3.2 Intensity Inhomogeneities
- •5.4 Overview of Segmentation Techniques Used to Isolate Fat
- •5.4.1 Thresholding
- •5.4.2 Selecting the Optimum Threshold
- •5.4.3 Gaussian Mixture Model
- •5.4.4 Region Growing
- •5.4.5 Adaptive Thresholding
- •5.4.6 Segmentation Using Overlapping Mosaics
- •5.6 Conclusions
- •References
- •6.1 Introduction
- •6.2 Clinical Context
- •6.3 Vessel Segmentation
- •6.3.1 Survey of Vessel Segmentation Methods
- •6.3.1.1 General Overview
- •6.3.1.2 Region-Growing Methods
- •6.3.1.3 Differential Analysis
- •6.3.1.4 Model-Based Filtering
- •6.3.1.5 Deformable Models
- •6.3.1.6 Statistical Approaches
- •6.3.1.7 Path Finding
- •6.3.1.8 Tracking Methods
- •6.3.1.9 Mathematical Morphology Methods
- •6.3.1.10 Hybrid Methods
- •6.4 Vessel Modeling
- •6.4.1 Motivation
- •6.4.1.1 Context
- •6.4.1.2 Usefulness
- •6.4.2 Deterministic Atlases
- •6.4.2.1 Pioneering Works
- •6.4.2.2 Graph-Based and Geometric Atlases
- •6.4.3 Statistical Atlases
- •6.4.3.1 Anatomical Variability Handling
- •6.4.3.2 Recent Works
- •References
- •7.1 Introduction
- •7.2 Linear Structure Detection Methods
- •7.3.1 CCM for Imaging Diabetic Peripheral Neuropathy
- •7.3.2 CCM Image Characteristics and Noise Artifacts
- •7.4.1 Foreground and Background Adaptive Models
- •7.4.2 Local Orientation and Parameter Estimation
- •7.4.3 Separation of Nerve Fiber and Background Responses
- •7.4.4 Postprocessing the Enhanced-Contrast Image
- •7.5 Quantitative Analysis and Evaluation of Linear Structure Detection Methods
- •7.5.1 Methodology of Evaluation
- •7.5.2 Database and Experiment Setup
- •7.5.3 Nerve Fiber Detection Comparison Results
- •7.5.4 Evaluation of Clinical Utility
- •7.6 Conclusion
- •References
- •8.1 Introduction
- •8.2 Methods
- •8.2.1 Linear Feature Detection by MDNMS
- •8.2.2 Check Intensities Within 1D Window
- •8.2.3 Finding Features Next to Each Other
- •8.2.4 Gap Linking for Linear Features
- •8.2.5 Quantifying Branching Structures
- •8.3 Linear Feature Detection on GPUs
- •8.3.1 Overview of GPUs and Execution Models
- •8.3.2 Linear Feature Detection Performance Analysis
- •8.3.3 Parallel MDNMS on GPUs
- •8.3.5 Results for GPU Linear Feature Detection
- •8.4.1 Architecture and Implementation
- •8.4.2 HCA-Vision Features
- •8.4.3 Linear Feature Detection and Analysis Results
- •8.5 Selected Applications
- •8.5.1 Neurite Tracing for Drug Discovery and Functional Genomics
- •8.5.2 Using Linear Features to Quantify Astrocyte Morphology
- •8.5.3 Separating Adjacent Bacteria Under Phase Contrast Microscopy
- •8.6 Perspectives and Conclusions
- •References
- •9.1 Introduction
- •9.2 Bone Imaging Modalities
- •9.2.1 X-Ray Projection Imaging
- •9.2.2 Computed Tomography
- •9.2.3 Magnetic Resonance Imaging
- •9.2.4 Ultrasound Imaging
- •9.3 Quantifying the Microarchitecture of Trabecular Bone
- •9.3.1 Bone Morphometric Quantities
- •9.3.2 Texture Analysis
- •9.3.3 Frequency-Domain Methods
- •9.3.4 Use of Fractal Dimension Estimators for Texture Analysis
- •9.3.4.1 Frequency-Domain Estimation of the Fractal Dimension
- •9.3.4.2 Lacunarity
- •9.3.4.3 Lacunarity Parameters
- •9.3.5 Computer Modeling of Biomechanical Properties
- •9.4 Trends in Imaging of Bone
- •References
- •10.1 Introduction
- •10.1.1 Adolescent Idiopathic Scoliosis
- •10.2 Imaging Modalities Used for Spinal Deformity Assessment
- •10.2.1 Current Clinical Practice: The Cobb Angle
- •10.2.2 An Alternative: The Ferguson Angle
- •10.3 Image Processing Methods
- •10.3.1 Previous Studies
- •10.3.2 Discrete and Continuum Functions for Spinal Curvature
- •10.3.3 Tortuosity
- •10.4 Assessment of Image Processing Methods
- •10.4.1 Patient Dataset and Image Processing
- •10.4.2 Results and Discussion
- •10.5 Summary
- •References
- •11.1 Introduction
- •11.2 Retinal Imaging
- •11.2.1 Features of a Retinal Image
- •11.2.2 The Reason for Automated Retinal Analysis
- •11.2.3 Acquisition of Retinal Images
- •11.3 Preprocessing of Retinal Images
- •11.4 Lesion Based Detection
- •11.4.1 Matched Filtering for Blood Vessel Segmentation
- •11.4.2 Morphological Operators in Retinal Imaging
- •11.5 Global Analysis of Retinal Vessel Patterns
- •11.6 Conclusion
- •References
- •12.1 Introduction
- •12.1.1 The Progression of Diabetic Retinopathy
- •12.2 Automated Detection of Diabetic Retinopathy
- •12.2.1 Automated Detection of Microaneurysms
- •12.3 Image Databases
- •12.4 Tortuosity
- •12.4.1 Tortuosity Metrics
- •12.5 Tracing Retinal Vessels
- •12.5.1 NeuronJ
- •12.5.2 Other Software Packages
- •12.6 Experimental Results and Discussion
- •12.7 Summary and Future Work
- •References
- •13.1 Introduction
- •13.2 Volumetric Image Visualization Methods
- •13.2.1 Multiplanar Reformation (2D slicing)
- •13.2.2 Surface-Based Rendering
- •13.2.3 Volumetric Rendering
- •13.3 Volume Rendering Principles
- •13.3.1 Optical Models
- •13.3.2 Color and Opacity Mapping
- •13.3.2.2 Transfer Function
- •13.3.3 Composition
- •13.3.4 Volume Illumination and Illustration
- •13.4 Software-Based Raycasting
- •13.4.1 Applications and Improvements
- •13.5 Splatting Algorithms
- •13.5.1 Performance Analysis
- •13.5.2 Applications and Improvements
- •13.6 Shell Rendering
- •13.6.1 Application and Improvements
- •13.7 Texture Mapping
- •13.7.1 Performance Analysis
- •13.7.2 Applications
- •13.7.3 Improvements
- •13.7.3.1 Shading Inclusion
- •13.7.3.2 Empty Space Skipping
- •13.8 Discussion and Outlook
- •References
- •14.1 Introduction
- •14.1.1 Magnetic Resonance Imaging
- •14.1.2 Compressed Sensing
- •14.1.3 The Role of Prior Knowledge
- •14.2 Sparsity in MRI Images
- •14.2.1 Characteristics of MR Images (Prior Knowledge)
- •14.2.2 Choice of Transform
- •14.2.3 Use of Data Ordering
- •14.3 Theory of Compressed Sensing
- •14.3.1 Data Acquisition
- •14.3.2 Signal Recovery
- •14.4 Progress in Sparse Sampling for MRI
- •14.4.1 Review of Results from the Literature
- •14.4.2 Results from Our Work
- •14.4.2.1 PECS
- •14.4.2.2 SENSECS
- •14.4.2.3 PECS Applied to CE-MRA
- •14.5 Prospects for Future Developments
- •References
- •15.1 Introduction
- •15.2 Acquisition of DT Images
- •15.2.1 Fundamentals of DTI
- •15.2.2 The Pulsed Field Gradient Spin Echo (PFGSE) Method
- •15.2.3 Diffusion Imaging Sequences
- •15.2.4 Example: Anisotropic Diffusion of Water in the Eye Lens
- •15.2.5 Data Acquisition
- •15.3 Digital Processing of DT Images
- •15.3.2 Diagonalization of the DT
- •15.3.3 Gradient Calibration Factors
- •15.3.4 Sorting Bias
- •15.3.5 Fractional Anisotropy
- •15.3.6 Other Anisotropy Metrics
- •15.4 Applications of DTI to Articular Cartilage
- •15.4.1 Bovine AC
- •15.4.2 Human AC
- •References
- •Index
250 | M.J. Cree and H.F. Jelinek
First, however, some notation and jargon is needed for talking about retinal images. We turn to that next, followed by a brief expansion on the motivation for automated computer analysis of retinal images and an introduction to the technologies used to capture them.
11.2 Retinal Imaging
11.2.1 Features of a Retinal Image
The retina is the light-sensitive layer at the back of the eye that can be visualised with specialist equipment by imaging through the pupil. The features of a typical view of the retina (see Fig. 11.1) include the optic disc, where the blood vessels and nerves enter the retina from the back of the eye. The blood vessels emerge from the optic disc and branch out to cover most of the retina. The macula is the central region of the retina, about which the blood vessels circle and which they partially penetrate (the view shown in Fig. 11.1 has the optic disc on the left and the macula towards the centre-right); it is the region most important for vision.
There are a number of diseases of the retina, of which diabetic retinopathy (pathology of the retina due to diabetes) has generated the most interest for automated computer detection. Diabetic retinopathy (DR) is a progressive disease that results in eyesight loss or even blindness if not treated. Pre-proliferative diabetic retinopathy (loosely, DR that does not immediately threaten eyesight) is characterized by a number of clinical symptoms, including microaneurysms (small round outgrowths from capillaries that appear as small round red dots less
Fig. 11.1 Color retinal image showing features of diabetic retinopathy including microaneurysms and exudate
11 Image Analysis of Retinal Images | 251
than 125 μm in diameter in color retinal images), dot-haemorrhages (which are often indistinguishable from microaneurysms), exudate (fatty lipid deposits that appear as yellow irregular patches with sharp edges, often organized in clusters) and haemorrhage (clotting of leaked blood into the retinal tissue). These symptoms are more serious if located near the centre of the macula.
Proliferative diabetic retinopathy (PDR) is the more advanced form that poses a significant risk of eyesight loss. Features that lead to a diagnosis of PDR include leakage of blood or extensive exudate near the macula, ischaemia, new vessel growth, and changes in vessel diameter such as narrowing of the arterioles and venous beading (venules alternately pinching and dilating so that they look like a string of sausages). Indeed, there is a reconfiguring of the blood vessel network, about which we have more to say later.
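Automated detectors commonly exploit the size criterion above by converting the 125 μm upper bound on microaneurysm diameter into a pixel-space threshold. A minimal sketch follows; the pixel scale used here is an assumed value, since real calibration depends on the camera's field of view, the image resolution and the geometry of the eye:

```python
def microaneurysm_radius_px(max_diameter_um=125.0, um_per_px=6.0):
    """Convert the clinical 125 micrometre upper bound on microaneurysm
    diameter into a pixel-space radius.  The 6 um/pixel scale is an
    assumed, illustrative value, not a calibrated one."""
    return (max_diameter_um / um_per_px) / 2.0

# With the assumed scale, candidate dots larger than ~10.4 px in radius
# would be rejected as too big to be microaneurysms.
print(microaneurysm_radius_px())
```

A detector would typically apply such a bound after candidate extraction, discarding connected components whose equivalent radius exceeds it.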
11.2.2 The Reason for Automated Retinal Analysis
Recent data suggest that there are 37 million blind people and 124 million with low vision, excluding those with uncorrected refractive errors. The main causes of global blindness are cataract, glaucoma, corneal scarring, age-related macular degeneration, and diabetic retinopathy. The global Vision 2020 initiative is having an impact in reducing avoidable blindness, particularly from ocular infections, but more needs to be done to address cataract, glaucoma, and diabetic retinopathy [14]. Screening is generally considered effective if a number of criteria are met, including identification of the disease at an early, preferably preclinical, stage, and that the disease in its early or late stage is amenable to treatment. Screening for diabetic retinopathy, for example, and monitoring its progression, especially in the early asymptomatic stage, has been shown to be effective in preventing vision loss and reducing cost.
Automated screening (for example, by computer analysis of retinal images) allows a greater number of people to be assessed and is more economical and accessible in rural and remote areas, where there is a lack of eye specialists. Automated assessment of eye disease, as an ophthalmological equivalent to haematological point-of-care testing such as blood glucose measurement, has been the subject of intense research by many groups over the past 40 years, with algorithms proposed for identification of the optic disc; of retinal lesions such as microaneurysms, haemorrhages, cotton wool spots and hard exudates; and of retinal blood vessel changes.
Retinal morphology and the associated blood vessel pattern can give an indication of the risk of hypertension (high blood pressure), cardiovascular and cerebrovascular disease, as well as diabetes [23, 28, 36]. With the increase in cardiovascular disease and diabetes and an aging population, a greater number of people will need to be screened, yet screening a large number of people with limited resources is difficult, necessitating a review of health care services.
Early identification of people at risk of morbidity and mortality due to diverse disease processes allows preventative measures to be commenced with the greatest efficacy. However, in many instances preclinical signs are not easily recognized and
often appear as signs or symptoms that are not specific to a particular disease. The retina and its blood vessel characteristics, however, have been shown to be a window into several disease processes. The identification of increased risk of disease progression is based on several markers in the eye, including venous dilatation, vessel tortuosity, and a change in the ratio of arteriolar to venular vessel diameter, especially in proximity to the optic disc. These morphological characteristics lend themselves to image analysis and automated classification [23, 28, 36].
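The arteriolar-to-venular diameter ratio (AVR) mentioned above can be sketched very simply. Note this is an illustrative simplification with made-up diameter values: clinical studies normally combine the largest vessels into central retinal artery and vein equivalents (CRAE/CRVE) using the revised Knudtson formulas rather than taking a plain mean.

```python
def avr(arteriole_diameters, venule_diameters):
    """Simplified arteriolar-to-venular ratio: mean arteriole diameter
    over mean venule diameter, both in the same units.  A plain mean is
    used here for illustration only."""
    mean_a = sum(arteriole_diameters) / len(arteriole_diameters)
    mean_v = sum(venule_diameters) / len(venule_diameters)
    return mean_a / mean_v

# Hypothetical diameters (micrometres) measured near the optic disc.
print(round(avr([96, 104, 100], [148, 152, 150]), 2))  # → 0.67
```

A lowered AVR, typically driven by arteriolar narrowing, is one of the markers associated with hypertension and cardiovascular risk in the studies cited above.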
11.2.3 Acquisition of Retinal Images
Digital images of the human retina are typically acquired with a digital fundus camera, a specialized camera that images the retina via the pupil of the eye. The camera contains an illumination system to illuminate the retina and optics to focus the image onto a 35 mm SLR camera body. Modern systems image at high resolution and in color with Nikon or Canon digital SLR camera backends. The field of view (FOV) of the retina that is imaged can usually be adjusted from 25° to 60° (as determined from the pupil) in two or three small steps. The smaller FOV gives better detail, but at the expense of a reduced view of the retina.
When monochromatic film was commonplace, a blue-green filter was sometimes placed in the optical path of the fundus camera, as the greatest contrast in retinal images occurs in the green wavelengths of light. An image acquired in this manner is referred to as a red-free image. With color digital imaging, it is common practice to take the green channel of an RGB image as a close approximation to the red-free image.
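Extracting that red-free approximation is a one-line operation on an RGB array. A minimal sketch with NumPy, using a tiny synthetic image in place of a real fundus photograph:

```python
import numpy as np

def red_free(rgb_image):
    """Approximate a red-free image by taking the green channel of an
    RGB retinal image stored as a (height, width, 3) array."""
    return rgb_image[..., 1]

# Synthetic stand-in for a fundus photograph: green channel set to 120.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 1] = 120
print(red_free(img))  # 2x2 array filled with 120
```

In practice the same indexing is applied to a full-resolution fundus image loaded from disk, and subsequent vessel and lesion detection operates on this green channel.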
In fluorescein angiographic imaging, the patient is injected with sodium fluorescein, which is transported in the blood supply to the retina. The fundus camera uses a flash filtered to the blue spectrum (465–490 nm) to excite the fluorescein, which fluoresces back in the green part of the spectrum (520–630 nm). The collected light is filtered with a barrier filter so that only the fluorescence from the retina is photographed. Images obtained with fluorescein angiography are monochromatic and highlight the blood flow in the retina. Since the fluorescein, once injected into the blood stream, takes a few seconds to completely fill the retinal vessels, images taken early in the angiographic sequence show the vasculature filling with fluorescein. First the fluorescein streams into the arterioles and then, a couple of seconds later, fills the venules. Over a period of minutes the images fade as the fluorescein is flushed out of the retinal blood supply.
Angiographic retinal images better highlight vascular lesions such as microaneurysms, ischaemia (absence of blood flow) and oedema (leakage of fluid into the surrounding tissue). The downside of the use of fluorescein is the inherent risk to the patient, with about 1 in 200,000 patients suffering anaphylactic shock. Indocyanine green (ICG) is sometimes used for imaging the choroidal vasculature, but requires a specially designed fundus camera due to its low-intensity fluorescence.
Other imaging technologies such as the scanning laser ophthalmoscope (SLO) and optical coherence tomography (OCT) may be encountered. The SLO scans a