- •Biological and Medical Physics, Biomedical Engineering
- •Medical Image Processing
- •Preface
- •Contents
- •Contributors
- •1.1 Medical Image Processing
- •1.2 Techniques
- •1.3 Applications
- •1.4 The Contribution of This Book
- •References
- •2.1 Introduction
- •2.2 MATLAB and DIPimage
- •2.2.1 The Basics
- •2.2.2 Interactive Examination of an Image
- •2.2.3 Filtering and Measuring
- •2.2.4 Scripting
- •2.3 Cervical Cancer and the Pap Smear
- •2.4 An Interactive, Partial History of Automated Cervical Cytology
- •2.5 The Future of Automated Cytology
- •2.6 Conclusions
- •References
- •3.1 The Need for Seed-Driven Segmentation
- •3.1.1 Image Analysis and Computer Vision
- •3.1.2 Objects Are Semantically Consistent
- •3.1.3 A Separation of Powers
- •3.1.4 Desirable Properties of Seeded Segmentation Methods
- •3.2 A Review of Segmentation Techniques
- •3.2.1 Pixel Selection
- •3.2.2 Contour Tracking
- •3.2.3 Statistical Methods
- •3.2.4 Continuous Optimization Methods
- •3.2.4.1 Active Contours
- •3.2.4.2 Level Sets
- •3.2.4.3 Geodesic Active Contours
- •3.2.5 Graph-Based Methods
- •3.2.5.1 Graph Cuts
- •3.2.5.2 Random Walkers
- •3.2.5.3 Watershed
- •3.2.6 Generic Models for Segmentation
- •3.2.6.1 Continuous Models
- •3.2.6.2 Hierarchical Models
- •3.2.6.3 Combinations
- •3.3 A Unifying Framework for Discrete Seeded Segmentation
- •3.3.1 Discrete Optimization
- •3.3.2 A Unifying Framework
- •3.3.3 Power Watershed
- •3.4 Globally Optimum Continuous Segmentation Methods
- •3.4.1 Dealing with Noise and Artifacts
- •3.4.2 Globally Optimal Geodesic Active Contour
- •3.4.3 Maximal Continuous Flows and Total Variation
- •3.5 Comparison and Discussion
- •3.6 Conclusion and Future Work
- •References
- •4.1 Introduction
- •4.2 Deformable Models
- •4.2.1 Point-Based Snake
- •4.2.1.1 User Constraint Energy
- •4.2.1.2 Snake Optimization Method
- •4.2.2 Parametric Deformable Models
- •4.2.3 Geometric Deformable Models (Active Contours)
- •4.2.3.1 Curve Evolution
- •4.2.3.2 Level Set Concept
- •4.2.3.3 Geodesic Active Contour
- •4.2.3.4 Chan–Vese Deformable Model
- •4.3 Comparison of Deformable Models
- •4.4 Applications
- •4.4.1 Bone Surface Extraction from Ultrasound
- •4.4.2 Spinal Cord Segmentation
- •4.4.2.1 Spinal Cord Measurements
- •4.4.2.2 Segmentation Using Geodesic Active Contour
- •4.5 Conclusion
- •References
- •5.1 Introduction
- •5.2 Imaging Body Fat
- •5.3 Image Artifacts and Their Impact on Segmentation
- •5.3.1 Partial Volume Effect
- •5.3.2 Intensity Inhomogeneities
- •5.4 Overview of Segmentation Techniques Used to Isolate Fat
- •5.4.1 Thresholding
- •5.4.2 Selecting the Optimum Threshold
- •5.4.3 Gaussian Mixture Model
- •5.4.4 Region Growing
- •5.4.5 Adaptive Thresholding
- •5.4.6 Segmentation Using Overlapping Mosaics
- •5.6 Conclusions
- •References
- •6.1 Introduction
- •6.2 Clinical Context
- •6.3 Vessel Segmentation
- •6.3.1 Survey of Vessel Segmentation Methods
- •6.3.1.1 General Overview
- •6.3.1.2 Region-Growing Methods
- •6.3.1.3 Differential Analysis
- •6.3.1.4 Model-Based Filtering
- •6.3.1.5 Deformable Models
- •6.3.1.6 Statistical Approaches
- •6.3.1.7 Path Finding
- •6.3.1.8 Tracking Methods
- •6.3.1.9 Mathematical Morphology Methods
- •6.3.1.10 Hybrid Methods
- •6.4 Vessel Modeling
- •6.4.1 Motivation
- •6.4.1.1 Context
- •6.4.1.2 Usefulness
- •6.4.2 Deterministic Atlases
- •6.4.2.1 Pioneering Works
- •6.4.2.2 Graph-Based and Geometric Atlases
- •6.4.3 Statistical Atlases
- •6.4.3.1 Anatomical Variability Handling
- •6.4.3.2 Recent Works
- •References
- •7.1 Introduction
- •7.2 Linear Structure Detection Methods
- •7.3.1 CCM for Imaging Diabetic Peripheral Neuropathy
- •7.3.2 CCM Image Characteristics and Noise Artifacts
- •7.4.1 Foreground and Background Adaptive Models
- •7.4.2 Local Orientation and Parameter Estimation
- •7.4.3 Separation of Nerve Fiber and Background Responses
- •7.4.4 Postprocessing the Enhanced-Contrast Image
- •7.5 Quantitative Analysis and Evaluation of Linear Structure Detection Methods
- •7.5.1 Methodology of Evaluation
- •7.5.2 Database and Experiment Setup
- •7.5.3 Nerve Fiber Detection Comparison Results
- •7.5.4 Evaluation of Clinical Utility
- •7.6 Conclusion
- •References
- •8.1 Introduction
- •8.2 Methods
- •8.2.1 Linear Feature Detection by MDNMS
- •8.2.2 Check Intensities Within 1D Window
- •8.2.3 Finding Features Next to Each Other
- •8.2.4 Gap Linking for Linear Features
- •8.2.5 Quantifying Branching Structures
- •8.3 Linear Feature Detection on GPUs
- •8.3.1 Overview of GPUs and Execution Models
- •8.3.2 Linear Feature Detection Performance Analysis
- •8.3.3 Parallel MDNMS on GPUs
- •8.3.5 Results for GPU Linear Feature Detection
- •8.4.1 Architecture and Implementation
- •8.4.2 HCA-Vision Features
- •8.4.3 Linear Feature Detection and Analysis Results
- •8.5 Selected Applications
- •8.5.1 Neurite Tracing for Drug Discovery and Functional Genomics
- •8.5.2 Using Linear Features to Quantify Astrocyte Morphology
- •8.5.3 Separating Adjacent Bacteria Under Phase Contrast Microscopy
- •8.6 Perspectives and Conclusions
- •References
- •9.1 Introduction
- •9.2 Bone Imaging Modalities
- •9.2.1 X-Ray Projection Imaging
- •9.2.2 Computed Tomography
- •9.2.3 Magnetic Resonance Imaging
- •9.2.4 Ultrasound Imaging
- •9.3 Quantifying the Microarchitecture of Trabecular Bone
- •9.3.1 Bone Morphometric Quantities
- •9.3.2 Texture Analysis
- •9.3.3 Frequency-Domain Methods
- •9.3.4 Use of Fractal Dimension Estimators for Texture Analysis
- •9.3.4.1 Frequency-Domain Estimation of the Fractal Dimension
- •9.3.4.2 Lacunarity
- •9.3.4.3 Lacunarity Parameters
- •9.3.5 Computer Modeling of Biomechanical Properties
- •9.4 Trends in Imaging of Bone
- •References
- •10.1 Introduction
- •10.1.1 Adolescent Idiopathic Scoliosis
- •10.2 Imaging Modalities Used for Spinal Deformity Assessment
- •10.2.1 Current Clinical Practice: The Cobb Angle
- •10.2.2 An Alternative: The Ferguson Angle
- •10.3 Image Processing Methods
- •10.3.1 Previous Studies
- •10.3.2 Discrete and Continuum Functions for Spinal Curvature
- •10.3.3 Tortuosity
- •10.4 Assessment of Image Processing Methods
- •10.4.1 Patient Dataset and Image Processing
- •10.4.2 Results and Discussion
- •10.5 Summary
- •References
- •11.1 Introduction
- •11.2 Retinal Imaging
- •11.2.1 Features of a Retinal Image
- •11.2.2 The Reason for Automated Retinal Analysis
- •11.2.3 Acquisition of Retinal Images
- •11.3 Preprocessing of Retinal Images
- •11.4 Lesion Based Detection
- •11.4.1 Matched Filtering for Blood Vessel Segmentation
- •11.4.2 Morphological Operators in Retinal Imaging
- •11.5 Global Analysis of Retinal Vessel Patterns
- •11.6 Conclusion
- •References
- •12.1 Introduction
- •12.1.1 The Progression of Diabetic Retinopathy
- •12.2 Automated Detection of Diabetic Retinopathy
- •12.2.1 Automated Detection of Microaneurysms
- •12.3 Image Databases
- •12.4 Tortuosity
- •12.4.1 Tortuosity Metrics
- •12.5 Tracing Retinal Vessels
- •12.5.1 NeuronJ
- •12.5.2 Other Software Packages
- •12.6 Experimental Results and Discussion
- •12.7 Summary and Future Work
- •References
- •13.1 Introduction
- •13.2 Volumetric Image Visualization Methods
- •13.2.1 Multiplanar Reformation (2D slicing)
- •13.2.2 Surface-Based Rendering
- •13.2.3 Volumetric Rendering
- •13.3 Volume Rendering Principles
- •13.3.1 Optical Models
- •13.3.2 Color and Opacity Mapping
- •13.3.2.2 Transfer Function
- •13.3.3 Composition
- •13.3.4 Volume Illumination and Illustration
- •13.4 Software-Based Raycasting
- •13.4.1 Applications and Improvements
- •13.5 Splatting Algorithms
- •13.5.1 Performance Analysis
- •13.5.2 Applications and Improvements
- •13.6 Shell Rendering
- •13.6.1 Application and Improvements
- •13.7 Texture Mapping
- •13.7.1 Performance Analysis
- •13.7.2 Applications
- •13.7.3 Improvements
- •13.7.3.1 Shading Inclusion
- •13.7.3.2 Empty Space Skipping
- •13.8 Discussion and Outlook
- •References
- •14.1 Introduction
- •14.1.1 Magnetic Resonance Imaging
- •14.1.2 Compressed Sensing
- •14.1.3 The Role of Prior Knowledge
- •14.2 Sparsity in MRI Images
- •14.2.1 Characteristics of MR Images (Prior Knowledge)
- •14.2.2 Choice of Transform
- •14.2.3 Use of Data Ordering
- •14.3 Theory of Compressed Sensing
- •14.3.1 Data Acquisition
- •14.3.2 Signal Recovery
- •14.4 Progress in Sparse Sampling for MRI
- •14.4.1 Review of Results from the Literature
- •14.4.2 Results from Our Work
- •14.4.2.1 PECS
- •14.4.2.2 SENSECS
- •14.4.2.3 PECS Applied to CE-MRA
- •14.5 Prospects for Future Developments
- •References
- •15.1 Introduction
- •15.2 Acquisition of DT Images
- •15.2.1 Fundamentals of DTI
- •15.2.2 The Pulsed Field Gradient Spin Echo (PFGSE) Method
- •15.2.3 Diffusion Imaging Sequences
- •15.2.4 Example: Anisotropic Diffusion of Water in the Eye Lens
- •15.2.5 Data Acquisition
- •15.3 Digital Processing of DT Images
- •15.3.2 Diagonalization of the DT
- •15.3.3 Gradient Calibration Factors
- •15.3.4 Sorting Bias
- •15.3.5 Fractional Anisotropy
- •15.3.6 Other Anisotropy Metrics
- •15.4 Applications of DTI to Articular Cartilage
- •15.4.1 Bovine AC
- •15.4.2 Human AC
- •References
- •Index
100 | D.P. Costello and P.A. Kenny
Fig. 5.7 Images and their corresponding intensity histograms obtained using T1W TSE and WS b-SSFP sequences. Both images are acquired from the same anatomic slice of the same subject with breath hold. The T1W TSE image (a) is shown to have lower contrast between fat and nonfat. Water and partial-volume fat signal are also in the same gray level range as shown in (b), making automated fat quantification difficult. The WS b-SSFP image (d), however, shows negligible water signal, leading to fat-only images. The corresponding histogram (e) shows that signal from suppressed water, partial-volume fat, and full-volume fat are delineated. This makes it possible to perform automated, yet accurate fat quantification. This material is reproduced with permission of John Wiley & Sons, Inc. [44] with the addition of images (c) and (f). Image (c) is the result of OTSU thresholding applied to (a) and (f) is the result of the segmentation technique described by Peng et al. [44]
5.4.3 Gaussian Mixture Model
When overlapping peaks cannot be avoided, the Gaussian mixture model (GMM) can be used to model complex probability density functions (PDFs), such as image histograms f(x), as a sum of k overlapping Gaussians. This is illustrated in Fig. 5.8 and can be expressed as:
f(x) = ∑_{i=1}^{k} pi N(x|μi, σi²),    (5.6)
where μi and σi are the mean and standard deviation of the ith Gaussian, respectively, and pi is the mixing proportion of the ith constituent Gaussian used to model the PDF of the image. The mixing proportions satisfy the conditions:
k |
|
pi > 0, ∑ pi = 1. |
(5.7) |
i=1
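As a concrete illustration of (5.6) and (5.7), the following sketch evaluates a three-class mixture on a gray-level axis; the class means, standard deviations, and mixing proportions are hypothetical values chosen only for illustration:

```python
import numpy as np

def gauss(x, mu, sigma):
    # Gaussian density N(x | mu, sigma^2)
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def gmm_pdf(x, p, mu, sigma):
    # Mixture of (5.6): f(x) = sum_i p_i N(x | mu_i, sigma_i^2)
    x = np.asarray(x, dtype=float)
    return sum(pi * gauss(x, mi, si) for pi, mi, si in zip(p, mu, sigma))

# Hypothetical three-class model of an MR histogram: background, soft tissue, fat
p     = [0.5, 0.3, 0.2]       # mixing proportions; must satisfy (5.7)
mu    = [20.0, 90.0, 200.0]   # class mean gray levels (illustrative only)
sigma = [10.0, 15.0, 20.0]

assert abs(sum(p) - 1.0) < 1e-12 and all(pi > 0 for pi in p)  # condition (5.7)
```

Because each component integrates to one and the mixing proportions sum to one, f(x) itself remains a valid probability density.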
Fig. 5.8 (a) T1-weighted GE image containing fat and soft tissue. (b) Image histogram of (a) with the threshold value T illustrated using a red line; voxels with gray-level intensities higher than T are classified as fat. (c) Image histogram of (a) and its constituent Gaussians estimated using the GMM
Each Gaussian cluster is then modeled as the product of its mixing proportion pi and (5.8), the general equation for a Gaussian distribution:
N(x|μi, σi) = (1/(σi√(2π))) exp(−(x − μi)²/(2σi²)).    (5.8)
Figure 5.8c illustrates an image histogram containing three tissue classes and its constituent Gaussians calculated using a GMM. One of the most common algorithms used to calculate pi, μi, and σi is the expectation maximization (EM) algorithm [45].
The GMM assumes that the PDF of each constituent tissue type in an image histogram is Gaussian and that tissue of the same class is uniform in intensity throughout the image or region to be segmented. When segmenting fat in MR images, k is usually set to 3 using a priori knowledge (fat, soft tissue, and background).
The probability of an event (the gray level of a pixel/voxel) x under the full mixture model is given by
P(x|Θ) = ∑_{i=1}^{k} pi N(x|μi, σi),    (5.9)
where

Θ = {μ1, . . ., μk, σ1, . . ., σk, p1, . . ., pk}.    (5.10)
The EM algorithm is a two-step iterative algorithm consisting of an expectation step (E-step) and a maximization step (M-step). The algorithm can be initialized using k-means clustering, by equal partitioning of the data into k regions, or by automated seed initialization [46]. The expected log likelihood function for the complete data
is calculated in the E-step and is defined by Q(Θ, Θ(t)), evaluated using the current parameter estimates Θ(t):

Q(Θ, Θ(t)) ≡ E[log N(X, Y|Θ) | X, Θ(t)].    (5.11)
The function Q(Θ, Θ(t)) contains two arguments: Θ denotes the parameters that will be optimized in order to maximize the likelihood, and Θ(t) corresponds to the current estimates. X is the observed data and remains constant, while Y is the missing data, which is governed by the underlying distributions.
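Of the initialization options mentioned above, equal partitioning is the simplest to sketch. The helper below (an illustrative function, not taken from the chapter) splits the sorted gray levels into k equal-size chunks and reads off starting parameters for the EM iteration:

```python
import numpy as np

def init_equal_partition(samples, k):
    # Split the sorted data into k equal-size chunks; each chunk seeds one Gaussian.
    chunks = np.array_split(np.sort(np.asarray(samples, dtype=float)), k)
    mu = np.array([c.mean() for c in chunks])
    sigma = np.maximum([c.std() for c in chunks], 1e-6)  # guard against zero spread
    n = sum(c.size for c in chunks)
    p = np.array([c.size for c in chunks], dtype=float) / n
    return p, mu, sigma
```

Because the chunks are taken from sorted data, the initial means are ordered by gray level, which gives the E-step a sensible starting point even before any iteration.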
The second step in the algorithm is the M-step. This step maximizes the expectation computed in (5.11) to generate a new set of parameters:
Θ(t + 1) = arg max_Θ Q(Θ, Θ(t)).    (5.12)
If we have an estimate of the means (μi) and standard deviations (σi) of the constituent Gaussians, we can compute the probability of a point (gray level) in the histogram belonging to the ith Gaussian. The maximization step updates the Gaussian parameters using (5.13), (5.14), and (5.15):
pi^new = (1/N) ∑_{j=1}^{N} P(i|xj, Θ(t)),    (5.13)

μi^new = [∑_{j=1}^{N} xj P(i|xj, Θ(t))] / [∑_{j=1}^{N} P(i|xj, Θ(t))],    (5.14)

σi^new = [∑_{j=1}^{N} P(i|xj, Θ(t)) (xj − μi^new)(xj − μi^new)^T] / [∑_{j=1}^{N} P(i|xj, Θ(t))],    (5.15)

where the sums run over the N data points xj and P(i|xj, Θ(t)) is the probability that xj belongs to the ith Gaussian under the current estimates.
The algorithm iterates between (5.11) and (5.12) until convergence is reached. Based on the Gaussians estimated using the EM algorithm, an optimized threshold is calculated that minimizes misclassification error. The threshold for fat is set to the gray level closest to the intersection of the fat and soft-tissue Gaussians, as illustrated in Fig. 5.8b. Figure 5.13 in Sect. 5.5 is an example of an image which has been segmented successfully using the GMM. Lynch et al. [46] proposed a novel approach to initialize cluster centers based on histogram analysis, using the GMM to obtain a more robust segmentation of fat in MR images; no objective assessment of this method was carried out.

Fig. 5.9 Seed-point selection in a region of interest. Each of the seed voxel's nearest-neighbor voxels is indicated by a yellow arrow
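The full E-step/M-step loop and the intersection-based threshold described above can be sketched in a few lines. This is a generic 1D EM implementation run on synthetic data (the two-class setup, gray-level values, and iteration count are assumptions for illustration), not the chapter's or Lynch et al.'s implementation:

```python
import numpy as np

def em_gmm_1d(x, k, n_iter=100):
    """Fit a 1D Gaussian mixture to samples x with EM; returns (p, mu, sigma)."""
    x = np.asarray(x, dtype=float)
    # Initialize by equal partitioning of the sorted data into k chunks
    chunks = np.array_split(np.sort(x), k)
    mu = np.array([c.mean() for c in chunks])
    sigma = np.maximum([c.std() for c in chunks], 1e-6)
    p = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities P(i | x_j, Theta(t)) for every sample/component pair
        dens = p * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: updates of the form (5.13)-(5.15)
        nk = resp.sum(axis=0)
        p = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.maximum(np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk), 1e-6)
    return p, mu, sigma

def intersection_threshold(p, mu, sigma, i, j):
    """Gray level between mu_i and mu_j where the two weighted Gaussians intersect."""
    g = np.linspace(min(mu[i], mu[j]), max(mu[i], mu[j]), 10001)
    f = lambda m: p[m] * np.exp(-0.5 * ((g - mu[m]) / sigma[m]) ** 2) / (sigma[m] * np.sqrt(2 * np.pi))
    return g[np.argmin(np.abs(f(i) - f(j)))]
```

With two well-separated classes (e.g., soft tissue around one gray level and fat around another), the returned threshold lies between the two means, close to where the weighted densities cross.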
5.4.4 Region Growing
Region growing is an image segmentation technique that creates regions based on some predefined homogeneity criterion, such as texture, gray level, or color. Gray level is the characteristic most commonly used when segmenting fat in MR images. Region growing aims to merge voxels or small regions in an image into larger ones based on a homogeneity criterion. The first step in any region-growing algorithm is the selection of a seed point, as illustrated in Fig. 5.9. This can be a single voxel or a small region of voxels and can be selected manually or automatically. Next, a homogeneity criterion is set: for example, if the gray level of a neighboring voxel lies between two threshold values, merge that voxel (region) with the seed point. The seed voxel's nearest neighbors are illustrated using the yellow arrows in Fig. 5.9. Each new voxel in the region becomes a growth point and is compared to its nearest neighbors using the same homogeneity criterion. Growth of the region ceases when
no new growth points can be created within the confines of the homogeneity criterion. Growth or merging of regions continues iteratively until this stopping criterion is reached.
Region Growing Algorithm
- •Select a seed point (or points)
- •Define a homogeneity/merging criterion
- •Join to the region all voxels connected to the seed that satisfy the homogeneity criterion
- •Stop the algorithm when no adjacent voxels satisfy the homogeneity criterion
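The steps above can be sketched as a breadth-first flood from the seed. Here the homogeneity criterion is a two-threshold gray-level test on a 2D array (a toy sketch with hypothetical thresholds, not a production implementation):

```python
from collections import deque

import numpy as np

def region_grow(image, seed, lo, hi):
    """Grow a region from `seed`, merging 4-connected voxels with gray level in [lo, hi]."""
    image = np.asarray(image)
    grown = np.zeros(image.shape, dtype=bool)
    grown[seed] = True
    queue = deque([seed])            # voxels still to be used as growth points
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # nearest neighbors
            nr, nc = r + dr, c + dc
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not grown[nr, nc] and lo <= image[nr, nc] <= hi):
                grown[nr, nc] = True   # merge the voxel into the region
                queue.append((nr, nc))
    return grown                      # boolean mask of the grown region
```

Growth stops automatically when no queued growth point has an unvisited neighbor inside the threshold band, matching the stopping rule above.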
Figure 5.10 shows an image containing a number of manually selected seed points and the resultant segmentation using region growing. Unlike thresholding, region growing can differentiate objects based on their spatial location, which enables the classification of different fat categories. In Fig. 5.10b, note also that bone marrow adipose tissue and the liver are not classified as fat. Fat classification as distinct from segmentation is an important issue and is discussed in detail in Sect. 5.5.
Region growing can be implemented using a number of algorithms, including region merging, region splitting, and split-and-merge algorithms [39, 47]. Split-and-merge algorithms can be used to automate region growing, removing the need for subjective seed-point selection [48]. Automated seed selection does not always guarantee complete segmentation, owing to the unpredictability of seed-point placement, and may therefore require manual addition of extra seed points after initial segmentation. In noisy images, it is possible for holes to appear in the segmented regions; postprocessing may be used to reduce these.
Siegel et al. [6] used a region-growing technique in its simplest form to segment fat in transverse MR images of the abdomen. However, region growing is sometimes combined with edge detection to reduce the occurrence of over- and under-segmentation. In a study by Yan et al. [49], a three-step segmentation process was employed to isolate skeletal structures in CT images. The first step was the application of a three-dimensional region-growing algorithm with adaptive local thresholds, which was followed by a series of boundary correction steps. This approach is both accurate and reliable for high-contrast, low-noise CT images. Subsequently, a similar technique was applied to the segmentation of fat in MR images by Brennan et al. [15].
Brennan et al. [15] used a four-step automated algorithm for the quantification of fat in MR images. Their algorithm was based on initialization of fat regions using conservative thresholding, followed by steps of boundary enhancement, region growing, and region refining (postprocessing). A weak correlation was found between total body fat quantified using MRI and BMI; the authors attribute this to the flawed nature of BMI measurements. A more appropriate measure of accuracy would have been comparison with manual delineation of fat by an expert radiologist (the gold standard). The combination of region-growing and edge-based segmentation algorithms contradicts work by Rajapakse and Kruggel [23], who state that region and edge detection schemes are unsuitable in MR images due