
Recent Advances in the Level Set Method

205

Eq. 4.3 gives the level set evolution equation,

∂φ/∂t + F|∇φ| = 0.    (4.5)

This is the key evolution equation that was introduced in [85]. Through Eq. 4.5, the motion of the interface Γ(t) is captured so that at any time t,

Γ(t) = {x : φ(x, t) = 0}.    (4.6)

One key observation about Eq. 4.5 is that we have implicitly assumed that the function F is known over the entire domain of φ. Very often, this is not the case, and F is only defined on the interface itself. However, this problem can be solved by using velocity extensions, which will be discussed in Section 4.2.5.

4.2.2 Numerical Implementation of the Level Set Method

As noted in the introduction, the second critical part of the paper by Osher and Sethian was the use of methods borrowed from hyperbolic conservation laws for discretizing the level set equation, Eq. 4.5. This concept was generalized in [103], where numerical flux functions designed for hyperbolic conservation laws were used to solve Hamilton–Jacobi equations of the form

∂φ/∂t + H(∇φ) = 0.    (4.7)

Here, the function H(∇φ) is called the Hamiltonian, and it is a function of the gradient of φ. There is a rich history of numerical methods for hyperbolic conservation laws; an excellent review can be found in [75].

In the case of the level set method, the Hamiltonian is given by

H(∇φ) = F|∇φ|.    (4.8)

A first-order numerical Hamiltonian for solving Eq. 4.7 is given by Godunov's method, where

φ_{i,j}^{n+1} = φ_{i,j}^n − Δt F_{i,j} [max(sign(F_{i,j}) D^{−x}φ_{i,j}^n, −sign(F_{i,j}) D^{+x}φ_{i,j}^n, 0)²
    + max(sign(F_{i,j}) D^{−y}φ_{i,j}^n, −sign(F_{i,j}) D^{+y}φ_{i,j}^n, 0)²]^{1/2}.    (4.9)

Chopp

Here, the finite difference operators D^{±x} are defined by

D^{+x}φ_{i,j} = (φ_{i+1,j} − φ_{i,j})/Δx,    D^{−x}φ_{i,j} = (φ_{i,j} − φ_{i−1,j})/Δx.    (4.10)

The operators D^{±y} are defined in a similar manner for the j-th index. Note that the numerical flux function, the term multiplied by Δt in Eq. 4.9, senses the direction in which the interface is moving, then chooses the finite difference approximation which looks in the correct direction, also known as the upwind direction.
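As a concrete illustration, the upwind update of Eq. 4.9 can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the chapter; the array layout (index i along x) and the untouched boundary cells are my assumptions.

```python
import numpy as np

def godunov_step(phi, F, dt, dx, dy):
    """One first-order Godunov/upwind step of phi_t + F|grad phi| = 0 (Eq. 4.9).

    `phi` and `F` are 2D arrays. Boundary cells are left unchanged here
    for simplicity; a real code would apply boundary conditions.
    """
    # One-sided differences D^{-x}, D^{+x}, D^{-y}, D^{+y} on the interior.
    Dmx = (phi[1:-1, 1:-1] - phi[:-2, 1:-1]) / dx
    Dpx = (phi[2:, 1:-1] - phi[1:-1, 1:-1]) / dx
    Dmy = (phi[1:-1, 1:-1] - phi[1:-1, :-2]) / dy
    Dpy = (phi[1:-1, 2:] - phi[1:-1, 1:-1]) / dy

    s = np.sign(F[1:-1, 1:-1])
    # Each max() picks the difference that looks in the upwind direction.
    gradx = np.maximum.reduce([s * Dmx, -s * Dpx, np.zeros_like(Dmx)])
    grady = np.maximum.reduce([s * Dmy, -s * Dpy, np.zeros_like(Dmy)])
    grad = np.sqrt(gradx**2 + grady**2)

    out = phi.copy()
    out[1:-1, 1:-1] = phi[1:-1, 1:-1] - dt * F[1:-1, 1:-1] * grad
    return out
```

For a planar level set function such as φ(x, y) = x with F ≡ 1, the upwind gradient magnitude is exactly 1, so each step lowers the interior values by Δt, as expected for a front moving at unit speed.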

A second-order method based upon the ENO method [55] is given by

φ_{i,j}^{n+1} = φ_{i,j}^n − Δt F_{i,j} [max(sign(F_{i,j}) A, −sign(F_{i,j}) B, 0)²
    + max(sign(F_{i,j}) C, −sign(F_{i,j}) D, 0)²]^{1/2},    (4.11)

where

A = D^{−x}φ_{i,j}^n + (Δx/2) minmod(D^{−x}D^{−x}φ_{i,j}^n, D^{−x}D^{+x}φ_{i,j}^n),    (4.12)

B = D^{+x}φ_{i,j}^n − (Δx/2) minmod(D^{+x}D^{−x}φ_{i,j}^n, D^{+x}D^{+x}φ_{i,j}^n),    (4.13)

C = D^{−y}φ_{i,j}^n + (Δy/2) minmod(D^{−y}D^{−y}φ_{i,j}^n, D^{−y}D^{+y}φ_{i,j}^n),    (4.14)

D = D^{+y}φ_{i,j}^n − (Δy/2) minmod(D^{+y}D^{−y}φ_{i,j}^n, D^{+y}D^{+y}φ_{i,j}^n),    (4.15)

and where

minmod(a, b) = (1/2)(sign(a) + sign(b)) min(|a|, |b|).    (4.16)
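The minmod limiter of Eq. 4.16 translates directly into code. A minimal sketch (the helper `sign` is my own, not from the text):

```python
def sign(v):
    # -1, 0, or +1 depending on the sign of v
    return (v > 0) - (v < 0)

def minmod(a, b):
    # Eq. 4.16: returns the smaller-magnitude argument when a and b agree
    # in sign, and 0 when they disagree -- which suppresses spurious
    # oscillations in the ENO slope corrections of Eqs. 4.12-4.15.
    return 0.5 * (sign(a) + sign(b)) * min(abs(a), abs(b))
```

For example, `minmod(2.0, 5.0)` returns 2.0, while `minmod(-2.0, 5.0)` returns 0.0.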

In general, the speed function, F, in Eq. 4.5 is split into F = Fadv + Fdiff, where Fadv is the advective part and Fdiff is the diffusive part. When constructing the numerical method for solving Eq. 4.5, the numerical flux function is used for the advective part, and the diffusive part is discretized using standard central differences.

To illustrate this, we take an example used in [85], where F = 1 − εκ, 0 < ε ≪ 1, and κ is the mean curvature given by

κ = (φ_xx φ_y² − 2φ_xy φ_x φ_y + φ_yy φ_x²) / (φ_x² + φ_y²)^{3/2}.    (4.17)

In this example, F is broken down so that F_adv = 1 and F_diff = −εκ. Using Godunov's method for the advective term and central differences for the diffusive term gives

φ_{i,j}^{n+1} = φ_{i,j}^n − Δt [max(sign(F_{i,j}) D^{−x}φ_{i,j}^n, −sign(F_{i,j}) D^{+x}φ_{i,j}^n, 0)²
    + max(sign(F_{i,j}) D^{−y}φ_{i,j}^n, −sign(F_{i,j}) D^{+y}φ_{i,j}^n, 0)²]^{1/2}

    + εΔt [D^{+x}D^{−x}φ_{i,j} (D^{0y}φ_{i,j})² + D^{+y}D^{−y}φ_{i,j} (D^{0x}φ_{i,j})²
    − 2 D^{0x}D^{0y}φ_{i,j} D^{0x}φ_{i,j} D^{0y}φ_{i,j}] / [(D^{0x}φ_{i,j})² + (D^{0y}φ_{i,j})²].    (4.18)

Here, the difference operators, D^{0x}, D^{0y}, are the central finite difference operators defined by

D^{0x}φ_{i,j} = (φ_{i+1,j} − φ_{i−1,j})/(2Δx),    D^{0y}φ_{i,j} = (φ_{i,j+1} − φ_{i,j−1})/(2Δy).    (4.19)

 
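The diffusive term of Eq. 4.18, κ|∇φ| discretized with the central differences of Eq. 4.19, might look as follows in NumPy. This is a sketch under my own assumptions: index i runs along x, and the small `eps` guard against division by zero is an implementation detail not in the text.

```python
import numpy as np

def curvature_term(phi, dx, dy, eps=1e-12):
    """Central-difference discretization of kappa*|grad phi|, the quantity
    multiplied by eps*dt in the diffusive part of Eq. 4.18."""
    # Central first differences D^{0x}, D^{0y} on the interior.
    D0x = (phi[2:, 1:-1] - phi[:-2, 1:-1]) / (2 * dx)
    D0y = (phi[1:-1, 2:] - phi[1:-1, :-2]) / (2 * dy)
    # Second differences D^{+x}D^{-x}, D^{+y}D^{-y}, and the mixed D^{0x}D^{0y}.
    Dxx = (phi[2:, 1:-1] - 2 * phi[1:-1, 1:-1] + phi[:-2, 1:-1]) / dx**2
    Dyy = (phi[1:-1, 2:] - 2 * phi[1:-1, 1:-1] + phi[1:-1, :-2]) / dy**2
    Dxy = (phi[2:, 2:] - phi[:-2, 2:] - phi[2:, :-2] + phi[:-2, :-2]) / (4 * dx * dy)
    num = Dxx * D0y**2 + Dyy * D0x**2 - 2 * Dxy * D0x * D0y
    den = D0x**2 + D0y**2 + eps
    return num / den  # equals kappa * |grad phi| on the interior
```

As a sanity check, for the paraboloid φ = (x² + y²)/2 the exact value of κ|∇φ| is 1 away from the origin, and the central differences reproduce it to machine precision since φ is quadratic.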

4.2.3 The Fast Marching Method

An interesting method related to the level set method is the fast marching method, which was introduced by Sethian [105, 106]. While the fast marching method is used for some subsidiary algorithms within the general level set method, this method is interesting in its own right. The fast marching method solves a subclass of the problems normally solved with the level set method, but it does so much more quickly.

Like the level set method, the fast marching method also uses an implicit representation for an evolving interface, but for the fast marching method, the embedding function carries much more information. For the fast marching method, the entire evolution of the interface is encoded in the embedding function, not just a single time slice. In other words, the location of the interface at time t is given by the set

Γ(t) = {x : φ(x) = t}.    (4.20)

As a result, in the fast marching method, the embedding function, φ, has no time dependency.

The embedding function, φ, is constructed by solving a static Hamilton– Jacobi equation of the form

F|∇φ| = 1,    (4.21)

where F is again the speed of the interface. What makes the fast marching method fast is the fact that Eq. 4.21 can be solved with one pass over the mesh.


This contrasts with the level set method, where each time step requires an additional pass over the mesh to evolve the level set function in time.

The implementation of the fast marching method also uses the numerical flux functions discussed in Section 4.2.2; however, in this case, only one-sided differences such as Godunov’s method may be used. For example, suppose the values of φi−1, j , φi, j+1 are already determined, and we wish to compute φij . Then Eq. 4.21 is discretized using one-sided differences to obtain

F_{i,j}² [(D^{−x}φ_{i,j})² + (D^{+y}φ_{i,j})²] = 1.    (4.22)

This equation can be rewritten as a quadratic in terms of the unknown φ_{i,j}:

(1/Δx² + 1/Δy²) φ_{i,j}² − 2 (φ_{i−1,j}/Δx² + φ_{i,j+1}/Δy²) φ_{i,j}
    + (φ_{i−1,j}²/Δx² + φ_{i,j+1}²/Δy² − 1/F_{i,j}²) = 0.    (4.23)

In most cases, solving Eq. 4.23 will produce two solutions, one which is less than the values of φi−1, j , φi, j+1, and one which is greater. The larger of the two values is always chosen because of the causality assumption made by this method; values that are unknown are always greater than the known values.

Occasionally, Eq. 4.23 will not have any real roots. In that case, each of the coordinate directions is considered separately. For example, if we consider the x-direction, we assume that ∂φ/∂y = 0, and then discretize Eq. 4.21 to get

F_{i,j} D^{−x}φ_{i,j} = 1.    (4.24)

This equation is linear and is easily solved for φij . Similarly, the y-direction is considered, and the smaller of the two solutions is taken as the new estimate for φij .
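The local update just described, trying the two-neighbor quadratic of Eq. 4.23 and falling back to the one-dimensional Eq. 4.24, can be sketched as a single routine. The function name and the use of `math.inf` to mark a direction with no accepted neighbor are my conventions, not the chapter's.

```python
import math

def fmm_update(phi_x, phi_y, F, dx, dy):
    """Tentative value at a grid point from the known one-sided neighbor
    values phi_x (x-direction) and phi_y (y-direction), per Eqs. 4.23-4.24.
    Pass math.inf for a direction with no accepted neighbor; at least one
    direction must be finite."""
    a, b, c = 0.0, 0.0, -1.0 / F**2
    if math.isfinite(phi_x):
        a += 1.0 / dx**2; b -= 2.0 * phi_x / dx**2; c += phi_x**2 / dx**2
    if math.isfinite(phi_y):
        a += 1.0 / dy**2; b -= 2.0 * phi_y / dy**2; c += phi_y**2 / dy**2
    disc = b * b - 4 * a * c
    if disc >= 0:
        # Larger root enforces causality: the new value must exceed the
        # known neighbor values.
        return (-b + math.sqrt(disc)) / (2 * a)
    # No real root: fall back to the one-dimensional updates (Eq. 4.24)
    # and take the smaller estimate.
    cands = []
    if math.isfinite(phi_x):
        cands.append(phi_x + dx / F)
    if math.isfinite(phi_y):
        cands.append(phi_y + dy / F)
    return min(cands)
```

With both neighbors at 0 on a unit grid with F = 1, the quadratic gives φ = 1/√2, the exact distance-like value; with only one neighbor known, the update reduces to the linear Eq. 4.24.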

The key to solving Eq. 4.21 in one pass is to traverse the mesh in the proper order: the grid points must be evaluated in order of increasing t. This is accomplished by using a sorted heap which always keeps track of which grid point is to be evaluated next. To begin, the set of grid points is divided into three disjoint sets: the accepted points A, the tentative points T, and the distant points D. The accepted points in A are the points x_{i,j} for which the computed value of φ_{i,j} is already determined. The tentative points in T are the points x_{i,j} for which a tentative value for φ_{i,j} has been computed. The remainder of the points are in the set D. One by one, points in T are taken, in order of increasing value of φ_{i,j}, from the set T into A. Each time, points x_{i,j} in D which become adjacent to points in the set A are moved into the set T, and a tentative value for φ_{i,j} is computed using Eq. 4.21. The algorithm terminates when all points have migrated into the set A. See Fig. 4.3 for an illustration of the sets A, T, and D.

Figure 4.3: Illustration of the sets A, T, and D associated with the fast marching method. This figure reprinted from [22].

The full algorithm for the fast marching method becomes:

1. Initialize all the points adjacent to the initial interface with an initial value, and put those points in A. (A discussion about initialization follows later in Section 4.2.3.) All points x_{i,j} ∉ A which are adjacent to a point in A are given initial estimates for φ_{i,j} by solving Eq. 4.21; these tentative points are put in the set T. All remaining points are placed in D and given the initial value φ_{i,j} = +∞.

2. Choose the point x_{i,j} ∈ T which has the smallest value of φ_{i,j} and move it into A.

3. Any point adjacent to x_{i,j} (i.e., the points x_{i−1,j}, x_{i,j−1}, x_{i+1,j}, and x_{i,j+1}) which is in T has its value φ_{i,j} recalculated using Eq. 4.21. Any point adjacent to x_{i,j} which is in D has its value φ_{i,j} computed using Eq. 4.21 and is moved into the set T.

4. If T ≠ ∅, go to step 2.
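Steps 1–4 above map naturally onto a priority queue. The sketch below uses Python's `heapq` with "stale entry" skipping in place of a decrease-key operation (a common substitute; the text's own heap, with an explicit re-sort operation, is described in the next subsection). Constant speed F and equal grid spacing are my simplifying assumptions.

```python
import heapq
import math

def fast_marching(shape, seeds, F=1.0, dx=1.0):
    """Fast marching on a ni-by-nj grid. `seeds` maps grid points (i, j)
    to their initial phi values (the initial accepted set A)."""
    ni, nj = shape
    phi = {(i, j): math.inf for i in range(ni) for j in range(nj)}
    accepted = set()                       # the set A; T lives in the heap
    heap = []
    for point, value in seeds.items():
        phi[point] = value
        heapq.heappush(heap, (value, point))

    def tentative(i, j):
        # Smallest accepted neighbor in each direction (upwind, Eq. 4.22).
        px = min((phi[q] for q in ((i - 1, j), (i + 1, j)) if q in accepted),
                 default=math.inf)
        py = min((phi[q] for q in ((i, j - 1), (i, j + 1)) if q in accepted),
                 default=math.inf)
        h = dx / F
        if math.isfinite(px) and math.isfinite(py) and abs(px - py) < h:
            # Larger root of the quadratic, Eq. 4.23 (causality).
            return 0.5 * (px + py + math.sqrt(2 * h * h - (px - py) ** 2))
        return min(px, py) + h             # one-dimensional fallback, Eq. 4.24

    while heap:
        value, (i, j) = heapq.heappop(heap)
        if (i, j) in accepted or value > phi[(i, j)]:
            continue                       # stale heap entry; skip it
        accepted.add((i, j))               # step 2: move into A
        for q in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if q in phi and q not in accepted:   # step 3: update neighbors
                t = tentative(*q)
                if t < phi[q]:
                    phi[q] = t
                    heapq.heappush(heap, (t, q))
    return phi
```

Seeding a single corner with value 0 recovers an approximate distance map: the axis neighbors get exactly 1, and the diagonal neighbor gets 1 + √2/2 from the quadratic update.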


Figure 4.4: Example of a binary tree for the heap sort algorithm. (Values of φ increase from the top of the tree downward.)

A higher order version of the fast marching method can be obtained by replacing Eq. 4.23 with

max(D^{−x}φ_{i,j} + s_{x,−1} (Δx/2) D^{−x}D^{−x}φ_{i,j} + s_{x,−1}s_{x,−2} (Δx²/6) D^{−x}D^{−x}D^{−x}φ_{i,j},
    −D^{+x}φ_{i,j} + s_{x,1} (Δx/2) D^{+x}D^{+x}φ_{i,j} − s_{x,1}s_{x,2} (Δx²/6) D^{+x}D^{+x}D^{+x}φ_{i,j}, 0)²
+ max(D^{−y}φ_{i,j} + s_{y,−1} (Δy/2) D^{−y}D^{−y}φ_{i,j} + s_{y,−1}s_{y,−2} (Δy²/6) D^{−y}D^{−y}D^{−y}φ_{i,j},
    −D^{+y}φ_{i,j} + s_{y,1} (Δy/2) D^{+y}D^{+y}φ_{i,j} − s_{y,1}s_{y,2} (Δy²/6) D^{+y}D^{+y}D^{+y}φ_{i,j}, 0)²
= 1/F_{i,j}².    (4.25)

The fast marching method algorithm presented in [105] is first-order accurate, and can be recovered from Eq. 4.25 by taking all the switches s_{·,·} = 0. The second-order accurate method presented in [106] can also be recovered from Eq. 4.25 by taking all the switches s_{·,±2} = 0.

The Heap-Sort Algorithm

The heap sort algorithm employed in the fast marching method is a balanced binary-tree structure which always maintains the smallest value of φ at the top. For purposes of illustration, see Fig. 4.4. The top of the tree is indicated by the single node at the top in Fig. 4.4. Each of the nodes connected to the top is called the child of that node, and the top node is the parent of its children. Except for the top node, each node has one parent, and may have zero, one, or two children depending upon where it is in the tree.

The operations on the tree that are required for the fast marching method are:

1. Resort the tree from one element.


Figure 4.5: Example of the up-sweep for re-sorting a tree.

It is important that any operation on the tree ensures that after the operation, the tree preserves its property that any parent node has a smaller value of φ than either of its children. Occasionally, an operation on a particular node may mean that it is no longer correctly placed. This requires the tree to be re-sorted to accommodate the modified node. Either an up-sweep or a down-sweep process is required to restore the tree structure. Suppose there is a single misplaced node, N. First, compare N with its parent. If N is smaller than its parent, then an up-sweep is required. Otherwise, N is compared with its children, and if N is larger than either child, a down-sweep is used.

In the up-sweep, since N is smaller than its parent, N and its parent are exchanged. This process continues, comparing N with its new parent, until the parent is smaller or N has reached the top of the tree; see Fig. 4.5 for an illustration.

In the down-sweep, the node N is compared against its children. If N is larger than either child, it is exchanged with the smaller of its two children. Like the up-sweep, this process is repeated until N is smaller than both of its children, or N reaches the bottom of the tree. The down-sweep is illustrated in Fig. 4.6.

Figure 4.6: Example of the down-sweep for re-sorting a tree.


2. Remove the smallest (top) node of the tree.

When the top node of the tree is removed, the child of the top node, whose value for φ is smallest, is chosen to be the new top node. This process of promoting the smallest child up the tree is then propagated down until a node with less than two children is detected. This process preserves the property of the tree that parent nodes always have a smaller value of φ than the children.

3. Add a new node to the tree.

When a grid point is moved from the set D to T, it is also added to the tree. Since the initial estimate for φ at this point is likely to be larger than any of those already in the tree, it is best to add the node to an outer branch. For purposes of efficiency, care should be taken to keep the tree as balanced as possible, hence the new node should be added to the sparsest part of the tree. Once the node is appended, an up-sweep is performed to ensure proper placement.

4. Change the key value of an element in the tree.

When a grid point value is changed, it may require the tree to be resorted. If the value of the node is increased, then a down-sweep is done, and if the value is decreased, an up-sweep is done.
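The four operations can be realized with an array-backed heap plus a position map, so that operation 4 can locate the node to re-sort. A sketch in my own notation (the class and method names are not from the text):

```python
class MinHeap:
    """Array-backed binary min-heap mirroring the four operations in the
    text. `pos` tracks each key's index so that changing a value can
    re-sort the tree starting from that one node."""
    def __init__(self):
        self.data = []          # list of (value, key) pairs
        self.pos = {}           # key -> index in self.data

    def _swap(self, i, j):
        self.data[i], self.data[j] = self.data[j], self.data[i]
        self.pos[self.data[i][1]] = i
        self.pos[self.data[j][1]] = j

    def _up(self, i):           # up-sweep: exchange with parent while smaller
        while i > 0 and self.data[i][0] < self.data[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _down(self, i):         # down-sweep: exchange with the smaller child
        n = len(self.data)
        while True:
            c = 2 * i + 1
            if c + 1 < n and self.data[c + 1][0] < self.data[c][0]:
                c += 1
            if c >= n or self.data[i][0] <= self.data[c][0]:
                return
            self._swap(i, c)
            i = c

    def push(self, value, key):         # operation 3: append, then up-sweep
        self.data.append((value, key))
        self.pos[key] = len(self.data) - 1
        self._up(len(self.data) - 1)

    def pop(self):                      # operation 2: remove the top node
        top = self.data[0]
        last = self.data.pop()
        del self.pos[top[1]]
        if self.data:
            self.data[0] = last
            self.pos[last[1]] = 0
            self._down(0)
        return top

    def change(self, key, value):       # operation 4: re-sort from one node
        i = self.pos[key]
        old = self.data[i][0]
        self.data[i] = (value, key)
        if value < old:
            self._up(i)                 # value decreased: up-sweep
        else:
            self._down(i)               # value increased: down-sweep
```

In the fast marching method, the keys would be grid indices (i, j) and the values the tentative φ values; recalculating a tentative value in step 3 corresponds to `change`.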

Initialization of the Fast Marching Method

The best form of initialization is where the exact solution is assigned to all the points in the original set A. These are all the nodes which are immediately adjacent to the initial interface. Most often, the exact solution is not known, and the initial values for the set A must be approximated from the initial data.

The method for initializing the set A given in [105, 106] is only first-order accurate, and can be prone to errors which will propagate through the remainder of the calculation. It was shown in [22] that a more accurate method is available, which can drive higher order fast marching method solutions.

The underpinning of this higher degree of accuracy around the initial front is the use of a bicubic interpolation function p which is a second-order accurate local representation of a level set function φ, i.e., p(x) ≈ φ(x). The interpolation function p(x) can serve many purposes, including second-order accuracy for the distance to the zero level set, subgrid resolution of the shape of the interface, as well as subgrid resolution of the level set function φ(x) itself.

Figure 4.7: Sample portion of the mesh where a bicubic interpolation is used. This figure reprinted from [22].

We begin with a description of the bicubic interpolation for a level set function given on a rectangular mesh. The approximation is done locally in a box of the mesh bounded by grid points, call them xi, j , xi+1, j , xi, j+1, and xi+1, j+1, as in Fig. 4.7.

A bicubic interpolation p(x) of a function φ(x) is a function

p(x) = p(x, y) = Σ_{m=0}^{3} Σ_{n=0}^{3} a_{m,n} x^m y^n,    (4.26)

which solves the following set of equations:

p(x_{k,ℓ}) = φ(x_{k,ℓ}),

∂p/∂x (x_{k,ℓ}) = ∂φ/∂x (x_{k,ℓ}),

∂p/∂y (x_{k,ℓ}) = ∂φ/∂y (x_{k,ℓ}),

∂²p/∂x∂y (x_{k,ℓ}) = ∂²φ/∂x∂y (x_{k,ℓ}),

for k = i, i + 1 and ℓ = j, j + 1. This gives 16 equations for the 16 unknown coefficients a_{m,n}. Solving for a_{m,n} makes p(x, y) a bicubic interpolating function of φ(x, y) on the rectangle bounded by the corners x_{i,j}, x_{i+1,j}, x_{i,j+1}, and x_{i+1,j+1}.

Since φ is only known on the mesh points, the values for the derivatives of φ must be approximated. We use second-order finite difference approximations for the derivatives of φ:

∂φ/∂x (x_{m,n}) ≈ (1/(2Δx)) (φ(x_{m+1,n}) − φ(x_{m−1,n})),

∂φ/∂y (x_{m,n}) ≈ (1/(2Δy)) (φ(x_{m,n+1}) − φ(x_{m,n−1})),

∂²φ/∂x∂y (x_{m,n}) ≈ (1/(4ΔxΔy)) (φ(x_{m+1,n+1}) − φ(x_{m−1,n+1}) − φ(x_{m+1,n−1}) + φ(x_{m−1,n−1})),

for m = i, i + 1 and n = j, j + 1. Thus, construction of the interpolant p requires all the points shown in Fig. 4.7. Higher order local approximations can be made using higher order finite difference approximations and using a larger set of grid points around the box where the interpolant is used.
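The construction of p can be sketched by assembling the 16 equations in the monomial basis of Eq. 4.26 and solving for a_{m,n}. This is a sketch, not the chapter's code: solving the dense 16×16 system with `numpy.linalg.solve` is the simplest route, though a production code would factor the system once per cell geometry.

```python
import numpy as np

def bicubic_coeffs(phi, i, j, dx=1.0, dy=1.0):
    """Coefficients a_{m,n} of the bicubic p on the cell with corners
    x_{i,j} .. x_{i+1,j+1}, using the second-order difference formulas in
    the text for the derivative data. Assumes 1 <= i, j and i+2, j+2 are
    within the array bounds."""
    def fx(m, n):  return (phi[m + 1, n] - phi[m - 1, n]) / (2 * dx)
    def fy(m, n):  return (phi[m, n + 1] - phi[m, n - 1]) / (2 * dy)
    def fxy(m, n): return (phi[m + 1, n + 1] - phi[m - 1, n + 1]
                           - phi[m + 1, n - 1] + phi[m - 1, n - 1]) / (4 * dx * dy)
    A, rhs = [], []
    for k in (i, i + 1):
        for l in (j, j + 1):
            x, y = k * dx, l * dy
            # Four rows per corner: p, p_x, p_y, p_xy matched to the data.
            A.append([x**m * y**n for m in range(4) for n in range(4)])
            rhs.append(phi[k, l])
            A.append([m * x**(m - 1) * y**n if m > 0 else 0.0
                      for m in range(4) for n in range(4)])
            rhs.append(fx(k, l))
            A.append([n * x**m * y**(n - 1) if n > 0 else 0.0
                      for m in range(4) for n in range(4)])
            rhs.append(fy(k, l))
            A.append([m * n * x**(m - 1) * y**(n - 1) if m > 0 and n > 0 else 0.0
                      for m in range(4) for n in range(4)])
            rhs.append(fxy(k, l))
    return np.linalg.solve(np.array(A), np.array(rhs)).reshape(4, 4)
```

Because the difference formulas are exact for polynomials of low degree, interpolating φ(x, y) = xy recovers the single coefficient a_{1,1} = 1 exactly.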

Now, given the interpolating function p(x, y) in the domain [xi, xi+1] × [yj , yj+1], and given a point (x0, y0) in that domain, we compute the distance between (x0, y0) and the zero level curve of p(x, y). The point (x1, y1) on the zero level curve closest to (x0, y0) must satisfy two conditions:

p(x1, y1) = 0,

(4.27)

∇p(x1, y1) × ((x0, y0) − (x1, y1)) = 0.

(4.28)

Equation 4.27 is a requirement that (x1, y1) must be on the interface. Equation 4.28 is a requirement that the interface normal, given by ∇p(x1, y1), must be aligned with the line through the points (x0, y0) and (x1, y1). Equations 4.27 and 4.28 are solved simultaneously using Newton's method. Typically, fewer than five iterations are necessary to achieve sufficient accuracy.

Given the front speed F(x1, y1) and the initial distance to the front, d = ‖(x1, y1) − (x0, y0)‖, the initial value for a point adjacent to the initial front for the general fast marching method solving Eq. 4.21 is d/F.
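The Newton iteration for Eqs. 4.27 and 4.28 can be sketched generically. For brevity this sketch approximates the Jacobian by finite differences and demonstrates the solve on a simple implicit circle rather than an actual bicubic p; both are my simplifications, not the chapter's approach.

```python
import numpy as np

def closest_point_on_zero_set(p, grad_p, x0, tol=1e-12, max_iter=20):
    """Newton solve of Eqs. 4.27-4.28 for the point x1 on {p = 0} nearest
    to x0. `p` and `grad_p` are callables; the Jacobian is built by
    finite differences for brevity."""
    def G(x):
        gx, gy = grad_p(x)
        # Residual: (p(x1), cross product of grad p with x0 - x1).
        # The cross product vanishes exactly when the normal is aligned
        # with the line from x1 to x0.
        return np.array([p(x), gx * (x0[1] - x[1]) - gy * (x0[0] - x[0])])
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        g = G(x)
        if np.linalg.norm(g) < tol:
            break
        h = 1e-7
        J = np.column_stack([(G(x + h * e) - g) / h for e in np.eye(2)])
        x = x - np.linalg.solve(J, g)
    return x
```

For the unit circle p(x, y) = x² + y² − 1, the closest point to (2, 1) is (2, 1)/√5, at distance √5 − 1; dividing that distance by the local speed F then gives the fast marching seed value d/F described above.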