Fast methods for the Eikonal and related Hamilton–Jacobi equations on unstructured meshes

Communicated by Alexandre J. Chorin, University of California, Berkeley, CA (received for review December 8, 1999)
Abstract
The Fast Marching Method is a numerical algorithm for solving the Eikonal equation on a rectangular orthogonal mesh in O(M log M) steps, where M is the total number of grid points. The scheme relies on an upwind finite difference approximation to the gradient and a resulting causality relationship that lends itself to a Dijkstra-like programming approach. In this paper, we discuss several extensions to this technique, including higher order versions on unstructured meshes in R^{n} and on manifolds, and connections to more general static Hamilton–Jacobi equations.
Fast Marching Methods (1) are numerical algorithms for solving the nonlinear Eikonal equation

∥∇u(x)∥ = F(x), x ∈ Ω;  u(x) = g(x), x ∈ Γ, [1]

on a Cartesian mesh in O(M log M) steps, where M is the total number of grid points in the domain. Here, Ω is a domain in R^{n}, and the right-hand side F(x) > 0 is typically supplied as a known input to the equation, as is the boundary condition that u equals a known function g(x) given along a prescribed curve or surface Γ in Ω. The technique hinges on using numerically consistent upwind finite difference approximations to the operators in the Eikonal equation that select the correct viscosity solution. The structure of this finite difference approximation to the gradient yields a causality relationship that lends itself to an efficient programming approach.
The Fast Marching Method is connected to Huygens' principle, which is a construction involving expanding wavefronts, and to Dijkstra's method, which is an algorithm for computing smallest-cost paths on a network. The viscosity solution to the Eikonal equation ∥∇u(x)∥ = F(x) can be interpreted through Huygens' principle in the following way: circular wavefronts are drawn at each point on the boundary, with radius proportional to 1/F(x). The envelope of these wavefronts is then used to construct a new set of points, and the process is repeated; in the limit, the Eikonal solution is obtained. The Fast Marching Method mimics this construction: a computational grid is used to carry the solution u, and an upwind, viscosity-satisfying finite difference scheme is used to approximate the wavefront.
The order in which the grid values produced through these finite difference approximations are obtained is reminiscent of Dijkstra's method, a classic algorithm for computing shortest paths on a network (2). Dijkstra's method keeps track of the "current smallest cost" for reaching a grid point and fans out along the network links to touch the adjacent grid points. The Fast Marching Method exploits the same idea in the context of an approximation to the underlying partial differential equation, rather than the discrete network links.
The Fast Marching Method is intertwined with some earlier work on front propagation, including work on curve and surface evolution in ref. 3, the suggestion to use schemes from hyperbolic conservation laws to approximate front motion in ref. 4, the introduction of level set methods by Osher and Sethian (5), and the narrow band level set method (6). In fact, the Fast Marching Method came in part from examining the limit of the narrow band method as the band was reduced to one grid cell. In this paper, we discuss several extensions to Fast Marching Methods, including higher order versions on Cartesian grids and unstructured meshes, on manifolds and in R^{n}; we also explore the connections to more general static Hamilton–Jacobi equations. For the sake of notational simplicity, the discussion below is limited to R^{2}, R^{3}, and two-dimensional manifolds; the results hold for arbitrary dimension.
Fast Marching Methods.
We begin by finding a discretized version of the Eikonal Eq. 1 on the Cartesian grid. The easiest way to obtain such a discretization is to replace the gradient by the first-order approximation (see ref. 7):

[max(D_{ij}^{−x}u, −D_{ij}^{+x}u, 0)^{2} + max(D_{ij}^{−y}u, −D_{ij}^{+y}u, 0)^{2}]^{1/2} = F_{ij}, [2]

where we have used standard finite difference notation D_{ij}^{−x}u = (u_{i,j} − u_{i−1,j})/h and D_{ij}^{+x}u = (u_{i+1,j} − u_{i,j})/h. Here, u_{ij} is the value of u on a grid at the point (ih, jh) with grid spacing h. The backward and forward operators D^{−y} and D^{+y} in the other coordinate direction are similar. This approximation is consistent and stable and ensures that the viscosity solution is chosen.
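The one-point update implied by Eq. 2 can be written in a few lines. The sketch below is illustrative only, not code from the paper; the function name and argument order are our own. It takes the smaller (upwind) neighbor value in each coordinate direction and solves the resulting quadratic, falling back to a one-sided update when the two values differ by more than hF:

```python
import math

def upwind_update(u_w, u_e, u_s, u_n, h, f):
    """One-point update solving Eq. 2 at a grid point, given its four
    neighbor values (math.inf where unknown), spacing h, and F = f."""
    a = min(u_w, u_e)              # upwind neighbor value in x
    b = min(u_s, u_n)              # upwind neighbor value in y
    hf = h * f
    if abs(a - b) >= hf:           # only the smaller direction contributes
        return min(a, b) + hf
    # both directions contribute: (u - a)^2 + (u - b)^2 = (hF)^2
    return 0.5 * (a + b + math.sqrt(2.0 * hf * hf - (a - b) ** 2))
```

With a single known neighbor the update reduces to min(a, b) + hF; with two equal known neighbors a = b = 0 and h = F = 1, it returns the two-sided quadratic root √2/2.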
One method for solving Eq. 2, described by Rouy and Tourin (7), is through iteration, which in three dimensions leads to an O(N^{4}) algorithm, where N is the number of points in each direction. The key to the Fast Marching Methods lies in the observation that the upwind approximation possesses a specific causality relationship. By “causality,” we mean that the solution of Eq. 2 at each grid point depends only on the smaller adjacent values. Thus, we can systematically build the solution in the order of increasing values of u.
Suppose, at some time, the solution to the Eikonal equation is known at a set of points (denoted Accepted points). For every not-yet-accepted grid point that has an Accepted neighbor, we compute a trial solution to the quadratic Eq. 2 by using the given values for u at Accepted points and values of ∞ at all other points. We now observe that the smallest of these trial solutions must be correct, because it depends only on Accepted values that are themselves smaller. This causality relationship can be exploited to compute the solution efficiently and systematically as follows.
First, tag the points in the initial conditions as Accepted. Then tag as Considered all points one grid point away and compute values at those points by solving Eq. 2. Finally, tag all other grid points as Far. The loop is then:

1. Let Trial be the point in Considered with the smallest value of u.
2. Tag as Considered all neighbors of Trial that are not Accepted; if a neighbor is in Far, remove it from that set and add it to Considered.
3. Add Trial to Accepted and remove it from Considered.
4. Recompute the values of u at all Considered neighbors of Trial by solving the piecewise quadratic equation according to Eq. 2, and return to step 1.
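The tagging-and-loop procedure above can be sketched as a short program. This is a minimal first-order illustration under our own naming and interface, not the authors' code. Python's heapq offers no decrease-key operation, so instead of reordering an entry in place, the sketch pushes a fresh entry on each recomputation and skips stale entries when they are popped:

```python
import heapq
import math

def fast_marching(n, h, sources, F=lambda i, j: 1.0):
    """First-order Fast Marching Method for ||grad u|| = F on an
    n-by-n grid with spacing h; u = 0 at the source points."""
    INF = math.inf
    u = [[INF] * n for _ in range(n)]
    accepted = [[False] * n for _ in range(n)]
    heap = []                          # Considered points, keyed by value
    for i, j in sources:
        u[i][j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def update(i, j):
        # solve Eq. 2: max(D-x, -D+x, 0)^2 + max(D-y, -D+y, 0)^2 = F^2
        a = min(u[i - 1][j] if i > 0 else INF,
                u[i + 1][j] if i < n - 1 else INF)
        b = min(u[i][j - 1] if j > 0 else INF,
                u[i][j + 1] if j < n - 1 else INF)
        hf = h * F(i, j)
        if abs(a - b) >= hf:           # one-sided update
            return min(a, b) + hf
        return 0.5 * (a + b + math.sqrt(2.0 * hf * hf - (a - b) ** 2))

    while heap:
        val, i, j = heapq.heappop(heap)
        if accepted[i][j]:
            continue                   # stale heap entry: skip (lazy deletion)
        accepted[i][j] = True          # Trial becomes Accepted
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            p, r = i + di, j + dj
            if 0 <= p < n and 0 <= r < n and not accepted[p][r]:
                new = update(p, r)
                if new < u[p][r]:      # recompute Considered neighbor
                    u[p][r] = new
                    heapq.heappush(heap, (new, p, r))
    return u
```

For a unit-speed point source, this sketch reproduces values along the grid axes to rounding error, whereas values along the diagonal carry the O(h) error of the first-order scheme.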
This procedure is the Fast Marching Method as described in ref. 1. Some early applications include photolithography (12), a comparison of this approach with volume-of-fluid techniques (13), and a fast algorithm for image segmentation (8); see also ref. 9 for a different Dijkstra-like algorithm, which obtains the viscosity solution through a control-theoretic discretization that hinges on a causality relationship based on the optimality criterion.
The key to an efficient implementation of the Fast Marching Method lies in a fast way of locating the Considered grid point with the smallest value for u. An efficient scheme for doing so, discussed in detail in ref. 10, can be devised by using a min-heap structure, similar to what is done in Dijkstra's method. Given M elements in the heap, this structure allows us to change any element in the heap and reorder the heap in O(log M) steps. Thus, the computational efficiency of the total Fast Marching Method for the mesh with M total points is O(M log M); it takes M steps to touch each mesh point, where each step is O(log M), because the heap has to be reordered each time the values are changed.
Two extensions to the basic technique have added to the capabilities of this method. First, Kimmel and Sethian (11) moved the basic first-order scheme to unstructured acute triangulated meshes in R^{2} and on two-dimensional surfaces and used this technique to compute shortest-path geodesics on triangulated manifolds. Second, higher order versions of the Cartesian Fast Marching Method were developed by Sethian in ref. 10; these techniques replaced the basic first-order upwind operators by double-backward second-order approximations to the first derivatives in each coordinate direction in such a way that the one-pass nature of the method was preserved. For details of these second-order extensions, see ref. 10.
In this paper, we build secondorder methods for arbitrary unstructured meshes on manifolds and in R^{n} and show how certain static Hamilton–Jacobi equations in the plane can be recast as Eikonal equations on appropriately chosen manifolds.
Our interest in unstructured meshes stems from our intent to solve Hamilton–Jacobi equations on manifolds.
Unstructured Mesh Methods.
Derivative approximations.
Because there is no natural choice of the coordinate system for an unstructured mesh, we compute the gradient as a linear combination of n directional derivatives. For any grid point x, the difference approximation of the directional derivative is obviously available for each direction x − x_{r}, where x_{r} is any grid point adjacent to x.
Suppose the directional derivative approximations are available for the directions defined by the linearly independent row unit vectors P_{1}, … , P_{n}. (If x is a vertex of simplex xx_{1}… x_{n}, then we will choose P_{r} = (x − x_{r})/∥x − x_{r}∥ for r = 1, … , n.) Consider the n by n nonsingular matrix P having the vectors P_{r} (r = 1, … , n) as its rows. Let v_{r}(x) be the value of the directional derivative for the direction P_{r} evaluated at the point x. Assuming that the function u is differentiable at x, we have P∇u(x) = v(x), where v(x) is a column vector of v_{r}(x) values. Combining this equation with the Eikonal equation, we can now write an equation for v(x):

v(x)^{T}(PP^{T})^{−1}v(x) = F^{2}(x). [3]

The particular difference operator used to approximate v_{r} will depend on the type of the mesh and will determine the order of convergence of the numerical method. However, if the difference approximations depend linearly on u(x) (i.e., on the value of the function at the point where the approximation is performed), then the resulting discretized version of the Eikonal equation will always be quadratic in terms of u(x), regardless of the number of dimensions.
To obtain the discretized equation, we now replace each v_{r} with the corresponding difference approximation: v_{r} ≈ a_{r}u + b_{r}, where b_{r} depends linearly on the values of u (and possibly of ∇u for higher order schemes) at the grid points around x. For convenience, let Q = (PP^{T})^{−1} and use v ≈ ua + b. Then, the discretized version of Eq. 3 for the grid point x is the quadratic equation

(ua + b)^{T}Q(ua + b) = F^{2}(x). [4]
Upwinding criteria.
Suppose we are computing the value of u(x) from some simplex having x as one of its vertices. If u is known at the other vertices of that simplex, then u(x) can be computed by solving the quadratic Eq. 4. Now we can precisely define our upwinding criterion: the computed value of u(x) can be accepted only if the update comes from within that simplex, i.e., only if the computed approximation of (−∇u(x)) lies inside the simplex (see Fig. 1). This restriction is equivalent to requiring that all of the components of the vector Qv ≈ Q(ua + b) are nonnegative. If this condition is satisfied, we say that x is updated from this simplex and that this simplex is defining for x. The upwinding condition for an arbitrary triangle xx_{1}x_{2} is equivalent to two inequalities: (ua_{1} + b_{1}) ≥ (P_{1}⋅P_{2}^{T})(ua_{2} + b_{2}) and (ua_{2} + b_{2}) ≥ (P_{1}⋅P_{2}^{T})(ua_{1} + b_{1}), where P_{r} = (x − x_{r})/∥x − x_{r}∥. Thus, for the first-order difference approximation on the equilateral triangle xx_{A}x_{B}, the upwinding requirement means that we will accept u(x) only if u(x) ≥ 2u(x_{B}) − u(x_{A}) and u(x) ≥ 2u(x_{A}) − u(x_{B}).
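To make Eqs. 3 and 4 and the upwinding test concrete, here is a sketch for a single triangle in R^{2} with the first-order directional differences v_{r} ≈ (u(x) − u(x_{r}))/∥x − x_{r}∥, so that a_{r} = 1/∥x − x_{r}∥ and b_{r} = −u(x_{r})/∥x − x_{r}∥. The function name and return convention are our own, and a nondegenerate triangle is assumed:

```python
import math

def simplex_update(x, x1, x2, u1, u2, F=1.0):
    """Solve the quadratic Eq. 4 for u(x) on triangle x x1 x2, given
    u(x1) = u1 and u(x2) = u2, using first-order directional
    differences.  Returns (u, ok), where ok is the upwinding test
    that all components of Q(ua + b) are nonnegative."""
    d1 = math.hypot(x[0] - x1[0], x[1] - x1[1])
    d2 = math.hypot(x[0] - x2[0], x[1] - x2[1])
    P1 = ((x[0] - x1[0]) / d1, (x[1] - x1[1]) / d1)
    P2 = ((x[0] - x2[0]) / d2, (x[1] - x2[1]) / d2)
    c = P1[0] * P2[0] + P1[1] * P2[1]          # P1 . P2 (|c| < 1 assumed)
    a = (1.0 / d1, 1.0 / d2)
    b = (-u1 / d1, -u2 / d2)

    def qform(v, w):  # v^T Q w with Q = (P P^T)^{-1}
        return (v[0] * w[0] + v[1] * w[1]
                - c * (v[0] * w[1] + v[1] * w[0])) / (1.0 - c * c)

    A = qform(a, a)
    B = 2.0 * qform(a, b)
    C = qform(b, b) - F * F
    disc = B * B - 4.0 * A * C
    if disc < 0:
        return math.inf, False
    u = (-B + math.sqrt(disc)) / (2.0 * A)     # larger root
    w = (u * a[0] + b[0], u * a[1] + b[1])     # v = ua + b
    Qw = ((w[0] - c * w[1]) / (1.0 - c * c),
          (w[1] - c * w[0]) / (1.0 - c * c))
    ok = Qw[0] >= -1e-12 and Qw[1] >= -1e-12
    return u, ok
```

On the right triangle x = (1,1), x1 = (0,1), x2 = (1,0) with u1 = u2 = 1 and F = 1, this returns u = 1 + 1/√2 with the upwind test satisfied, matching the Cartesian two-sided update.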
It is possible that selecting a different simplex to define P will result in a different value of u(x). To avoid this ambiguity, we always choose the defining simplex that produces the smallest new value for u(x); this choice corresponds to a scheme similar to Godunov's method on a Cartesian grid.
Extension to obtuse meshes.
The original Fast Marching Method requires solving Eq. 4 for all of the simplexes adjacent to the grid point x. But what should be done if one or more of the vertices are not yet Accepted? Previous versions of the Fast Marching Method handled this difficulty by using the values at the Considered (as well as Accepted) points for updates and/or by computing updates based on the available partial information (values of u and/or ∇u only at the Accepted vertices of the simplex; ref. 10). Even though this technique is quite useful for “nice” acute triangulations, it can lead to numerical instability of the scheme when used for arbitrary unstructured meshes (see ref. 11). Computing updates based on partial information (i.e., using some but not all vertices) can be particularly dangerous for higher accuracy methods, because the obtained estimate for ∇u might be very different from the true gradient.
The causality relationship requires u(x) to depend only on the smaller values of u at the grid points adjacent to x, which means that if u(x) is updated from simplex xx_{1}x_{2} [i.e., u(x) is produced by solving Eq. 4 corresponding to that simplex], then both u(x_{1}) and u(x_{2}) should be smaller than u(x). This condition is necessary for using the Fast Marching Method, because that method accepts the values at the grid points in ascending order; thus, if either x_{1} or x_{2} has a larger value than x, then it will not be Accepted at the time when we are evaluating x and cannot be used for computing u(x). For the Cartesian grid, the causality relationship is the result of using upwind difference approximations. Previously, we defined the upwinding criteria for an arbitrary triangulated mesh: if u(x) is updated from simplex xx_{1}x_{2}, then the vector −∇u(x) should point into that simplex. It is easy to show that the causality still follows from the upwinding requirement, provided the simplex has only acute angles.
However, if the triangulated mesh contains simplexes with obtuse angles, then the causality relationship might not hold even in the limit. Consider, for example, the front advancing with unit speed in the direction (1,1) (suppose we are solving ∥∇u∥ = 1 in R^{2} with the boundary condition u = 0 on the line y = −x). Consider the simplex xx_{1}x_{2} from Fig. 2. The update for the grid point x should clearly be coming from this simplex, because −∇u(x) points into it. However, it is also clear that u(x_{1}) < u(x) < u(x_{2}); thus, u(x_{2}) will not be available for computing u(x).
One possible solution is to locally build numerical support at obtuse angles, as was suggested in ref. 11. For an obtuse angle x_{1}xx_{2} (see Fig. 2a), consider its splitting section—an angle such that any ray inside it will split x_{1}xx_{2} into two acute angles. Find the closest Accepted grid point in the splitting section and then use that point as if it were adjacent to x. Thus, we would use simplex x_{1}xx_{6} in Fig. 2a.
There are some disadvantages to this method. First of all, the implementation for higher dimensions is rather cumbersome. Second, implementing it for triangulated surfaces requires an additional step of "unfolding" (11). Third, the method is no longer confined to considering the grid points immediately adjacent to x, because we need to look back for an Accepted point in the splitting section. An upper bound for how far back we need to look in the splitting section is available, but it depends on the maximum angle of the triangulation: a wider angle corresponds to a narrower splitting section, which is less likely to contain a grid point near x.
We observe that the splitting section method can be improved further by noting that the cause of the problem is not just the obtuse angle in the defining simplex but also the fact that some of the vertices of that simplex are not yet Accepted. Thus, it is not necessary to find an Accepted node in the splitting section; it is enough to find an Accepted node such that the resulting virtual simplex intersects the splitting section. This strategy often allows us to look back much less; thus, in Fig. 2b, for example, the grid point x_{3} is the first Accepted point found such that xx_{3} intersects the splitting section. Therefore, the virtual simplex x_{1}xx_{3} will be used to compute the update for u(x).
We note that our construction works equally well on manifolds. As an example, in Fig. 3, we show offsets equidistant from the bounding box on a manifold that represents a complex machine part; the triangulation is obtained by mapping a regular triangular mesh in the xy plane onto the surface, thereby creating a large number of obtuse and neardegenerate triangles, including some with angles bigger than 160°.
Higher Order Versions.
We now create higher accuracy Fast Marching Methods by using higher order difference approximations for the directional derivatives. It would seem that such approximations can be used only if the solution u is sufficiently smooth; nonetheless, the fact that ∇u is undefined at some points does not prevent us from using this approach: u is differentiable almost everywhere, and characteristics never emanate from the shocks, i.e., no information is created at a shock. However, the order of the difference approximation does not always correspond to the order of convergence of the method. Still, such methods converge faster than the ones that use the first-order approximations (10). We further discuss the rate of convergence of the higher accuracy methods in Numerical Tests.
Cartesian higher order methods.
A higher order Fast Marching Method on a Cartesian grid was first presented by Sethian in ref. 10. Herein, we show that such methods can also be obtained from the directional derivative approximation perspective, as described in Derivative approximations.
For a Cartesian grid, the natural choice of the coordinate system is aligned with the grid lines. Then, for any grid point x, a direction vector P_{r} is always equal (up to sign) to one of the canonical basis vectors. Thus, for x inside the grid (i.e., away from the boundary), both points x_{r,1} = x − hP_{r} and x_{r,2} = x − 2hP_{r} are also present in the grid. Then we can use the well known second-order difference approximation for the directional derivative

v_{r} ≈ (3u(x) − 4u(x_{r,1}) + u(x_{r,2}))/(2h). [5]

Using the notation introduced in Derivative approximations, we can write

a_{r} = 3/(2h),  b_{r} = (−4u(x_{r,1}) + u(x_{r,2}))/(2h). [6]

Because this approximation is valid only inside the domain, we need to have the exact values of u for the grid points near the boundary to start the algorithm. If these values are not available, we can use the first-order Fast Marching Algorithm with a much smaller mesh size to obtain second-order accurate approximations of u at those points. Because this approximation is second-order only away from a singularity, we note that the exact (or second-order accurate) values of u are also needed for the grid points in the narrow band around any singularities at the boundary. Finally, we note that the same higher order difference approximations can be used even for non-Cartesian grids, provided all of the grid points lie on straight lines and are equidistant on those lines.
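The second-order claim for the double-backward difference in Eq. 5 is easy to check numerically. The following sketch (names ours) applies it to u(x) = sin x at x = 1 and measures the error for two mesh sizes; halving h should reduce the error roughly fourfold:

```python
import math

def d2_oneside(u, x, h):
    """Second-order one-sided (double-backward) difference, Eq. 5:
    u'(x) ~ (3 u(x) - 4 u(x - h) + u(x - 2h)) / (2h)."""
    return (3.0 * u(x) - 4.0 * u(x - h) + u(x - 2.0 * h)) / (2.0 * h)

u = math.sin
e1 = abs(d2_oneside(u, 1.0, 0.10) - math.cos(1.0))
e2 = abs(d2_oneside(u, 1.0, 0.05) - math.cos(1.0))
# halving h cuts the error by about a factor of 4 (second order)
```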
Higher order methods.
Typically, we do not have orthogonal difference operators on an unstructured mesh. Fortunately, we can still build higher order directional derivative approximations by using the gradient information at the grid points adjacent to x.
Consider a grid point x_{r} adjacent to x and the corresponding direction vector P_{r} = (x − x_{r})/∥x − x_{r}∥. Supposing that both u(x_{r}) and ∇u(x_{r}) are known, we can write a second-order approximation for the directional derivative

v_{r} ≈ 2(u(x) − u(x_{r}))/∥x − x_{r}∥ − ∇u(x_{r})⋅P_{r}. [7]

Using the notation introduced in Derivative approximations, we can write

a_{r} = 2/∥x − x_{r}∥,  b_{r} = −2u(x_{r})/∥x − x_{r}∥ − ∇u(x_{r})⋅P_{r}. [8]

We can also compute the second-order accurate approximation for the gradient at x, namely ∇u(x) = P^{−1}v ≈ P^{−1}[u(x)a + b], provided u(x) is known with at least second-order accuracy as well. Thus, as the algorithm runs, we will store for each Accepted grid point the computed values of both u and ∇u, to be used later when recomputing the Considered points.
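Eq. 7 can be checked the same way. The sketch below (function names and the test point are our own choices) approximates the directional derivative of u(x,y) = e^{x+y} along a unit direction P_{r} and confirms that the error falls roughly fourfold when h is halved:

```python
import math

def u(x, y):
    return math.exp(x + y)

def grad_u(x, y):
    g = math.exp(x + y)
    return (g, g)

def dirderiv_eq7(x, y, px, py, h):
    """Eq. 7: v_r ~ 2 (u(x) - u(x_r)) / ||x - x_r|| - grad u(x_r) . P_r,
    with x_r = x - h P_r the adjacent point at distance h."""
    xr, yr = x - h * px, y - h * py
    gx, gy = grad_u(xr, yr)
    return 2.0 * (u(x, y) - u(xr, yr)) / h - (gx * px + gy * py)

px, py = 0.6, 0.8                      # unit direction, chosen for illustration
gx, gy = grad_u(0.2, 0.1)
exact = gx * px + gy * py              # true directional derivative at x
e1 = abs(dirderiv_eq7(0.2, 0.1, px, py, 0.10) - exact)
e2 = abs(dirderiv_eq7(0.2, 0.1, px, py, 0.05) - exact)
```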
Because this approach requires information about the gradient, we need to have the exact values of ∇u for the grid points on the boundary to start the algorithm. If these values are not available, we can use the first-order Fast Marching Algorithm with a much denser mesh in the narrow band near the boundary to obtain accurate approximations of ∇u at the points in that narrow band.
Finally, we note that an additional step of “gradient mapping” is required to use this higher order method on nonsmooth triangulated surfaces (J.A.S. and A.V., unpublished work).
Numerical tests.
Next, we test the methods by finding the numerical solutions for the Eikonal equation ∥∇u(x)∥ = 1 in R^{2} with different boundary conditions. The viscosity solution of this equation taken with zero boundary condition is the “distance from boundary” function. Two examples with shocks are considered: one where the shock line occurs along the grid lines and another where the shock line is not aligned with the grid. Both the L_{2} and L_{∞} errors are computed on the grid. Note that the rate of convergence in the L_{2} norm generally corresponds to the order of the difference approximations. However, the rate of convergence in the L_{∞} norm might be lower (depending on the location of shocks relative to the grid). This phenomenon is due to the fact that the higher order approximations are meaningful only where the solution is sufficiently smooth. Fortunately, viscosity solutions are differentiable almost everywhere, and no information emanates from the shocks. The upwinding difference approximations used in our methods help the numerical solution to mimic this useful property of the true solution; thus, the L_{2} norm convergence is not affected by the larger errors committed near the shocks.
The tests are performed on the parallelogram with vertices at (0,0), (1,0), (0.5, √3/2), and (1.5, √3/2). The exact values for the distance are used in a narrow band of radius 0.1 around the initial points to start the algorithm. We use a grid of equilateral triangles on this domain. The numerical results for a nonacute triangulation are very similar.
We first use the Fast Marching Method to compute the distance from two vertices: (0,0) and (1.5, √3/2). The shock line runs along the edges of simplexes (along the shorter diagonal of the parallelogram; Fig. 4). Table 1 shows the errors under mesh refinement.
We now compute the minimal distance from the other two vertices: (1,0) and (0.5, √3/2). In this case, the shock line is not aligned with the grid lines (edges of simplexes), and it runs along the longer diagonal (Fig. 5). Table 2 shows the errors under mesh refinement. The L_{2} error is still second-order convergent for the second-order scheme, whereas its L_{∞} error is of lower order, although still much better than the L_{∞} error of the first-order scheme. This effect is to be expected; because the shock is not aligned with the grid, the approximation is sometimes performed across the shock lines, which leads to first-order errors there.
Eikonal Equations on Surfaces and More General Static Hamilton–Jacobi Equations.
Suppose we are given the graph of a function z = f(x,y) and attempt to solve the Eikonal equation ∥∇u∥ = F(x,y) on that manifold. To be clear, 1/F(x,y) gives the speed in the direction normal to the level line u = constant on the manifold z = f(x,y). Projecting down onto the xy plane, this Eikonal equation translates into a particular static Hamilton–Jacobi equation on the plane. We now use this argument in reverse as follows. Consider any static Hamilton–Jacobi equation in the xy plane of the form

a(x,y)u_{x}^{2} + 2b(x,y)u_{x}u_{y} + c(x,y)u_{y}^{2} = 1, [9]

together with boundary conditions for u. Now suppose we can find functions p(x,y), q(x,y), and F(x,y) such that p_{y} = q_{x}, F(x,y) > 0, and

a = (1 + q^{2})/(F^{2}(1 + p^{2} + q^{2})), b = −pq/(F^{2}(1 + p^{2} + q^{2})), c = (1 + p^{2})/(F^{2}(1 + p^{2} + q^{2})). [10]

It can then be shown (see ref. 10) that the solution of Eq. 9 can be obtained by solving the Eikonal equation ∥∇u∥ = F(x,y) on the manifold z = f(x,y), where f_{x} = p and f_{y} = q. Thus, for any static Hamilton–Jacobi equation of the form given by Eq. 9, if we can find functions p and q satisfying the above, then we can construct the surface z = f(x,y), approximate it with a triangulated mesh, and then solve the straightforward Eikonal problem on the manifold.
As an example, we consider the following equation:

(1 + γsin²(2πx)cos²(2πy))u_{x}^{2} − 2γsin(2πx)cos(2πx)sin(2πy)cos(2πy)u_{x}u_{y} + (1 + γcos²(2πx)sin²(2πy))u_{y}^{2} = 1 + γ(cos²(2πx)sin²(2πy) + sin²(2πx)cos²(2πy)), [11]

where γ = (0.9π)^{2}, x,y ∈ [0,1], and the boundary condition is u(0.5,0.5) = 0. We can find the functions p = 0.9πcos(2πx)sin(2πy) and q = 0.9πsin(2πx)cos(2πy), which satisfy our compatibility requirements, and then solve the Eikonal equation ∥∇u∥ = 1 on the surface f(x,y) = 0.45sin(2πx)sin(2πy). Fig. 6 shows the evolving front on the surface and the solution to the original problem on the plane.
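The compatibility requirements for this example can be verified numerically; the sketch below (helper names ours) checks p_{y} = q_{x} and that the surface f satisfies f_{x} = p and f_{y} = q by central differences:

```python
import math

def p(x, y):
    return 0.9 * math.pi * math.cos(2 * math.pi * x) * math.sin(2 * math.pi * y)

def q(x, y):
    return 0.9 * math.pi * math.sin(2 * math.pi * x) * math.cos(2 * math.pi * y)

def f(x, y):
    return 0.45 * math.sin(2 * math.pi * x) * math.sin(2 * math.pi * y)

def dd(g, x, y, dx, dy, h=1e-6):
    """Central difference of g along the direction (dx, dy)."""
    return (g(x + h * dx, y + h * dy) - g(x - h * dx, y - h * dy)) / (2 * h)

x0, y0 = 0.3, 0.7                  # arbitrary interior test point
p_y = dd(p, x0, y0, 0.0, 1.0)      # compatibility: p_y should equal q_x
q_x = dd(q, x0, y0, 1.0, 0.0)
f_x = dd(f, x0, y0, 1.0, 0.0)      # the surface satisfies f_x = p, f_y = q
f_y = dd(f, x0, y0, 0.0, 1.0)
```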
We have presented techniques for computing certain Hamilton–Jacobi equations on unstructured meshes. Further discussion may be found elsewhere (J.A.S. and A.V., unpublished work).
Acknowledgments
The authors thank L. C. Evans, O. Hald, and M. Falcone. Supported by the Applied Mathematics and Science Office of Energy Research, U.S. Department of Energy (Grant DE-AC03-76SF00098) and the Office of Naval Research (Grant FDN00014-96-1-0381).
Footnotes

* To whom reprint requests should be addressed. E-mail: sethian@math.berkeley.edu.

Article published online before print: Proc. Natl. Acad. Sci. USA, 10.1073/pnas.090060097.

Article and publication date are at www.pnas.org/cgi/doi/10.1073/pnas.090060097
 Received December 8, 1999.
 Accepted February 10, 2000.
 Copyright © The National Academy of Sciences
References
1. Sethian, J. A.
4. Sethian, J. A. (in Concus, P. & Finn, R., eds.)
8. Malladi, R. & Sethian, J. A.
10. Sethian, J. A.
11. Kimmel, R. & Sethian, J. A.
12. Sethian, J. A.
13. Helmsen, J., Puckett, E. G., Colella, P. & Dorr, M.