# Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data

^{*}To whom correspondence should be addressed at: Department of Statistics, Room 128, Sequoia Hall, Stanford University, Stanford, CA 94305. E-mail: donoho@stat.stanford.edu.

## Abstract

We describe a method for recovering the underlying parametrization of scattered data (*m*_{i}) lying on a manifold *M* embedded in high-dimensional Euclidean space. The method, Hessian-based locally linear embedding, derives from a conceptual framework of local isometry in which the manifold *M*, viewed as a Riemannian submanifold of the ambient Euclidean space ℝ^{n}, is locally isometric to an open, connected subset Θ of Euclidean space ℝ^{d}. Because Θ does not have to be convex, this framework is able to handle a significantly wider class of situations than the original ISOMAP algorithm. The theoretical framework revolves around a quadratic form ℋ(*f*) = ∫_{M} ‖*H*_{f}(*m*)‖²_{F} *dm* defined on functions *f* : *M* → ℝ. Here *H*_{f} denotes the Hessian of *f*, and ℋ(*f*) averages the Frobenius norm of the Hessian over *M*. To define the Hessian, we use orthogonal coordinates on the tangent planes of *M*. The key observation is that, if *M* truly is locally isometric to an open, connected subset of ℝ^{d}, then ℋ(*f*) has a (*d* + 1)-dimensional null space consisting of the constant functions and a *d*-dimensional space of functions spanned by the original isometric coordinates. Hence, the isometric coordinates can be recovered up to a linear isometry. Our method may be viewed as a modification of locally linear embedding and our theoretical framework as a modification of the Laplacian eigenmaps framework, where we substitute a quadratic form based on the Hessian in place of one based on the Laplacian.

**Keywords:** manifold learning | ISOMAP | tangent coordinates | isometry | Laplacian eigenmaps

## 1. Introduction

A recent article in *Science* (1) proposed to recover a low-dimensional parametrization of high-dimensional data by assuming that the data lie on a manifold *M* which, viewed as a Riemannian submanifold of the ambient Euclidean space, is globally isometric to a convex subset of a low-dimensional Euclidean space. This bold assumption has been surprisingly fruitful, although the extent to which it holds is not fully understood.

It is now known (2, 3) that there exist high-dimensional libraries of articulated images for which the corresponding data manifold is indeed locally isometric to a subset of a Euclidean space; however, it is easy to see that, in general, the assumption that the subset will be convex is unduly restrictive. Convexity can fail in the setting of image libraries due to (*i*) exclusion phenomena (2, 3), where certain regions of the parameter space would correspond to collisions of objects in the image, or (*ii*) unsystematic data sampling, which investigates only a haphazardly chosen region of the parameter space.

In this article we describe a method that works to recover a parametrization for data lying on a manifold that is locally isometric to an open, connected subset Θ of Euclidean space ℝ^{d}. Because this subset need not be convex, whereas the original method proposed in ref. 1 demands convexity, our proposal significantly expands the class of cases that can be solved by isometry principles.

Justification of our method follows from properties of a quadratic form ℋ(*f*) = ∫_{M} ‖*H*_{f}(*m*)‖²_{F} *dm* defined on functions *f* : *M* → ℝ. ℋ(*f*) measures the average, over the data manifold *M*, of the Frobenius norm of the Hessian of *f*. To define the Hessian, we use orthogonal coordinates on the tangent planes of *M*.

The key observation is that, if *M* is locally isometric to an open, connected subset of ℝ^{d}, then ℋ(*f*) has a (*d* + 1)-dimensional null space consisting of the constant functions and a *d*-dimensional space of functions spanned by the original isometric coordinates. Hence, the isometric coordinates can be recovered, up to a rigid motion, from the null space of ℋ(*f*).

We describe an implementation of this procedure on sampled data and demonstrate that it performs consistently with the theoretical predictions on a variant of the “Swiss roll” example, where the data are not sampled from a convex region in parameter space.

## 2. Notation and Motivation

Suppose we have a parameter space Θ ⊂ ℝ^{d} and a smooth mapping ψ : Θ → ℝ^{n}, where the embedding space ℝ^{n} obeys *d* < *n*. We speak of the image *M* = ψ(Θ) as the manifold, although of course from the viewpoint of manifold theory it is actually the very special case of a single coordinate patch.

The vector θ can be thought of as a set of control parameters underlying a measuring device, and the manifold as the enumeration *m* = ψ(θ) of all possible measurements as the parameters vary. Thus the mapping ψ associates parameters to measurements.

In such a setting, we are interested in obtaining data examples *m*_{i}, *i* = 1, …, *N* showing (we assume) the results of measurements with many different choices of control parameters (θ_{i}, *i* = 1, …, *N*). We will speak of *M* as the data manifold, i.e., the manifold on which our data *m*_{i} must lie. In this article we consider only the situation where all data points *m*_{i} lie exactly in the manifold *M*.

There are several concrete situations related to image analysis and acoustics where this abstract model may apply.

- Scene variation: pose variations and facial gesturing;
- Imaging variations: changes in the position of lighting sources and in the spectral composition of lighting color; and
- Acoustic articulations: changes in distance from source to receiver, position of the speaker, or direction of the speaker's mouth.

In all such situations, there is an underlying parameter controlling articulation of the scene; here are two examples.

- Facial expressions: The tonus of several facial muscles control facial expression; conceptually, a parameter vector θ records the contraction of each of those muscles.
- Pose variations: Several joint angles (shoulder, elbow, wrist, etc.) control the combined pose of the elbow–wrist–finger system in combination.

We also speak of *M* as the articulation manifold.

In the above settings and presumably many others, we can make measurements (*m*_{i}), but we are without access to the corresponding articulation parameters (θ_{i}).

It would be interesting to be able to recover the underlying parameters θ_{i} from the observed points *m*_{i} on the articulation manifold. Thus we have the following.

### Parametrization Recovery Problem.

*Given a collection of data points (m_{i}) on an articulation manifold M, recover the mapping ψ and the parameter points θ_{i}.*

As stated, this of course is ill-posed, because if ψ is one solution and φ : ℝ^{d} → ℝ^{d} is any smooth invertible morphing of the Euclidean space ℝ^{d}, the combined mapping ψ ∘ φ is another solution. For this reason, several extra assumptions must be made in order to uniquely determine solutions.

## 3. ISOMAP

In an insightful article, Tenenbaum *et al.* (1) proposed a method that, under certain assumptions, could indeed recover the underlying parametrization of a data manifold. The assumptions were:

(ISO1) Isometry: The mapping ψ preserves geodesic distances. That is, define the distance *G*(*m*, *m*′) between two points *m* and *m*′ on the manifold according to the distance traveled by a bug walking along the manifold *M* according to the shortest path between *m* and *m*′. Then the isometry assumption says that

*G*(*m*, *m*′) = |θ − θ′|,

where | · | denotes Euclidean distance in ℝ^{d}.

(ISO2) Convexity: The parameter space Θ is a convex subset of ℝ^{d}. That is, if θ, θ′ is a pair of points in Θ, then the entire line segment {(1 − *t*)θ + *t*θ′ : *t* ∈ (0, 1)} lies in Θ.

Tenenbaum *et al.* (1) introduced a procedure, ISOMAP, which under these assumptions recovered Θ up to rigid motion. That is, up to a choice of origin and a rotation and possible mirror imaging about that origin, ISOMAP recovered Θ. In their article, they gave an example showing successful recovery of articulation parameters from an image database that showed many views of a wrist rotating and a hand opening at various combinations of rotation/opening.

The stated assumptions lead to two associated questions:

(Q1) Do interesting articulation manifolds have isometric structure?

(Q2) Are interesting parameter spaces truly convex?

We (2, 3) studied these questions in the case of image libraries. Namely, we modeled images *m* as continuous functions *m*(*x*, *y*) defined on the plane (*x*, *y*) ∈ ℝ^{2} and focused attention on images in special articulation families defined by certain mathematical models. As one example, we considered images of a ball on a white background, where the underlying articulation parameter is the position of the ball's center. In this model, let *B*_{θ} denote the ball of radius 1 centered at θ ∈ ℝ^{2}, and define

*m*_{θ}(*x*, *y*) = 1_{*B*_{θ}}(*x*, *y*).

This establishes a correspondence between θ ∈ ℝ^{2} and *m*_{θ} in *L*^{2}(ℝ^{2}). After dealing with technicalities associated with having *L*^{2}(ℝ^{2}) as the ambient space in which *M* is embedded, we derived expressions for the metric structure induced from *L*^{2}(ℝ^{2}) and showed that indeed, if Θ is a convex subset of ℝ^{2}, then isometry holds.

We found that isometry held for a dozen examples of interesting image articulation families including cartoon faces with articulated eyes, lips, and brows. Hence *Q1* admits of positive answers in a number of interesting cases.

On the other hand, in our studies of image articulation families, we (2, 3) noted that *Q2* can easily have a negative answer. A simple example occurs with images showing two balls that articulate by translation, as in the single-ball case mentioned above, but where the ball centers obey exclusion: The two balls never overlap. In this case, the parameter space Θ ⊂ ℝ^{4} becomes nonconvex; writing θ = (θ^{1}, θ^{2}) as a concatenation of the parameters of the two ball centers, we see that it is missing a tube where |θ^{1} − θ^{2}| ≤ 1.
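As a concrete illustration of this nonconvexity, one can rejection-sample the excluded-tube parameter region numerically. The sketch below is illustrative only; the bounding box and the function name `sample_two_ball_params` are our own assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_two_ball_params(n, box=5.0):
    """Rejection-sample theta = (theta1, theta2) in R^4 with the
    exclusion tube |theta1 - theta2| <= 1 removed (illustrative only)."""
    samples = []
    while len(samples) < n:
        theta = rng.uniform(0.0, box, size=4)            # candidate (theta1, theta2)
        if np.linalg.norm(theta[:2] - theta[2:]) > 1.0:  # keep nonoverlapping balls
            samples.append(theta)
    return np.array(samples)

params = sample_two_ball_params(500)
# Every accepted parameter vector respects the exclusion constraint, so the
# sample covers a connected but nonconvex region of R^4.
```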

The case of two balls moving independently and subject to exclusion is merely one in a series of examples where the articulation manifold fails to obey ISO1 and ISO2 but instead obeys something weaker.

(LocISO1) Local isometry: In a small enough neighborhood of each point *m*, geodesic distances to nearby points *m*′ in *M* are identical to Euclidean distances between the corresponding parameter points θ and θ′.

(LocISO2) Connectedness: The parameter space Θ is an open, connected subset of ℝ^{d}.

In such settings, the original assumptions of ISOMAP are violated, and as shown (ref. 2 and unpublished data), the method itself fails to recover the parameter space up to a linear mapping. We (unpublished data) pointed out the possibility of recovering nonconvex Θ by applying ISOMAP to a suitable decomposition of *M* into overlapping geodesically convex pieces. However, a fully automatic procedure based on a general principle would be preferable in solving this problem. In this article we propose such a procedure.

## 4. The Functional

We now set up notation to define the quadratic form ℋ(*f*) referred to in the *Abstract* and *Introduction.*

We suppose that *M* ⊂ ℝ^{n} is a smooth manifold, and thus the tangent space *T*_{m}(*M*) is well defined at each point *m* ∈ *M*. Thinking of the tangent space as a subspace of ℝ^{n}, we can associate to each such tangent space *T*_{m}(*M*) ⊂ ℝ^{n} an orthonormal coordinate system using the inner product inherited from ℝ^{n}. (It will not matter in the least how this choice of coordinate system varies from point to point in *M*.)

Think momentarily of *T*_{m}(*M*) as an affine subspace of ℝ^{n} that is tangent to *M* at *m*, with the origin 0 ∈ *T*_{m}(*M*) identified with *m* ∈ *M*. There is a neighborhood *N*_{m} of *m* such that each point *m*′ ∈ *N*_{m} has a unique closest point *v*′ ∈ *T*_{m}(*M*) and such that the implied mapping *m*′ ↦ *v*′ is smooth. The point *v*′ in *T*_{m}(*M*) has coordinates given by our choice of orthonormal coordinates for *T*_{m}(*M*). In this way, we obtain local coordinates for a neighborhood *N*_{m} of *m* ∈ *M*; call them *x*_{1}^{(tan,m)}, …, *x*_{d}^{(tan,m)}, where we retain tan, *m* in the notation to remind us that they depend on the way in which coordinates were defined on *T*_{m}(*M*).

We now use the local coordinates to define the Hessian of a function *f* : *M* → ℝ that is *C*^{2} near *m*. Suppose that *m*′ ∈ *N*_{m} has local coordinates *x* = *x*^{(tan,m)}. Then the rule *g*(*x*) = *f*(*m*′) defines a function *g* : *U* → ℝ, where *U* is a neighborhood of 0 in ℝ^{d}. Because the mapping *m*′ ↦ *x* is smooth, the function *g* is *C*^{2}. We define the Hessian of *f* at *m* in tangent coordinates as the ordinary Hessian of *g*:

(*H*_{f}^{tan}(*m*))_{i,j} = (∂²/∂*x*_{i}∂*x*_{j}) *g*(*x*) |_{x=0}.

In short, at each point *m*, we use the tangent coordinates and differentiate *f* in that coordinate system. We call this construction the tangent Hessian for short.
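To make the construction concrete, the following numerical sketch (our own illustration, not the paper's code; the surface and sampling are assumptions) recovers tangent coordinates for a sampled neighborhood on the surface *z* = *u*² + *v*², whose tangent plane at the origin is the (*x*, *y*)-plane: the centered neighbors are projected onto the leading principal directions obtained from an SVD.

```python
import numpy as np

rng = np.random.default_rng(1)

# Neighbors of m = (0, 0, 0) on the surface z = u^2 + v^2.
uv = rng.uniform(-0.05, 0.05, size=(30, 2))
nbrs = np.column_stack([uv, (uv ** 2).sum(axis=1)])

centered = nbrs - nbrs.mean(axis=0)
# Rows of Vt are principal directions of the neighborhood; the first
# d = 2 of them span (approximately) the tangent plane T_m(M).
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
tangent_coords = centered @ Vt[:2].T   # x^(tan,m) for each neighbor
normal = Vt[2]                         # leftover direction ~ surface normal e_z
```

Functions on the manifold can then be differentiated twice in `tangent_coords`, which is exactly the tangent Hessian described above.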

We now consider a quadratic form defined on *C*^{2} functions by

ℋ(*f*) = ∫_{M} ‖*H*_{f}^{tan}(*m*)‖²_{F} *dm*,

where *dm* stands for a probability measure on *M* that has strictly positive density everywhere on the interior of *M*. ℋ(*f*) measures the average “curviness” of *f* over the manifold *M*.

### Theorem.

*Suppose M = ψ(Θ), where Θ is an open, connected subset of ℝ^{d}, and ψ is a locally isometric embedding of Θ into ℝ^{n}. Then ℋ(f) has a (d + 1)-dimensional null space consisting of the constant functions and a d-dimensional space of functions spanned by the original isometric coordinates.*

We give the proof in the *Appendix*.

### Corollary.

*Under the same assumptions as the Theorem, the original isometric coordinates θ can be recovered, up to a rigid motion, by identifying a suitable basis for the null space of ℋ(f).*

## 5. Hessian Locally Linear Embedding (HLLE)

We now consider the setting where we have sampled data (*m*_{i}) lying on *M*, and we would like to recover the underlying parametrization ψ and underlying parameter settings θ_{i}, at least up to rigid motion. The *Theorem* and its *Corollary* suggest the following algorithm for attacking this problem. We model our algorithm structure on the original LLE algorithm (4).

##### HLLE algorithm:

Input: (*m*_{i} : *i* = 1, …, *N*), a collection of *N* points in ℝ^{n}.

Parameters: *d*, the dimension of the parameter space; *k*, the size of the neighborhoods for fitting.

Constraints: min(*k*, *n*) > *d*.

Output: (*w*_{i} : *i* = 1, …, *N*), a collection of *N* points in ℝ^{d}, the recovered parametrization.

##### Procedure:

- Identify neighbors: For each data point *m*_{i}, *i* = 1, …, *N*, identify the indices corresponding to the *k* nearest neighbors in Euclidean distance. Let 𝒩_{i} denote the collection of those neighbors. For each neighborhood 𝒩_{i}, *i* = 1, …, *N*, form a *k* × *n* matrix *M*^{i} with rows that consist of the recentered points *m*_{j} − m̄_{i}, *j* ∈ 𝒩_{i}, where m̄_{i} = Ave{*m*_{j} : *j* ∈ 𝒩_{i}}.
- Obtain tangent coordinates: Perform a singular value decomposition of *M*^{i}, getting matrices *U*, *D*, and *V*; *U* is *k* by min(*k*, *n*). The first *d* columns of *U* give the tangent coordinates of points in 𝒩_{i}.
- Develop Hessian estimator: Develop the infrastructure for least-squares estimation of the Hessian. In essence, this is a matrix *H*^{i} with the property that, if *f* : *M* → ℝ is a smooth function, **f** = (*f*(*m*_{j}))_{j=1}^{N} is the vector of its values at the data points, and *v*^{i} is the vector obtained from **f** by extracting the entries corresponding to points in the neighborhood 𝒩_{i}, then the matrix–vector product *H*^{i}*v*^{i} gives a *d*(*d* + 1)/2 vector with entries that approximate the entries of the Hessian matrix (∂²*f*/∂*U*_{i}∂*U*_{j}).
- Develop quadratic form: Build a symmetric matrix ℋ having, in coordinate pair *ij*, the entry ℋ_{ij} = ∑_{l} ∑_{r} ((*H*^{l})_{r,i}(*H*^{l})_{r,j}). Here by *H*^{l} we mean, again, the *d*(*d* + 1)/2 × *k* matrix associated with estimating the Hessian over neighborhood 𝒩_{l}, where rows *r* correspond to specific entries in the Hessian matrix and columns *i* correspond to specific points in the neighborhood.
- Find approximate null space: Perform an eigenanalysis of ℋ, and identify the (*d* + 1)-dimensional subspace corresponding to the *d* + 1 smallest eigenvalues. There will be an eigenvalue 0 associated with the subspace of constant functions; the next *d* eigenvalues will correspond to eigenvectors spanning a *d*-dimensional space Ŵ_{d} in which our embedding coordinates are to be found.
- Find basis for null space: Select a basis for Ŵ_{d} with the property that its restriction to a specific fixed neighborhood 𝒩_{0} (the neighborhood may be chosen arbitrarily from those used in the algorithm) provides an orthonormal basis. The chosen basis has basis vectors *w*^{1}, …, *w*^{d}; these are the embedding coordinates.
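The steps above can be sketched compactly in Python. This is a minimal dense-matrix illustration under our own naming, not the authors' MATLAB implementation; it assumes exact data, brute-force neighbor search, and a dense eigensolver in place of sparse Arnoldi iteration.

```python
import numpy as np
from itertools import combinations_with_replacement

def hlle(X, d, k):
    """Sketch of HLLE. X: (N, n) data; d: target dimension; k: neighborhood
    size (needs k >= 1 + d + d*(d+1)/2 for the quadratic fit)."""
    N = X.shape[0]
    dp = d * (d + 1) // 2
    # Step 1: k nearest neighbors by Euclidean distance (brute force).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    nbrs = np.argsort(D, axis=1)[:, 1:k + 1]
    H = np.zeros((N, N))                      # discrete quadratic form
    for i in range(N):
        idx = nbrs[i]
        # Step 2: tangent coordinates via SVD of the centered neighborhood.
        Mi = X[idx] - X[idx].mean(axis=0)
        U = np.linalg.svd(Mi, full_matrices=False)[0][:, :d]
        # Step 3: Hessian estimator. Columns: ones, tangent coordinates,
        # their squares and cross products; orthonormalize and keep the last
        # d(d+1)/2 columns (these annihilate constant and linear functions).
        cols = [np.ones(k)] + [U[:, a] for a in range(d)]
        cols += [U[:, a] * U[:, b]
                 for a, b in combinations_with_replacement(range(d), 2)]
        Q = np.linalg.qr(np.column_stack(cols))[0]
        Hi = Q[:, -dp:].T
        # Step 4: accumulate H_ij = sum_l sum_r (H^l)_{r,i} (H^l)_{r,j}.
        H[np.ix_(idx, idx)] += Hi.T @ Hi
    # Step 5: eigenvectors for the smallest eigenvalues; skip the
    # (approximately constant) eigenvector belonging to eigenvalue ~0.
    vecs = np.linalg.eigh(H)[1]
    V = vecs[:, 1:d + 1]
    # Step 6: rescale so the restriction to one neighborhood is orthonormal
    # (a Cholesky-based stand-in for W = V R^{-1/2}).
    R = V[nbrs[0]].T @ V[nbrs[0]]
    return V @ np.linalg.inv(np.linalg.cholesky(R).T)
```

On data sampled exactly from a flat patch embedded isometrically in ℝ³, the output reproduces the generating parameters up to an affine map, as the Theorem predicts.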

The algorithm is a straightforward implementation of the idea of estimating tangent coordinates, the tangent Hessian, and the empirical version of the operator ℋ.

##### Remarks:

- Coding requirements: This can be implemented easily in MATLAB, Mathematica, S-PLUS, R, or a similar quantitative programming environment; our MATLAB implementation is available at http://basis.stanford.edu/HLLE.
- Storage requirements: This is a “spectral method” and involves solving the eigenvalue problem for an *N* × *N* matrix. Although it would appear to require *O*(*N*^{2}) storage, which can be prohibitive, the storage required is actually proportional to *n·N*, i.e., the storage of the data points. In fact, this storage can be kept on disk; the remaining storage is basically proportional to *Nk*. Note that the matrix ℋ is a sparse matrix with ≈*O*(*Nk*) nonzero entries.
*kN*, whereas for a full matrix the cost is ≈2*N*^{2}, making the overall cost of the sparse version*O*(*kN*^{2}). - Building the Hessian estimator: Consider first the case
*d*= 2. Form a matrix*X*^{i}consisting of the following columns.In the general case1*d*> 2, create a matrix with 1 +*d*+*d*(*d*+ 1)/2 columns; the first*d*+ 1 of these consist of a vector of ones, and then the first*d*columns of*U*, and the last*d*(*d*+ 1)/2 consist of the various cross products and squares of those*d*columns. Perform the usual Gram–Schmidt orthonormalization process on the matrix*X*^{i}, yielding a matrix^{i}with orthonormal columns; then define*H*^{i}by extracting the last*d*(*d*+ 1)/2 columns and transposing.

- Finding the basis for the null space: Let *V* be the *N* × *d* matrix of nonconstant eigenvectors associated to the (*d* + 1) smallest eigenvalues, and let *V*_{j,r} denote the *j*th entry in the *r*th eigenvector of ℋ. Define the matrix *R* by (*R*)_{rs} = ∑_{j∈𝒩₀} *V*_{j,r}*V*_{j,s}. The desired *N* × *d* matrix of embedding coordinates is obtained from *W* = *V* *R*^{−1/2}. In section 7, we apply this recipe to a canonical isometric example.
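A quick numerical check of the estimator construction for *d* = 2 (with synthetic tangent coordinates standing in for the columns of *U*; all names here are ours): the matrix *H*^{i} obtained from the last *d*(*d* + 1)/2 orthonormalized columns annihilates constant and linear functions, which is exactly why constants and isometric coordinates end up in the null space of ℋ.

```python
import numpy as np

rng = np.random.default_rng(4)
k, d = 12, 2

U = rng.standard_normal((k, d))      # stand-in tangent coordinates U_1, U_2

# X^i = [1, U_1, U_2, U_1^2, U_1*U_2, U_2^2]
Xi = np.column_stack([np.ones(k), U[:, 0], U[:, 1],
                      U[:, 0] ** 2, U[:, 0] * U[:, 1], U[:, 1] ** 2])
Q = np.linalg.qr(Xi)[0]              # Gram-Schmidt orthonormalization
Hi = Q[:, -3:].T                     # H^i: last d(d+1)/2 columns, transposed

# Restriction of an affine function f = 2 + 3*U_1 - U_2 to the neighborhood:
f_affine = 2.0 + 3.0 * U[:, 0] - U[:, 1]
# H^i estimates its Hessian as ~0, while a genuinely curved function
# (e.g. U_1^2) produces a nonzero Hessian estimate.
```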

## 6. Comparison to LLE/Laplacian Eigenmaps

The algorithm we have described bears substantial resemblance to the LLE procedure proposed by Roweis and Saul (4). The theoretical framework we have described also bears substantial resemblance to the Laplacian eigenmap framework of Belkin and Niyogi (5), only with the Hessian replacing the Laplacian. The Laplacian eigenmap setup goes as follows: Define the Laplacian operator in tangent coordinates by Δ^{(tan)}(*f*) = ∑_{i} ∂²*f*/∂*x*_{i}², and define the functional ℒ(*f*) = ∫_{M} (Δ^{(tan)}(*f*))^{2} *dm*. This functional computes the average of the Laplacian operator over the manifold; Laplacian eigenmap methods propose to solve embedding problems by obtaining the *d* + 1 lowest eigenvalues of ℒ and using the corresponding eigenfunctions to embed the data in low-dimensional space. The LLE method is an empirical implementation of the same principle, defining a discrete Laplacian based on a nearest-neighbor graph and embedding scattered *n*-dimensional data by using the first *d* nonconstant eigenvectors of the graph Laplacian.
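For comparison, a minimal graph-Laplacian embedding can be sketched as follows. This is our own illustration under simplifying assumptions: a symmetrized unweighted *k*-NN graph and the unnormalized Laplacian *L* = *D* − *W*; Belkin and Niyogi (5) also discuss heat-kernel weights and normalizations omitted here.

```python
import numpy as np

def laplacian_eigenmap(X, d, k):
    """Embed rows of X into R^d using the d lowest nonconstant
    eigenvectors of an unnormalized k-NN graph Laplacian."""
    N = X.shape[0]
    Dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    nbrs = np.argsort(Dist, axis=1)[:, 1:k + 1]
    W = np.zeros((N, N))
    for i in range(N):
        W[i, nbrs[i]] = 1.0
    W = np.maximum(W, W.T)                   # symmetrize the graph
    L = np.diag(W.sum(axis=1)) - W           # graph Laplacian L = D - W
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:d + 1]                  # skip the constant eigenvector
```

For a connected graph, the eigenvector for eigenvalue 0 is constant, so the returned coordinates are orthogonal to constants, mirroring the null-space structure discussed above.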

## 7. Data Example

In this example we take a random sample (*m*_{i}) on the Swiss roll surface (4) in three dimensions. The resulting surface is like a rolled-up sheet of paper and thus is exactly isometric to Euclidean space (i.e. to a rectangular segment of ℝ^{2}). Successful results of LLE and ISOMAP on such data have been published (1, 4). However, here we consider a change in sampling procedure. Instead of sampling parameters in a full rectangle, we sample from a rectangle with a missing rectangular strip punched out of the center. The resulting Swiss roll is then missing the corresponding strip and thus is not convex (while still remaining connected).
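This sampling variant can be sketched as follows; the parameter ranges and hole geometry below are our own choices for illustration, not those of the published code. The classical Swiss roll maps (*t*, *s*) ↦ (*t* cos *t*, *s*, *t* sin *t*), and we reject a central strip of the parameter rectangle.

```python
import numpy as np

rng = np.random.default_rng(3)

def swiss_roll_with_hole(n):
    """Sample n points on a Swiss roll whose parameter rectangle is
    missing a central strip (nonconvex but connected)."""
    pts = []
    while len(pts) < n:
        t = rng.uniform(1.5 * np.pi, 4.5 * np.pi)   # winding parameter
        s = rng.uniform(0.0, 20.0)                  # height parameter
        if 2.5 * np.pi < t < 3.5 * np.pi and 5.0 < s < 15.0:
            continue                                # punched-out strip
        pts.append([t * np.cos(t), s, t * np.sin(t)])
    return np.array(pts)

data = swiss_roll_with_hole(600)
```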

Using this model and the code provided for ISOMAP and LLE in refs. 1 and 4, respectively, we test the performance of all three algorithms on a random sample of 600 points in three dimensions. The points were generated by using the same code published by Roweis and Saul (4). The results, as seen in Fig. 1, show the dramatic effect that nonconvexity can have on the resulting embeddings. Although the data manifold is still locally isometric to Euclidean space, the effect of the missing sampling region is, in the case of LLE, to make the resulting embedding functions asymmetric and nonlinear with respect to the original parametrization. In the case of ISOMAP, the nonconvexity causes a strong dilation of the missing region, warping the rest of the embedding. Hessian LLE, on the other hand, embeds the result almost perfectly into two-dimensional space.

The computational demands of LLE algorithms are very different from those of the ISOMAP distance-processing step. LLE and HLLE are both capable of handling large-*N* problems, because initial computations are performed only on smaller neighborhoods, whereas ISOMAP has to compute a full matrix of graph distances for the initial distance-processing step. However, both LLE and HLLE are more sensitive to the dimensionality of the data space, *n*, because they must estimate a local tangent space at each point. Although we introduce an orthogonalization step in HLLE that makes the local fits more robust to pathological neighborhoods than LLE, HLLE still requires effectively a numerical second differencing at each point, which can be very noisy at low sampling density.

## 8. Discussion

We have derived an LLE algorithm from a conceptual framework that provably solves the problem of recovering a locally isometric parametrization of a manifold *M* when such a parametrization is possible. The existing ISOMAP method can solve the same problem in the special case where *M* admits a globally isometric parametrization. This special case requires that *M* be geodesically convex or equivalently that Θ be convex.

Note that in dealing with data points (*m*_{i}) sampled from a naturally occurring manifold *M*, we can see no reason that the probability measure underlying the sampling must have geodesically convex support. Hence our local isometry assumption seems much more likely to hold in practice than the more restrictive global isometry assumption in ISOMAP.

HLLE requires the solution of *N* separate *k* × *k* eigenproblems and, similar to Roweis and Saul's original LLE algorithm, a single *N* × *N* sparse eigenproblem. The sparsity of this eigenproblem can confer a substantial advantage over the general nonsparse eigenproblem. This is an important factor distinguishing LLE techniques from the ISOMAP technique, which poses a completely dense *N* × *N* matrix for eigenanalysis. In our experience, if we budget an equal programming effort in the two implementations, the implementation of HLLE can solve much larger-scale data analysis problems (much larger *N*) than ISOMAP. [We are aware that there are modifications of the ISOMAP principle under development that attempt to take advantage of sparse graph structure (i.e. landmark ISOMAP); however, the original ISOMAP principle discussed here is not posable as a sparse eigenproblem.]

A drawback of HLLE versus the other methods just discussed is that the Hessian approach requires estimation of second derivatives, which is known to be numerically noisy or difficult in very high-dimensional data samples.

Our understanding of the phenomena involved in learning parametrizations is certainly not complete, and we still have much to learn about the implementation of spectral methods for particular types of data problems.

## Acknowledgments

We thank Lawrence Saul (University of Pennsylvania, Philadelphia) for extensive discussions of both LLE and ISOMAP and also both George Papanicolaou (Stanford University) for pointers and Jonathan Kaplan (Harvard University, Cambridge, MA) for extensive discussions about properties of ill-posed boundary-value problems. D.L.D. thanks R. R. Coifman (Yale University, New Haven, CT) for pointing out the intrinsic interest of LLE and ISOMAP as well as for helpful discussions and references. C.G. thanks Vin de Silva (Stanford University) for extensive discussions of both LLE and ISOMAP. This work was partially supported by National Science Foundation Grants DMS-0077261, DMS-0140698, and ANI-008584 and a contract from the Defense Advanced Research Projects Agency Applied and Computational Mathematics Program.

## Abbreviations

- LLE
- locally linear embedding
- HLLE
- Hessian LLE

## Proof of Our Theorem

Because the mapping ψ is a locally isometric embedding, the inverse mapping φ = ψ^{−1} : *M* → ℝ^{d} provides a locally isometric coordinate system on *M*. Let θ_{1}, …, θ_{d} denote the isometric coordinates.

Let *m* be a fixed point in the interior of *M*, and let *f* be a *C*^{2} function on *M*. Define the pullback of *f* to Θ by *g*(θ) = *f*(ψ(θ)). Define the Hessian in isometric coordinates of *f* at *m* by

(*H*_{f}^{iso}(*m*))_{i,j} = (∂²/∂θ_{i}∂θ_{j}) *g*(θ) |_{θ=φ(m)}.

This definition postulates knowledge of the underlying isometry φ, so the result is definitely not something we expect to be “learnable” from knowledge of *M* alone. Nevertheless, it provides an important benchmark for comparison. Now define the quadratic form

ℋ^{iso}(*f*) = ∫_{M} ‖*H*_{f}^{iso}(*m*)‖²_{F} *dm*.

This is similar to the quadratic form ℋ except that it is based on the Hessian in isometric coordinates rather than the Hessian in tangent coordinates. We will explore the null space of this quadratic form and relate it to the null space of ℋ.

By its very definition, ℋ^{iso} has a natural pullback from *L*^{2}(*M*, *dm*) to *L*^{2}(Θ, *d*θ). Indeed, letting *g* : Θ → ℝ be a function on Θ ⊂ ℝ^{d} with open interior, letting θ be an interior point, and letting *H*_{g}(θ) denote the ordinary Hessian in Euclidean coordinates at θ, we have actually defined *H*^{iso} by *H*_{f}^{iso}(ψ(θ)) = *H*_{f*}(θ), where *f**(θ) = *f*(ψ(θ)) is the pullback of *f* to Θ ⊂ ℝ^{d}. Hence defining, for functions *g* : Θ → ℝ, the quadratic form

ℋ^{euc}(*g*) = ∫_{Θ} ‖*H*_{g}(θ)‖²_{F} *d*θ

(where *d*θ and *dm* are densities in one-to-one correspondence under the one-to-one correspondence θ ↔ *m*), we have

ℋ^{iso}(*f*) = ℋ^{euc}(*f**).

It follows that the null space of ℋ^{iso} is in one-to-one correspondence with the null space of ℋ^{euc} under the pullback by ψ. Consider then the null space of ℋ^{euc}.

#### Lemma 1.

*Let Θ ⊂ ℝ^{d} be connected with open interior, and let dθ denote a strictly positive density on the interior of Θ. Let W(Θ) denote the usual Sobolev space of functions on Θ that have finite L^{2} norm and whose first two distributional derivatives exist and belong to L^{2}. Then ℋ^{euc}, viewed as a functional on the linear space W(Θ), has a (d + 1)-dimensional null space consisting of the span of the constant function together with the d coordinate functions θ_{1}, …, θ_{d}.*

##### Proof:

It of course is obvious that the null space contains the span of the constant function and all the coordinate functions, because this span is simply all linear functions, and linear functions have everywhere-vanishing Hessians. In the other direction, we show that the null space contains only these functions. Consider any function *g* in *C*^{∞}(Θ) that is not exactly linear. Then there must be some second-order mixed derivative ∂²*g*/∂θ_{i₁}∂θ_{i₂} that is nonvanishing on some ball: ‖∂²*g*/∂θ_{i₁}∂θ_{i₂}‖_{L²(Θ,dθ)} > 0. But

ℋ^{euc}(*g*) ≥ ‖∂²*g*/∂θ_{i₁}∂θ_{i₂}‖²_{L²(Θ,dθ)} > 0.

Hence no smooth nonlinear function can belong to the null space of ℋ^{euc}. The openness of the interior of Θ implies that *C*^{∞}(Θ) is dense in *W*(Θ), and thus we can reach the same conclusion for all *g* in *W*(Θ).

By pullback, *Lemma 1* immediately implies:

#### Corollary.

*Viewed as a functional on W(M), ℋ^{iso} has a (d + 1)-dimensional null space consisting of the span of the constant functions and the d isometric coordinates θ_{i}(m) on M.*

We now show the same for the object of our original interest: ℋ. This follows immediately from the following lemma.

#### Lemma 2.

*Let f be a function in C^{∞}(M), and let ψ be a local isometry between Θ and M. Then at every m ∈ interior(M),*

‖*H*_{f}^{tan}(*m*)‖_{F} = ‖*H*_{f}^{iso}(*m*)‖_{F}.

*Lemma 2* implies that

ℋ(*f*) = ℋ^{iso}(*f*)

and allows us to see that ℋ and ℋ^{iso} have the same null space. Our theorem follows.

##### Proof:

Recall that ‖*H*_{f}^{tan}(*m*)‖_{F} is unchanged by variations in the choice of orthonormal basis for the tangent plane *T*_{m}(*M*). Recall that φ = ψ^{−1} is a local isometry and gives a coordinate system θ_{1}, …, θ_{d} on *M* and therefore induces a choice of coordinate system on *T*_{m}(*M*) that is orthonormal, because φ is a local isometry. Therefore we may assume that our choice of orthonormal basis for *T*_{m}(*M*) is exactly the same as the choice induced by φ. Once this choice has been made, *Lemma 2* follows if we can show that

*H*_{f}^{tan}(*m*) = *H*_{f}^{iso}(*m*),

which will follow if we show that for every vector *v* ∈ *T*_{m}(*M*),

*v*ᵀ*H*_{f}^{tan}(*m*)*v* = *v*ᵀ*H*_{f}^{iso}(*m*)*v*.

Given a vector *v* ∈ *T*_{m}(*M*), let γ_{v} : [0, ε) → *M* denote the unit-speed geodesic in *M* that starts at *m* = γ_{v}(0) and that has *v* for its tangent, (*d*/*dt*)γ_{v}|_{t=0} = *v*. Consider the induced function *g*_{geo,v}(*t*) = *f*(γ_{v}(*t*)). Notice that by definition of isometric coordinates

(*d*²/*dt*²)*g*_{geo,v}(*t*)|_{t=0} = *v*ᵀ*H*_{f}^{iso}(*m*)*v*.

On the other hand, the tangent space *T*_{m}(*M*) provides another local coordinate system for *M*. Let τ_{m} : *T*_{m}(*M*) → *M* denote the inverse that maps from local coordinates back to *M*; this is the inverse of the mapping *m*′ ↦ (*x*^{(tan,m)}(*m*′)). Consider the path *d*_{v} : [0, ε) → *T*_{m}(*M*) defined by *d*_{v}(*t*) = *tv*; this corresponds to a path in *M* defined by δ_{v}(*t*) = τ_{m}(*d*_{v}(*t*)), i.e. projecting the path in *T*_{m}(*M*) onto a neighborhood of *m* in *M*. Consider the induced function *g*_{tan,v}(*t*) = *f*(δ_{v}(*t*)). Notice that by definition of tangent coordinates

(*d*²/*dt*²)*g*_{tan,v}(*t*)|_{t=0} = *v*ᵀ*H*_{f}^{tan}(*m*)*v*.

Hence we have to show that

(*d*²/*dt*²)*g*_{geo,v}(*t*)|_{t=0} = (*d*²/*dt*²)*g*_{tan,v}(*t*)|_{t=0}. [A4]

#### Lemma 3.

*Consider a geodesic γ : [0, ε) → M of a Riemannian submanifold M of ℝ^{n} and view it as a space curve in ℝ^{n}. View T_{m}(M) as a subspace of ℝ^{n}. The acceleration vector (d²/dt²)γ of this space curve at the point m = γ(0) is normal to T_{m}(M).*

*Lemma 3* is a classic textbook fact about isometric embeddings (see chapter 6 of ref. 6). Less well known is that the same fact is true of δ_{v}(*t*).

#### Lemma 4.

*Viewing t ↦ δ_{v}(t) as a space curve in ℝ^{n}, its acceleration vector at t = 0 is normal to T_{m}(M).*

In short, the acceleration components of both δ_{v} and γ_{v} at 0 reflect merely the extrinsic curvature of *M* as a curved submanifold of ℝ^{n}; neither curve has an acceleration component within *T*_{m}(*M*). Because both curves by construction have the same tangent at 0, namely *v*, the key relation (Eq. A4) holds, and *Lemma 2* is proved. *Lemma 4* does not seem to be as well known as *Lemma 3*, so we prove it here.

##### Proof:

Let *c* = *n* − *d* denote the codimension of *M* in ℝ^{n}. We can model *M* in the vicinity of *m* as the solution of a system of equations

η_{i}(*x*) = 0, *i* = 1, …, *c*,

where the η_{i} are smooth functions with linearly independent gradients.

Let *t* ↦ *x*(*t*) be a path in ℝ^{n} that starts in *M* at *t* = 0 and stays in *M*. Without loss of generality choose coordinates such that *x*(0) = 0 ∈ ℝ^{n}. Then

0 = η_{i}(*x*(*t*)) = ∇η_{i}ᵀ*x*(*t*) + ½*x*(*t*)ᵀ*H*_{ηi}*x*(*t*) + *o*(|*x*(*t*)|²),

where ∇η_{i} denotes the gradient of η_{i}, and *H*_{ηi} denotes the Hessian of η_{i}, both evaluated at *x* = 0. Writing *x*(*t*) = *tv* + (*t*²/2)*u* + *o*(*t*²), we conclude that

∇η_{i}ᵀ*v* = 0, *i* = 1, …, *c*,

whereas

∇η_{i}ᵀ*u* + *v*ᵀ*H*_{ηi}*v* = 0, *i* = 1, …, *c*.

In short,

⟨∇η_{i}, *u*⟩ = −*v*ᵀ*H*_{ηi}*v*, *i* = 1, …, *c*.

Because the ∇η_{i} span the normal space to *M* at *m*, it follows that *u* must have the specified coordinates in the normal space.

Now, by definition, δ_{v}(*t*) is the closest point in *M* to the point in the tangent plane represented by *tv*. But then, viewing *v* ∈ *T*_{m}(*M*) as a vector in ℝ^{n}, we must have that *u* ∈ ℝ^{n} minimizes the distance

‖*tv* − *x*(*t*)‖ = (*t*²/2)‖*u*‖ + *o*(*t*²)

over all paths *x*(*t*) in *M* with tangent *v* at 0. In short, *u* solves

min ‖*u*‖ subject to ⟨∇η_{i}, *u*⟩ = −*v*ᵀ*H*_{ηi}*v*, *i* = 1, …, *c*.

This minimum-norm problem has a unique solution, with *u* in the span of the vectors ∇η_{i}, i.e. in the normal space perpendicular to *T*_{m}(*M*). Hence the space curve *x*(*t*) has no component of acceleration in *T*_{m}(*M*).

*Proceedings of the National Academy of Sciences of the United States of America*, 2003 May 13; 100(10):5591.