# Feynman’s clock, a new variational principle, and parallel-in-time quantum dynamics

Edited by Frank A. Wilczek, Center for Theoretical Physics, Cambridge, MA, and approved August 20, 2013 (received for review April 29, 2013)

## Significance

Methodology for studying quantum ground-state problems is currently more mature than what is available for quantum dynamical problems. Moreover, modern computing architectures demand that new algorithms be able to make use of parallel processing. We introduce a construction inspired by quantum computation that allows one to use virtually any model for a ground-state wavefunction to model quantum many-body dynamics and formulate it in a way that naturally leads to a parallel-in-time algorithm. We show how the elementary ground-state variational principle maps onto a new time-dependent variational principle and demonstrate practical examples of its use.

## Abstract

We introduce a discrete-time variational principle inspired by the quantum clock originally proposed by Feynman and use it to write down quantum evolution as a ground-state eigenvalue problem. The construction allows one to apply ground-state quantum many-body theory to quantum dynamics, extending the reach of many highly developed tools from this fertile research area. Moreover, this formalism naturally leads to an algorithm to parallelize quantum simulation over time. We draw an explicit connection between previously known time-dependent variational principles and the time-embedded variational principle presented. Sample calculations are presented, applying the idea to a hydrogen molecule and the spin degrees of freedom of a model inorganic compound, demonstrating the parallel speedup of our method as well as its flexibility in applying ground-state methodologies. Finally, we take advantage of the unique perspective of this variational principle to examine the error of basis approximations in quantum dynamics.

Feynman proposed a revolutionary solution to the problem of quantum simulation: Use quantum computers to simulate quantum systems. Although this strategy is powerful and elegant, universal quantum computers may not be available for some time, and in fact, accurate quantum simulations may be required for their eventual construction. In this work, we use the clock Hamiltonian originally introduced by Feynman (1, 2) for the purposes of quantum computation, to rewrite the quantum dynamics problem as a ground-state eigenvalue problem. We then generalize this approach and obtain a variational principle for the dynamics of a quantum system and show how it allows for the natural formulation of a parallel-in-time quantum dynamics algorithm. Variational principles play a central role in the development and study of quantum dynamics (3–10), and the variational principle presented here extends the arsenal of available tools by allowing one to directly apply efficient approximations from the ground-state quantum many-body problem to study dynamics.

Following trends in modern computing hardware, simulations of quantum dynamics on classical hardware must be able to make effective use of parallel processing. We show in this paper that the perspective of the variational principle leads naturally to a time-parallelizable formulation of quantum dynamics. Previous approaches for recasting quantum dynamics as a time-independent problem include Floquet theory for periodic potentials (11–13) and more generally the formalism of Peskin and Moiseyev (14). However, the approach proposed in this paper differs considerably from these previous approaches. We derive our result from a different variational principle, and in our embedding the dynamics of the problem are encoded directly in its solution, as opposed to requiring the construction of another propagator. Considerable work has now been done in the migration of knowledge from classical computing to quantum computing and quantum information (15–19). In this paper, we propose the use of an idea from quantum computation for the simulation of quantum dynamics.

This paper is organized as follows. We first review the Feynman clock: a mapping stemming from the field of quantum computation that can be used for converting problems in quantum evolution into ground-state problems in a larger Hilbert space. We then generalize the Feynman clock into a time-embedded discrete variational principle (TEDVP) that offers additional insight into quantum time dynamics in a way that is complementary to existing differential variational principles (20–22). We then apply the configuration interaction method (23, 24) from quantum chemistry to solve for approximate dynamics of a model spin system, demonstrating that the accuracy of our proposed approach converges with the level of truncation. We demonstrate how this construction naturally leads to an algorithm that takes advantage of parallel processing in time and show that it performs favorably against existing algorithms for this problem. Finally we discuss metrics inspired by our approach that can be used to quantitatively understand the errors resulting from truncating the Hilbert space of many-body quantum dynamics.

## Physical Dynamics as a Sequence of Quantum Gates

Consider a quantum system described by a time-dependent wavefunction $|\psi(t)\rangle$. The dynamics of this system are determined by a Hermitian Hamiltonian $H(t)$ according to the time-dependent Schrödinger equation in atomic units ($\hbar = 1$),

$$i \frac{\partial}{\partial t}\,|\psi(t)\rangle = H(t)\,|\psi(t)\rangle.$$

A formal solution to the above equation can be generally written

$$|\psi(t)\rangle = U(t; t_0)\,|\psi(t_0)\rangle, \qquad U(t; t_0) = \mathcal{T} \exp\left[-i \int_{t_0}^{t} H(t')\,dt'\right],$$

where $\mathcal{T}$ is the well-known time-ordering operator and $U(t; t_0)$ is a unitary operator that evolves the system from a time *t*_{0} to a time *t*. These operators also obey a group composition property, such that if $t_0 \leq t_1 \leq t_2$, then

$$U(t_2; t_0) = U(t_2; t_1)\,U(t_1; t_0).$$

From the unitarity of these operators, it is of course also true that $U(t_0; t) = U^{\dagger}(t; t_0)$, where † indicates the adjoint. Thus far, we have treated time as a continuous variable. However, when one considers numerical calculations on a wavefunction, it is typically necessary to discretize time.
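As a quick numerical illustration of these propagator identities, the composition and unitarity properties can be checked directly. The two-level Hamiltonian below is an arbitrary illustrative choice, not a system from the paper:

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary two-level Hamiltonian (illustrative only).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * sz + 0.3 * sx  # time independent, so U(t; t0) = exp(-i H (t - t0))

def U(t, t0):
    """Propagator from t0 to t in atomic units (hbar = 1)."""
    return expm(-1j * H * (t - t0))

# Group composition: U(t2; t0) = U(t2; t1) U(t1; t0)
assert np.allclose(U(2.0, 0.0), U(2.0, 0.7) @ U(0.7, 0.0))
# Unitarity: U(t0; t) = U(t; t0)^dagger
assert np.allclose(U(0.0, 2.0), U(2.0, 0.0).conj().T)
```

For a time-dependent Hamiltonian the same identities hold, but the propagator must be built from the time-ordered exponential rather than a single matrix exponential.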

We discretize time by keeping an ancillary quantum system, which can occupy states with integer indexes ranging from $0$ to $T$, where *T* is the number of discrete time steps under consideration. This quantum system has orthonormal states $|t\rangle$ such that

$$\langle t | t' \rangle = \delta_{t t'}.$$

This definition allows one to encode the entire evolution of a physical system by entangling the physical wavefunction with this auxiliary quantum system representing time, known as the ‘‘time register’’. We define this compound state to be the history state, given by

$$|\Psi\rangle = \frac{1}{\sqrt{T+1}} \sum_{t=0}^{T} |\psi_t\rangle \otimes |t\rangle,$$

where subscripts are used to emphasize when we are considering a time-independent state of a system at time *t*. That is, we define $|\psi_t\rangle \equiv |\psi(t)\rangle$. From these definitions, it is immediately clear from above that the wavefunction at any time *t* can be recovered by projection with the time register, such that

$$|\psi_t\rangle = \sqrt{T+1}\;\langle t | \Psi\rangle.$$

Additionally, we discretize the action of our unitary operators, such that $U_t\,|\psi_t\rangle = |\psi_{t+1}\rangle$, and we embed this operator into the composite system–time Hilbert space as $\mathcal{U}_t = U_t \otimes |t+1\rangle\langle t|$. Although the utility of this discretization has not yet been made apparent, we now use this discretization to transform the quantum dynamics problem into a ground-state eigenvalue problem.
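The history-state construction can be sketched numerically for a single spin; the Hamiltonian and step size below are arbitrary choices for illustration:

```python
import numpy as np
from scipy.linalg import expm

# Build the history state for one spin over T discrete time steps.
T = 4
H = np.array([[1.0, 0.3], [0.3, -1.0]], dtype=complex)  # illustrative
dt = 0.1
U_step = expm(-1j * H * dt)  # the same U_t for every step here

# Generate |psi_t> for t = 0..T by repeated propagation.
psi = [np.array([1.0, 0.0], dtype=complex)]
for _ in range(T):
    psi.append(U_step @ psi[-1])

# History state: |Psi> = (T+1)^(-1/2) sum_t |psi_t> (x) |t>
Psi = sum(np.kron(p, np.eye(T + 1)[t]) for t, p in enumerate(psi))
Psi /= np.sqrt(T + 1)

# Recover |psi_2> by projecting onto time-register state |2> and renormalizing.
proj = Psi.reshape(2, T + 1)[:, 2]
assert np.allclose(proj / np.linalg.norm(proj), psi[2])
```

The tensor ordering here is system ⊗ time, matching the embedding used in the text.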

## Feynman’s Clock

In the gate model (15, 25) of quantum computation, one begins with an initial quantum state $|\psi_0\rangle$ and applies a sequence of unitary operators $U_0, U_1, \dots, U_{T-1}$, known as quantum gates. By making a measurement on the final state $|\psi_T\rangle$, one determines the result of the computation or equivalently the result of applying the sequence of unitary operators to the initial state. The map from the sequence of unitary operators in the gate model, to a Hamiltonian that has the clock state as its lowest eigenvector, is given by a construction called the clock Hamiltonian (2). Since its inception, much work has been done on the specific form of the clock, making it amenable to implementation on quantum computers (26). However, for the purposes of our discussion that pertains to implementation on a classical computer, the following simple construction suffices,

$$\mathcal{H} = C_0 + \frac{1}{2} \sum_{t=0}^{T-1} \left( I \otimes |t\rangle\langle t| - U_t \otimes |t+1\rangle\langle t| - U_t^{\dagger} \otimes |t\rangle\langle t+1| + I \otimes |t+1\rangle\langle t+1| \right),$$

where *C*_{0} is a penalty term that can be used to specify the state of the physical system at any time. Typically, we use this to enforce the initial state, such that if the known state at time $t = 0$ is given by $|\psi_0\rangle$, then

$$C_0 = \left(I - |\psi_0\rangle\langle\psi_0|\right) \otimes |0\rangle\langle 0|.$$

One may verify by action of ℋ on the history state defined above that $\mathcal{H}\,|\Psi\rangle = 0$; that is, the history state is an eigenvector of this operator, with eigenvalue 0.
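A minimal sketch of the clock construction, assuming a single spin with an arbitrary fixed propagator, confirms numerically that the history state is annihilated by ℋ:

```python
import numpy as np
from scipy.linalg import expm

# Clock Hamiltonian for one spin and T time steps (illustrative sketch).
T, d = 4, 2
H_sys = np.array([[1.0, 0.3], [0.3, -1.0]], dtype=complex)  # arbitrary
U = expm(-1j * H_sys * 0.1)  # one fixed time-step propagator
I_d = np.eye(d)

def tt(a, b):
    """|a><b| on the (T+1)-dimensional time register."""
    m = np.zeros((T + 1, T + 1))
    m[a, b] = 1.0
    return m

# Initial-state penalty: C0 = (I - |psi0><psi0|) (x) |0><0|
psi0 = np.array([1.0, 0.0], dtype=complex)
C0 = np.kron(I_d - np.outer(psi0, psi0.conj()), tt(0, 0))

Hc = C0.astype(complex)
for t in range(T):
    Hc += 0.5 * (np.kron(I_d, tt(t, t)) + np.kron(I_d, tt(t + 1, t + 1))
                 - np.kron(U, tt(t + 1, t))
                 - np.kron(U.conj().T, tt(t, t + 1)))

# Build the history state and verify H |Psi> = 0.
psi = [psi0]
for _ in range(T):
    psi.append(U @ psi[-1])
Psi = sum(np.kron(p, np.eye(T + 1)[t]) for t, p in enumerate(psi)) / np.sqrt(T + 1)
assert np.allclose(Hc @ Psi, 0.0)
```

Each summand pairs a state with its propagated neighbor, so every cross term cancels exactly on the history state, and the penalty term vanishes on the correct initial state.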

## A Discrete-Time Variational Principle

We now introduce the TEDVP and show how the above eigenvalue problem emerges as the special case of choosing a linear variational space. Suppose that one knows the exact form of the evolution operators and the wavefunctions at times *t* and $t+1$. By the properties of unitary evolution it is clear that the following holds:

$$\langle \psi_{t+1} |\, U_t \,| \psi_t \rangle = 1.$$

From this, we can see that if the exact construction of $U_t$ is known for all *t*, but the wavefunctions $|\tilde{\psi}_t\rangle$ are only approximately known (but still normalized), it is true that

$$\operatorname{Re} \sum_{t=0}^{T-1} \langle \tilde{\psi}_{t+1} |\, U_t \,| \tilde{\psi}_t \rangle \leq T,$$

where equality holds in the case that the wavefunction becomes exact at each discrete time point. Reintroducing the ancillary time register, we may equivalently say that all valid time evolutions embedded into the composite system–time space as

$$|\Phi\rangle = \frac{1}{\sqrt{T+1}} \sum_{t=0}^{T} |\tilde{\psi}_t\rangle \otimes |t\rangle$$

minimize the quantity $\langle \Phi |\, \mathcal{H} \,| \Phi \rangle$, where ℋ (script font for operators denotes they act in the composite system–time space) is the operator given by

$$\mathcal{H} = \frac{1}{2} \sum_{t=0}^{T-1} \left( I \otimes |t\rangle\langle t| - U_t \otimes |t+1\rangle\langle t| - U_t^{\dagger} \otimes |t\rangle\langle t+1| + I \otimes |t+1\rangle\langle t+1| \right).$$

It is then clear from the usual ground-state variational principle that the expectation value of this operator is minimized only for exact evolutions of the physical system. This leads us immediately to a time-dependent variational principle for the discrete representation of a wavefunction, given by

$$\min_{\{|\tilde{\psi}_t\rangle\}} \langle \Phi |\, \mathcal{H} \,| \Phi \rangle.$$

It is interesting to note that this is a true variational principle in the sense that an exact quantum evolution is found at a minimum, rather than at a stationary point as in some variational principles (22). This is related to the connection between this variational principle and the McLachlan variational principle that is explored in the next section.
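The inequality underlying the TEDVP follows from the Cauchy–Schwarz inequality at each time step; a small numerical check with an arbitrary two-level Hamiltonian is:

```python
import numpy as np
from scipy.linalg import expm

# Check: for normalized trial states, Re <psi'| U |psi> <= 1, with
# equality when |psi'> is the exactly propagated state U|psi>.
rng = np.random.default_rng(0)
H = np.array([[1.0, 0.3], [0.3, -1.0]], dtype=complex)  # illustrative
U = expm(-1j * H * 0.1)

psi_t = np.array([1.0, 0.0], dtype=complex)
exact_next = U @ psi_t
assert np.isclose(np.real(np.vdot(exact_next, U @ psi_t)), 1.0)

# Any other normalized trial state gives an overlap no larger than 1.
for _ in range(20):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    assert np.real(np.vdot(v, U @ psi_t)) <= 1.0 + 1e-12
```

Summing this single-step bound over all *T* steps gives the global bound of *T* quoted above.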

To complete the connection to the clock mapping given above, we first note that this operator is Hermitian by construction and choose a linear variational space that spans the entire physical domain. To constrain the solution to have unit norm, we introduce the Lagrange multiplier *λ* and minimize the auxiliary functional given by

$$\mathcal{L}\left[|\Phi\rangle\right] = \langle \Phi |\, \mathcal{H} \,| \Phi \rangle - \lambda \left( \langle \Phi | \Phi \rangle - 1 \right).$$

It is clear that this problem is equivalent to the exact eigenvalue problem on ℋ with eigenvalue *λ*. The optimal trajectory is given by the ground-state eigenvector of the operator ℋ. From this construction, we see that the clock mapping originally proposed by Feynman is easily recovered as the optimal variation of the TEDVP in a complete linear basis, under the constraint of unit norm. Note that the inclusion of *C*_{0} as a penalty term to break the degeneracy of the ground state is a convenient construction only for the eigenvalue problem. In the general TEDVP, one need not include this penalty explicitly, as degenerate allowed paths are excluded, as in other time-dependent variational principles, by fixing the initial state.

We note, as in the case of the time-independent variational principle and differential formulations of the time-dependent variational principle, the most compact solutions may be derived from variational spaces that have nonlinear parameterizations. Key examples of this in chemistry include Hartree–Fock, coupled cluster theory (27–29), and multiconfigurational time-dependent Hartree (30). It is here that the generality of this unique variational principle allows one to explore solutions to the dynamics of the path without the restriction of writing the problem as an eigenvalue problem, as in the original clock construction of Feynman.

## Connection to Previous Time-Dependent Variational Principles

In the limit of an exact solution, it must be true that all valid time-dependent variational principles are satisfied. For that reason, it is important to draw the formal connection between our variational principle and previously known variational principles.

Consider only two adjacent times *t* and $t+1$ and the operator ℋ defined above. Now suppose that the separation of physical times between discrete steps *t* and $t+1$, denoted *dt*, is small and the physical system has an associated Hamiltonian given by *H*, such that

$$U_t = e^{-iH\,dt} \approx I - iH\,dt - \frac{1}{2} H^2\,dt^2.$$

By inserting this propagator into the variational functional above, rewriting the result in terms of the discrete states, and dropping terms of order *dt*^{3}, we obtain a quadratic form in finite differences of the wavefunction. After defining the difference operator Δ such that $\Delta |\psi_t\rangle = \left(|\psi_{t+1}\rangle - |\psi_t\rangle\right)/dt$, we can factorize this expression into the squared norm $\left\| \left(i\Delta - H\right) |\psi_t\rangle \right\|^2$, up to an overall factor of *dt*^{2}. In the limit that $dt \to 0$, these difference operators become derivatives. Defining $\Theta = i\,\partial_t |\psi\rangle$ and allowing only variations of Θ, this is precisely the McLachlan variational principle (22):

$$\delta \left\| \Theta - H\,|\psi\rangle \right\| = 0.$$

We then conclude that in the limit of infinitesimal physical time for a single time step, the TEDVP is equivalent to the McLachlan variational principle under these assumptions. Under the reasonable conditions that the variational spaces of the wavefunction and its time derivative are the same and that the parameters are complex analytic, it is also equivalent to the Dirac–Frenkel and Lagrange variational principles (31). Moreover, in *SI Text* we provide a connection that allows other variational principles to be written as eigenvalue problems and further discuss the merits of the integrated formalism used here.

To conclude this section, we highlight one additional difference between practical uses of the TEDVP and other variational principles: The TEDVP is independent of the method used to construct the operator *U*_{t}. In quantum information applications, this implies it is not required to know a set of generating Hamiltonians for quantum gates. Additionally, in numerical applications, one is not restricted by the choice of approximate propagator used. In cases where an analytic propagator is known for the chosen basis, it can be sampled explicitly. Suppose that the dimension of the physical system is given by *N* and the number of time steps of interest is given by *T*. Assuming that the time register is ordered, the resulting eigenvalue problem is block tridiagonal with dimension *NT* (Fig. 1). This structure has been described at least once before in the context of ground-state quantum computation (32).

## Many-Body Application of the TEDVP

There has been a recent rise in interest in methods for simulating quantum spin dynamics in chemistry (33, 34). To study the properties of the clock mapping when used to formulate approximate dynamics, we chose a simple model spin system inside an inorganic molecule (35). Specifically, we examine the spin dynamics of the vanadium compound depicted in Fig. 2. By choosing the three unpaired electron spins to interact with one another by means of isotropic exchange as well as a uniform static external magnetic field *B*_{0} and a time-dependent transverse field *B*_{1}, this system can be modeled with a spin Hamiltonian of the form

$$H(t) = J \left( \vec{S}_1 \cdot \vec{S}_2 + \vec{S}_2 \cdot \vec{S}_3 \right) + J' \, \vec{S}_1 \cdot \vec{S}_3 + g \mu_B \sum_{i=1}^{3} \left[ B_0\, S_i^z + B_1(t)\, S_i^x \right],$$

where $S_i^{\alpha}$ is the *α* Pauli operator on spin *i*, $\vec{S}_i = (S_i^x, S_i^y, S_i^z)$, $\mu_B$ is the Bohr magneton, and *g* is the measured spectroscopic splitting factor. The couplings *J* and *J*′ as well as *g* are fitted through experimental determinations of magnetic susceptibility (35). The fact that the couplings are not equal is reflective of the isosceles geometry of the vanadium centers; the fitted parameter values are given in ref. 35. We allow *B*_{0} to vary to study the properties of the clock mapping in the solution of approximate quantum dynamics.
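The structure of such a three-spin exchange Hamiltonian can be sketched as follows; the coupling values and field strength here are illustrative placeholders, not the fitted experimental parameters:

```python
import numpy as np

# Three spins with isotropic exchange and a static Zeeman term; the
# isosceles geometry is mimicked by making one coupling differ.
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def op(single, site):
    """Embed a single-spin operator at the given site of a 3-spin system."""
    mats = [I2, I2, I2]
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

J12, J23, J13 = 1.0, 1.0, 0.7  # hypothetical isosceles couplings
B0 = 0.2                       # static field along z (g mu_B absorbed)

H = np.zeros((8, 8), dtype=complex)
for (i, j), J in {(0, 1): J12, (1, 2): J23, (0, 2): J13}.items():
    for s in (sx, sy, sz):
        H += J * op(s, i) @ op(s, j)
H += B0 * sum(op(sz, i) for i in range(3))

assert np.allclose(H, H.conj().T)  # Hermitian, as required
```

A time-dependent transverse term would be added analogously with `op(sx, i)` and a pulse envelope.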

The quantum chemistry community has decades of experience in developing and using methods for obtaining approximate solutions of high-dimensional, ground-state eigenvector problems. By using the connection we have made from dynamics to ground-state problems, we now borrow and apply the most conceptually simple approximation from quantum chemistry, configuration interaction in the space of trajectories (36), to our model problem to elucidate the properties of the clock mapping.

For the uncorrelated reference, we take the entire path of a mean-field spin evolution that is governed by the time-dependent Hartree equations and write it as

$$|\bar{\downarrow}_i(t)\rangle = R_i(t)\,|\downarrow_i\rangle, \qquad |\Phi_0(t)\rangle = \bigotimes_{i=1}^{f} |\bar{\downarrow}_i(t)\rangle,$$

where $|\downarrow_i\rangle$ is the reference spin-down state for spin *i*, $|\bar{\downarrow}_i(t)\rangle$ is the reference spin-down state after rotation at time *t*, $|\Phi_0(t)\rangle$ is the whole-product system state at time *t*, and the rotation $R_i(t)$ is determined from the mean-field Hamiltonian. The equations of motion that determine $R_i(t)$ are derived in a manner analogous to that of the time-dependent Hartree equations, and if one writes the wavefunction in the physical space as

$$|\psi(t)\rangle = \bigotimes_{i=1}^{f} |\phi_i(t)\rangle,$$

then the equations of motion are given by

$$i\,\frac{\partial}{\partial t}\,|\phi_i(t)\rangle = \left[ H_i(t) - \frac{f-1}{f}\,E(t) \right] |\phi_i(t)\rangle,$$

where $H_i(t)$ is the mean-field Hamiltonian for spin *i* formed by contracting the Hamiltonian over all other spins $j \neq i$, $E(t)$ is the expectation value of the Hamiltonian at time *t*, and *f* is the number of spins in the system.

To generate configurations, we also introduce the transformed spin excitation operator $\bar{\sigma}_i^{+}(t)$, defined by

$$\bar{\sigma}_i^{+}(t) = R_i(t)\,\sigma_i^{+}\,R_i^{\dagger}(t),$$

where $R_i(t)$ is the single-spin rotation that generates the mean-field reference and $|\Phi_0(t)\rangle$ denotes the mean-field product reference state at time *t*. In analogy to traditional configuration interaction, we define different levels of excitation (e.g., singles, doubles, …) as the number of spin excitations at each time *t* that will be included in the basis in which the problem is diagonalized. For example, the basis for the configuration interaction singles (CIS) problem is defined as

$$\mathcal{B}_{\mathrm{CIS}} = \left\{ \bar{\sigma}_i^{+}(t)\,|\Phi_0(t)\rangle \otimes |t\rangle \;:\; i = 0, 1, \dots, f;\; t = 0, 1, \dots, T \right\}.$$

Note that $\bar{\sigma}_i^{+}$ for $i = 0$ is simply defined to be the identity operator on all spins so that the reference configuration is naturally included. Similarly, the basis for single and double excitations (CISD) is given by

$$\mathcal{B}_{\mathrm{CISD}} = \mathcal{B}_{\mathrm{CIS}} \cup \left\{ \bar{\sigma}_i^{+}(t)\,\bar{\sigma}_j^{+}(t)\,|\Phi_0(t)\rangle \otimes |t\rangle \;:\; 1 \leq i < j \leq f;\; t = 0, 1, \dots, T \right\}.$$

Higher levels of excitation follow naturally, and it is clear that when one reaches a level of excitation equivalent to the number of spins, this method may be regarded as full configuration interaction or exact diagonalization in the space of discretized paths.

The choice of a time-dependent reference allows the reference configuration to be nearly exact when the two-body exchange couplings are small, independent of the nature of the time-dependent transverse field. This allows for the separation of the consequences of time dependence from the effects of two- vs. one-body spin interactions.

After a choice of orthonormal basis, the dynamics of the physical system are given by the ground-state eigenvector of the projected eigenvalue problem

$$\mathcal{P}\,\mathcal{H}\,\mathcal{P}\;|\Phi\rangle = \lambda\,|\Phi\rangle,$$

where we explicitly define the projection operator onto the basis $\{|b_i\rangle\}$ as $\mathcal{P} = \sum_i |b_i\rangle\langle b_i|$ so that the projected operator is given by $\mathcal{H}_{\mathcal{P}} = \mathcal{P}\,\mathcal{H}\,\mathcal{P}$.

Using these constructions, we calculate the quantum dynamics of the sample system described above. For convenience, we rescale the Hamiltonian by the value of the exchange coupling. To add arbitrary nontrivial time dependence to the system and mimic the interaction of the system with a transverse pulse, we take a simple oscillatory form for *B*_{1}(*t*). The magnitude of *B*_{0} was taken to be 200 T to model perturbative two-body interactions in this Hamiltonian. To propagate the equations of motion and generate the propagators for the clock operator, we use the enforced time-reversal symmetry exponential propagator (37) given by

$$U(t + dt;\, t) \approx \exp\!\left(-i\,\frac{dt}{2}\,H(t + dt)\right) \exp\!\left(-i\,\frac{dt}{2}\,H(t)\right).$$

The dynamics of some physical observables are displayed (Fig. 3) for the reference configuration, single excitations, double excitations, and the full configuration interaction. The physical observables have been calculated with normalization at each time step. It is seen (Fig. 4) that as in the case of ground-state electronic structure the physical observables become more accurate both qualitatively and quantitatively as the configuration space expands, converging to the exact solution with full configuration interaction. Moreover, in Fig. 5 we plot the contributions from the reference configuration, singles space, doubles space, and triples space and observe rapidly diminishing contributions. This suggests that the time-dependent path reference used here provides a good qualitative description of the system. As a result, perturbative and other dynamically correlated methods from quantum chemistry may also be amenable to the solution of this problem.

In principle, approximate dynamics derived from this variational principle are not norm conserving, as is seen in Fig. 5; however, this actually offers an important insight into a major source of error in quantum dynamics simulations of many-body systems, which is the truncation of the basis set as described in the last section. Simulations based on conventional variational principles that propagate within an incomplete configuration space easily preserve norm; however, the trajectories of probability that should have left the simulated space are necessarily in error.

## Parallel-in-Time Quantum Dynamics

Algorithms that divide a problem up in the time domain, as opposed to the spatial domain, are known as parallel-in-time algorithms. Compared with their spatial counterparts, such as traditional domain decomposition (38), these algorithms have received relatively little attention. This is perhaps due to the counterintuitive notion of solving for future times in parallel with the present. However, as modern computational architectures continue to evolve toward many-core setups, exploiting all avenues of parallel speedup available will be an issue of increasing importance. The most popular parallel-in-time algorithm in common use today is likely the parareal algorithm (39, 40). The essential ingredients of parareal are the use of a coarse propagator that performs an approximate time evolution in serial and a fine propagator that refines the answer and may be applied in parallel. The two propagations are combined with a predictor–corrector scheme. It has been shown to be successful with parabolic-type equations (41), such as the heat equation, but it has found limited success with wave-like equations (42), like the time-dependent Schrödinger equation. We now show how our variational principle can be used to naturally formulate a parallel-in-time algorithm and demonstrate its improved domain of convergence with respect to the parareal algorithm for Schrödinger evolution of hydrogen.

Starting from the TEDVP, minimization under the constraint that the initial state is fixed yields a Hermitian positive-definite, block-tridiagonal linear equation of the form

$$\mathcal{C}\,|\Phi\rangle = |\Phi_0\rangle,$$

where $|\Phi\rangle$ (to be solved for) contains the full evolution of the system and $|\Phi_0\rangle$ specifies the initial condition such that

$$|\Phi_0\rangle = |\psi_0\rangle \otimes |0\rangle.$$

The linear clock operator, $\mathcal{C}$, is similar to the clock operator introduced before; we now distinguish between a clock built from a coarse propagator, $\mathcal{C}_c$, and a clock built from a fine propagator, $\mathcal{C}_f$.

The spectrum of this operator is positive definite and admits *T* distinct eigenvalues, each of which is *N*-fold degenerate. The conjugate gradient algorithm can be used to solve for $|\Phi\rangle$, converging at worst in a number of steps equal to the number of distinct eigenvalues (43, 44). Thus, application of the conjugate gradient algorithm to this problem is able to converge while requiring at most *T* applications of the linear clock operator $\mathcal{C}$. This approach on its own yields no parallel speedup. However, the use of a well-chosen preconditioner can greatly accelerate the convergence of the conjugate gradient algorithm (45).

If one uses an approximate propagation performed in serial, which is much cheaper to perform than the exact propagation, as a preconditioning step to the conjugate gradient solve, the algorithm can converge in far fewer steps than *T* and a parallel-in-time speedup can be achieved. The problem being solved in this case for the clock construction is given by

$$\mathcal{C}_c^{-1}\,\mathcal{C}_f\,|\Phi\rangle = \mathcal{C}_c^{-1}\,|\Phi_0\rangle.$$
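A toy version of this preconditioned solve can be sketched as follows. Here the "clock" is assembled as the normal equations of a block-bidiagonal constraint matrix (one plausible realization of the Hermitian positive-definite, block-tridiagonal structure described above), with a cheap first-order propagator playing the role of the coarse preconditioner; all sizes and parameters are illustrative:

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import LinearOperator, cg

d, T = 2, 16
H = np.array([[1.0, 0.4], [0.4, -1.0]], dtype=complex)  # arbitrary system
dt = 0.05
U_fine = expm(-1j * H * dt)         # accurate ("fine") step
U_coarse = np.eye(d) - 1j * H * dt  # cheap first-order ("coarse") step

def constraint_matrix(U):
    """A encodes psi_{t+1} - U psi_t = 0; C = A^dag A is block tridiagonal."""
    A = np.eye(d * (T + 1), dtype=complex)
    for t in range(T):
        A[d*(t+1):d*(t+2), d*t:d*(t+1)] = -U
    return A

A_f, A_c = constraint_matrix(U_fine), constraint_matrix(U_coarse)
C = A_f.conj().T @ A_f  # Hermitian positive definite

psi0 = np.array([1.0, 0.0], dtype=complex)
b = np.zeros(d * (T + 1), dtype=complex)
b[:d] = psi0            # fixed initial condition
rhs = A_f.conj().T @ b

# Preconditioner: solve the cheap coarse clock instead of the fine one.
C_c_inv = np.linalg.inv(A_c.conj().T @ A_c)
M = LinearOperator(C.shape, matvec=lambda v: C_c_inv @ v, dtype=complex)

x, info = cg(C, rhs, M=M, atol=1e-12)
assert info == 0

# The solution reproduces the fine serial propagation.
ref = [psi0]
for _ in range(T):
    ref.append(U_fine @ ref[-1])
assert np.allclose(x.reshape(T + 1, d), np.array(ref), atol=1e-2)
```

In the actual algorithm the fine clock is never stored: each CG iteration applies the fine propagators matrix-free and in parallel over time blocks, while only the coarse solve runs in serial.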

To clarify and compare with existing methods, we now introduce an example from chemistry. The nuclear quantum dynamics of the hydrogen molecule in its ground electronic state can be modeled by the Hamiltonian

$$H = \frac{p^2}{2\mu} + V(r),$$

where *μ* is the reduced mass, *r* is the internuclear separation, and the parameters of the potential *V*(*r*) are given in atomic units in ref. 46. The initial state of our system is a Gaussian wavepacket with a width corresponding to the ground state of the harmonic approximation to this potential, displaced from the equilibrium position. To avoid the storage associated with the propagator of this system and mimic the performance of our algorithm on a larger system, we use the symmetric matrix-free split-operator Fourier transform (SOFT) method to construct block propagators (47). We note that this splitting is commonly referred to as the Suzuki–Trotter formula (48, 49) in the physics literature. This method is unconditionally stable, is accurate to second order in the time step *dt*, and may be written as

$$U(dt) \approx e^{-iV\,dt/2}\; F^{-1}\, e^{-i\,\frac{p^2}{2\mu}\,dt}\, F\; e^{-iV\,dt/2}.$$

Here, *F* and $F^{-1}$ correspond to the application of the fast Fourier transform (FFT) and its inverse over the wavefunction grid. The use of the FFT allows each of the exponential operators to be applied trivially in its eigenbasis, and as a result the application of the propagator has a cost dominated by the FFT that scales as $O(N \log N)$, where *N* is the number of grid points being used to represent the wavefunction. For our algorithm, we define a fine propagator, $U_f$, and a coarse propagator, $U_c$, from the SOFT construction, such that for a given number of subtime steps *k*,

$$U_f(dt) = \left[ U_{\mathrm{SOFT}}(dt/k) \right]^{k}, \qquad U_c(dt) = U_{\mathrm{SOFT}}(dt).$$
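A single symmetric SOFT step can be sketched in a few lines; the grid and the Morse-like potential parameters below are hypothetical stand-ins, not the fitted H2 values:

```python
import numpy as np

# One symmetric split-operator step, exp(-iV dt/2) exp(-iT dt) exp(-iV dt/2),
# applied on a 1D grid. All parameters are illustrative placeholders.
N = 256
r = np.linspace(0.5, 5.0, N)
dr = r[1] - r[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dr)  # momentum grid (fft ordering)
mu = 918.0                               # ~ reduced mass of H2 (a.u.)
D, a, r_e = 0.17, 1.0, 1.4               # hypothetical Morse-like parameters
V = D * (1 - np.exp(-a * (r - r_e)))**2
dt = 1.0

expV = np.exp(-0.5j * V * dt)                 # half-step in position space
expT = np.exp(-1j * (k**2 / (2 * mu)) * dt)   # full kinetic step in k-space

def soft_step(psi):
    """One second-order, unconditionally stable SOFT step."""
    return expV * np.fft.ifft(expT * np.fft.fft(expV * psi))

# Displaced Gaussian wavepacket; the norm is conserved to machine precision.
psi = np.exp(-5.0 * (r - 1.8)**2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dr)
for _ in range(100):
    psi = soft_step(psi)
assert abs(np.sum(np.abs(psi)**2) * dr - 1.0) < 1e-10
```

Because each factor is a pure phase in its own eigenbasis, the step is exactly unitary; the only errors are the second-order splitting error and the periodic boundary implied by the FFT.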

We take for our problem the clock constructed from the fine propagator and use the solution of the problem built from the coarse propagator as our preconditioner. In all cases, only the matrix-free version of the propagator is used, including in the explicit solution of the coarse propagation.

We emphasize here the importance of the matrix-free aspect of the propagator. Consider, for example, an alternative scheme of parallelization where the propagators are computed simultaneously by *T* processors and stored for application to the vector. Although this scheme has apparently high parallel efficiency, the explicit calculation of a propagator with equivalent accuracy to that of the SOFT method can scale as $O(N^2 \log N)$ in the size of the system and requires the storage of a dense matrix of size $N \times N$. This makes it impractical for many problems of interest and is the reason we emphasize a scalable, matrix-free approach here.

From the construction of the coarse and fine propagators, with *T* processors and up to communication time, applying the fine clock in parallel and solving the coarse clock in serial require roughly the same amount of computational time. This is a good approximation in the usual case where the application of the propagators is far more expensive than the communication required. From this, assuming the use of *T* processors, we define an estimated parallel-in-time speedup for the computational procedure, given by

$$S_{\mathrm{clock}} = \frac{T}{2\,(N_f + N_c)},$$

where *N*_{f} is the number of applications of the fine-propagator matrix performed in parallel and *N*_{c} is the number of serial linear solves using the coarse-propagator matrix in the preconditioned conjugate gradient. The factor of 2 originates from the requirement of backward evolution not present in a standard propagation. The cost of communication overhead as well as the inner products in the CG algorithm is neglected for this approximate study, assuming they are negligible with respect to the cost of applying the propagator.

The equivalent parallel speedup for the parareal algorithm is given by

$$S_{\mathrm{parareal}} = \frac{T}{N_f + N_c},$$

where *N*_{f} and *N*_{c} are now defined for the corresponding parareal operators, which are functionally identical to the clock operators without backward time evolution; the expression thus lacks the factor of 2.

As stated above, in the solution of the linear clock without preconditioning, it is possible to converge the problem in at most *T* steps, independent of both the choice of physical time step and the size of the physical system *N*, by using a conjugate gradient method. However, with the addition of the preconditioner, the choice of time step and total time simulated can affect the conditioning of the preconditioned system. This is because as the coarse (approximate) propagation deviates more severely from the exact solution, the preconditioning of the problem can become quite poor.

This problem manifests more severely for the parareal algorithm, as the predictor–corrector scheme may start to diverge for a very poorly preconditioned system. This has been seen before in the study of hyperbolic equations (42) and remains true in this case for the evolution of the Schrödinger equation. The construction derived from the clock is more robust and is able to achieve parallel-in-time speedup for significantly longer times. This marks an improvement over the current standard for parallel-in-time evolution of the Schrödinger equation.

To give a concrete example, consider the case where we divide the evolution of the nuclear dynamics of hydrogen into pieces containing *T* evolution blocks, each of which is constructed from the fine evolutions described above. The dynamics over which we simulate are depicted in Fig. 6. We average the estimated parallel speedup over 10 time blocks (where a time block is the whole time in one construction of the clock), and the results for the speedup are given in Fig. 7. In this example we see that for small *T* (or small total evolution times), the reduced overhead of having no backward evolution yields an advantage for the parareal algorithm. However, as *T* increases, the parareal algorithm is less robust to errors in the coarse propagation and its performance begins to degrade rapidly. In these cases, our clock construction demonstrates a clear advantage. A topic of current research is how to facilitate even longer evolutions in the clock construction.

## Norm Loss as a Measure of Truncation Error

Conservation of the norm of a wavefunction is often considered a critical property for approximate quantum dynamics, as it is a property of the exact propagator resulting from time-reversal symmetry. However, if norm conservation is enforced in a propagation that does not span the complete Hilbert space, one must account for the components of the wavefunction that would have evolved into the space not included in the computation. It is not immediately clear how population density that attempts to leave the selected subspace should be folded back into the system without being able to simulate the exact dynamics. This problem is sometimes glossed over with the use of exponential propagators that are guaranteed to produce norm-conserving dynamics on a projected Hamiltonian. Some more sophisticated approaches adjust the configuration space in an attempt to mitigate the error (50).

This discrepancy is at the center of the difference between the approximate dynamics derived from the discrete variational principle here and the approximate dynamics derived from the McLachlan variational principle, such as the multiconfigurational time-dependent Hartree method. Mathematically, this results from the noncommutativity of the exponential map and the projection operator defined above. That is, in general

$$e^{-i \mathcal{P} H \mathcal{P}\, dt} \neq \mathcal{P}\, e^{-i H\, dt}\, \mathcal{P}$$

for a Hermitian operator *H*. In an approximate method derived from the McLachlan or any of the other differential time-dependent variational principles, the projection is performed on the Hamiltonian. As the projection of any Hermitian operator yields another Hermitian operator, the dynamics generated from the projection are guaranteed to be unitary if a sufficiently accurate exponential propagator is used. In contrast, projection of a unitary operator, as prescribed by the TEDVP, does not always yield a unitary operator. Thus, for an approximate basis, one expects norm conservation to be violated, where the degree of violation is related to the severity of the configuration-space truncation error. This leads us to define a metric of truncation error that we term the instantaneous norm loss,

$$\eta_1(t) = 1 - \left\| \mathcal{P}\, U_t\, |\tilde{\psi}_t\rangle \right\|,$$

where $|\tilde{\psi}_t\rangle$ is always assumed to be normalized, which in practice means that a renormalization is used after each time step here. However, as we proved above, in the limit of a short time step, with dynamics generated by a Hamiltonian, the TEDVP must contain essentially the same content as the McLachlan variational principle. For this reason, we propose an additional metric that is given by

$$\eta_2(t) = \left\| \left(1 - \mathcal{P}\right) H\, |\tilde{\psi}_t\rangle \right\| dt,$$

where *H* is the physical Hamiltonian. This is motivated by appealing to the McLachlan variational principle and substituting from the exact Schrödinger equation that $\Theta = H |\psi\rangle$, where *H* is the full (nonprojected) Hamiltonian. By defining $\Theta = i\,\partial_t |\psi\rangle$, we recognize this as a perturbation-theory estimate of the error resulting from the configuration-basis truncation at a given point in time.
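Both metrics can be sketched for a generic truncated propagation; the model, projector, and step size below are arbitrary illustrations:

```python
import numpy as np
from scipy.linalg import expm

# Truncation-error metrics for a random 4-level system propagated in a
# fixed 2-state subspace (all choices here are illustrative).
rng = np.random.default_rng(2)
d = 4
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2  # Hermitian Hamiltonian
dt = 0.05
U = expm(-1j * H * dt)

P = np.zeros((d, d))
P[0, 0] = P[1, 1] = 1.0   # projector onto a 2-state basis
psi = np.zeros(d, dtype=complex)
psi[0] = 1.0

for step in range(5):
    # Instantaneous norm loss: probability leaking out of the basis per step.
    eta_norm = 1.0 - np.linalg.norm(P @ U @ psi)
    # Perturbative estimate: component of H|psi> outside the basis, times dt.
    eta_pert = np.linalg.norm((np.eye(d) - P) @ H @ psi) * dt
    assert eta_norm >= -1e-12  # projection can never increase the norm
    psi = P @ U @ psi
    psi /= np.linalg.norm(psi)  # renormalize after each step, as in the text
```

In the short-time limit the two quantities track each other, which is the content of the TEDVP–McLachlan correspondence discussed above.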

To examine the quality of these metrics and to better understand the consequences of the noncommutativity of the exponential map and projection, we return to the sample spin system. In this case, we choose a basis for the propagation space based entirely on the initial state and do not allow it to change dynamically in time as before. We perform simulations in the space of single excitations (S) from the initial state, of single and double excitations (SD), and in the full Hilbert space (Ex). Dynamics from the TEDVP are generated by first building the exact propagator and then projecting it into the desired basis set, whereas dynamics from the McLachlan variational principle (MVP) are modeled by projecting the Hamiltonian into the target basis set and then exponentiating. In the former case, a renormalization is performed after each time step. Although one could perform the simulation with a time step small enough that time-step error is negligible, we remove this component of the calculation entirely for this example by making the Hamiltonian time independent. This allows a direct analysis of the effect of the time step on deviations arising from noncommutativity.
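The two propagation schemes compared above can be sketched side by side. The two-spin transverse-field Ising model and the singles-only truncation below are illustrative stand-ins chosen for brevity, not the vanadium complex or basis sets used in the text.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Two-spin transverse-field Ising Hamiltonian (illustrative stand-in).
H = np.kron(sz, sz) + 0.8 * (np.kron(sx, I2) + np.kron(I2, sx))
dt, nsteps = 0.05, 200

# Fixed truncated basis built from the initial state |00>: singles only;
# the double excitation |11> (index 3) is projected out.
P = np.diag([1.0, 1.0, 1.0, 0.0]).astype(complex)

U_tedvp = P @ expm(-1j * H * dt) @ P    # project the exact propagator
U_mvp = expm(-1j * (P @ H @ P) * dt)    # exponentiate the projected Hamiltonian

psi_t = np.array([1, 0, 0, 0], dtype=complex)  # TEDVP state
psi_m = psi_t.copy()                           # MVP state
for _ in range(nsteps):
    psi_t = U_tedvp @ psi_t
    eta = 1.0 - np.linalg.norm(psi_t)   # instantaneous norm loss this step
    psi_t /= np.linalg.norm(psi_t)      # renormalize after each TEDVP step
    psi_m = U_mvp @ psi_m               # MVP step conserves the norm exactly
print(eta, np.linalg.norm(psi_m))
```

By construction the MVP state keeps unit norm at every step, while the TEDVP state loses a small amount of norm per step, which is exactly the signal the metrics above track.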

In Fig. 8 we show the dynamics of the vanadium spin complex for the two approximate truncation levels (S and SD) with both methods and their associated error metrics ($\eta_N$ and $\eta_H$). Deviations in the qualitative features of the observable occur after even the first peak of the proposed metrics. In this particular simulation, the configuration space with single and double excitations spans all but one state of the full Hilbert space; the lack of this one state is responsible for the large qualitative errors associated with the first and subsequent peaks in these error metrics. The impact of later peaks is more difficult to discern, because error in the wavefunction accumulates as the propagation proceeds.

As predicted by the connection between the TEDVP and the McLachlan variational principle, although $\eta_N$ and $\eta_H$ are not identical in each case, in the short-time limit they yield extremely similar information, as highlighted in Fig. 8, which displays the resulting longer-time dynamics for a small time step. In Fig. 9, however, we explore the effects of a significantly larger time step and begin to discern the consequences of the noncommutativity discussed here. Recall that, because the Hamiltonian is time independent in this case, the propagator used is numerically exact in both instances, so this effect is not a result of what would traditionally be called time-step error, which arises from the intrinsic errors of an integrator. Interestingly, the simulated dynamics are observed to decay to a nearly constant value. This occurs because projection after exponentiation breaks the degeneracy of the spectrum of the unitary operator, resulting in eigenvalues with norms different from 1. As a result, repeated application and renormalization of the projected operator are analogous to a power method for finding the eigenvector associated with the largest-magnitude eigenvalue. This effect is exacerbated by taking long time steps.
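The power-method analogy can be checked directly: with a long time step, the projected propagator has eigenvalues of magnitude below 1, and repeated application with renormalization drives the state toward the dominant eigenvector, freezing the dynamics. The random Hamiltonian, dimensions, and time step below are illustrative assumptions, not the system of Fig. 9.

```python
import numpy as np
from scipy.linalg import expm

# Random Hermitian Hamiltonian and a long time step (illustrative values).
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H = (A + A.conj().T) / 2
dt_big = 2.0

# Project the exact propagator onto the first four basis states, keeping
# only the block that acts within that subspace.
M = expm(-1j * H * dt_big)[:4, :4]

# The block is not unitary: its eigenvalue magnitudes deviate from 1.
mags = np.sort(np.abs(np.linalg.eigvals(M)))[::-1]
assert mags[0] > mags[1]        # a dominant eigenvalue exists (generic case)

# Repeated action plus renormalization is a power iteration ...
psi = np.ones(4, dtype=complex) / 2.0
for _ in range(5000):
    psi = M @ psi
    psi /= np.linalg.norm(psi)

# ... so the state aligns with the dominant eigenvector of the block.
w, v = np.linalg.eig(M)
dominant = v[:, np.argmax(np.abs(w))]
overlap = abs(np.vdot(dominant / np.linalg.norm(dominant), psi))
print(overlap)
```

As the state converges onto a single (quasi-)eigenvector, observables computed from it approach constants, reproducing the decay to a nearly constant value described above.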

## Conclusions

In this paper, we introduce a variational principle for time-dependent quantum dynamics, inspired by the Feynman clock originally proposed for quantum computation. Unlike previously proposed variational principles, the TEDVP involves the solution of an eigenvalue problem for the entire time propagation. This perspective allows one to readily apply many of the powerful truncation techniques developed for the exact diagonalization problem in quantum chemistry and condensed-matter physics. We show how this formulation naturally leads to a parallel-in-time algorithm and demonstrate its improved robustness with respect to existing methods. We introduce two error metrics for the TEDVP that allow one to characterize the basis approximations involved. The features of the method were demonstrated by simulating the dynamics of a hydrogen molecule and of a molecular effective spin Hamiltonian. Further research directions include the use of other approximate techniques for the time dynamics, such as perturbation theory (51) or coupled-cluster approaches (52), and further enhancement of parallel-in-time dynamics.

## Acknowledgments

We thank David Tempel for his valuable comments on the manuscript. J.R.M. is supported by the Department of Energy Computational Science Graduate Fellowship under Grant DE-FG02-97ER25308. A.A.-G. and J.A.P. thank the National Science Foundation for support under Award CHE-1152291. A.A.-G. also acknowledges support from the Hughes Research Laboratories under Award M1144-201167-DS and from the Air Force Office of Scientific Research under Grant FA9550-12-1-0046. Research was sponsored by the US Department of Defense. We also thank the Corning Foundation, the Camille and Henry Dreyfus Foundation, and the Alfred P. Sloan Foundation for their continuing support of this research. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the US Government.

## Footnotes

- ^{1}To whom correspondence should be addressed. E-mail: aspuru@chemistry.harvard.edu.

Author contributions: J.R.M., J.A.P., and A.A.-G. designed research, performed research, contributed new reagents/analytic tools, analyzed data, and wrote the paper.

The authors declare no conflict of interest.

This article is a PNAS Direct Submission.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1308069110/-/DCSupplemental.

## References

- Feynman RP
- Poulsen JA
- Haegeman J, et al.
- Dion DR, Hirschfelder JO (2007) *Time-Dependent Perturbation of a Two-State Quantum System by a Sinusoidal Field* (Wiley, Hoboken, NJ), pp 265–350.
- Nielsen M, Chuang I (2000) *Quantum Computation and Quantum Information*, Cambridge Series on Information and the Natural Sciences (Cambridge Univ Press, Cambridge, UK).
- Aspuru-Guzik A, Dutoi AD, Love PJ, Head-Gordon M
- Kassal I, Jordan SP, Love PJ, Mohseni M, Aspuru-Guzik A
- Kassal I, Aspuru-Guzik A
- Frenkel J
- Farhi E, Goldstone J, Gutmann S, Sipser M (2000) Quantum computation by adiabatic evolution. arXiv:quant-ph/0001106.
- Kitaev A, Shen A, Vyalyi M (2002) *Classical and Quantum Computation*, Graduate Studies in Mathematics (Am Math Soc, Providence, RI).
- Mizel A
- Luban M, et al. (2002) Heisenberg spin triangles in {V6}-type magnetic molecules: Experiment and theory. *Phys Rev B* 66(5):054407-1–054407-12.
- Habershon S
- Smith B, Bjorstad P, Gropp W (2004) *Domain Decomposition* (Cambridge Univ Press, Cambridge, UK).
- Lions J, Maday Y, Turinici G
- Baffico L, Bernard S, Maday Y, Turinici G, Zérah G
- Gander MJ (2008) Analysis of the parareal algorithm applied to hyperbolic problems using characteristics. *Bol Soc Esp Mat Apl* 42(42):5–19.
- Hestenes MR, Stiefel E (1952) Methods of conjugate gradients for solving linear systems. *J Res Natl Bur Stand* 49(6):409–436.
- Westermann T, Manthe U (2012) Decoherence induced by conical intersections: Complexity constrained quantum dynamics of photoexcited pyrazine. *J Chem Phys* 137(22):22A509-1–22A509-11.
- Helgaker T, Jorgensen P, Olsen J
- Kvaal S