# Polynomial-time quantum algorithm for the simulation of chemical dynamics

Edited by Todd J. Martinez, University of Illinois at Urbana–Champaign, Urbana, IL, and accepted by the Editorial Board October 9, 2008

## Abstract

The computational cost of exact methods for quantum simulation using classical computers grows exponentially with system size. As a consequence, these techniques can be applied only to small systems. By contrast, we demonstrate that quantum computers could exactly simulate chemical reactions in polynomial time. Our algorithm uses the split-operator approach and explicitly simulates all electron-nuclear and interelectronic interactions in quadratic time. Surprisingly, this treatment is not only more accurate than the Born–Oppenheimer approximation but faster and more efficient as well, for all reactions with more than about four atoms. This is the case even though the entire electronic wave function is propagated on a grid with appropriately short time steps. Although the preparation and measurement of arbitrary states on a quantum computer is inefficient, here we demonstrate how to prepare states of chemical interest efficiently. We also show how to efficiently obtain chemically relevant observables, such as state-to-state transition probabilities and thermal reaction rates. Quantum computers using these techniques could outperform current classical computers with 100 qubits.


Accurate simulations of quantum-mechanical processes have greatly expanded our understanding of the fundamentals of chemical reaction dynamics. In particular, recent years have seen tremendous progress in methods development, which has enabled simulations of increasingly complex quantum systems. Although it is, strictly speaking, true that exact quantum simulation requires resources that scale exponentially with system size, several techniques are available that can treat realistic chemical problems, at a given accuracy, with only a polynomial cost. Certain fully quantum methods—such as multiconfigurational time-dependent Hartree (MCTDH) (1), matching pursuit/split-operator Fourier transform (MP/SOFT) (2), or full multiple spawning (FMS) (3)—solve the nuclear Schrödinger equation, including nonadiabatic effects, given analytic expressions for the potential energy surfaces and the couplings between them. These techniques have permitted the simulation of large systems; examples include MCTDH simulations of a pentaatomic chemical reaction (4) and of a spin-boson model with 80 degrees of freedom (5), as well as an MP/SOFT simulation of photoisomerization in rhodopsin with 25 degrees of freedom (6). Ab initio molecular-dynamics techniques such as ab initio multiple spawning (AIMS) (7) avoid analytic expressions for potential energy surfaces and instead solve the electronic Schrödinger equation at every time step. This allows one to gain insight into dynamical problems such as isomerizations through conical intersections (8).

However, there are also chemical processes that are best treated by avoiding the Born–Oppenheimer approximation altogether. Examples include strong-field electronic dynamics in atoms and multielectron ionization (9, 10), as well as atomic and molecular fragmentation caused by collisions with energetic electrons or photons (11, 12). Systems that resist the application of the Born–Oppenheimer approximation require very general techniques, and the consequent unfavorable scaling has restricted such simulations to systems with a few particles. Here, however, we show that the Born–Oppenheimer approximation would not necessarily simplify simulations performed on quantum computers. Indeed, except for the smallest systems, an explicit treatment of all of the particles would be both more accurate and more efficient, even for nearly adiabatic chemical reactions.

Feynman's idea of using a quantum machine to mimic the quantum Hamiltonian of a system of interest was one of the founding ideas of the field of quantum computation (13). Lloyd (14) subsequently showed that quantum computers could be used to simulate systems that can be formulated in terms of local interactions, by using resources that scale only polynomially with system size. Zalka and Wiesner (15, 16) developed a quantum simulation algorithm for particles in real space, and Lidar and Wang (17) applied it to the calculation of the thermal rate constant of chemical reactions. Smirnov *et al.* proposed an analog quantum simulator for chemical reactions using quantum dots (42). We have previously shown (18) that quantum computers could be used to simulate the static properties of molecules, and in this work, we present a general scheme for using quantum computers for the study of dynamical chemical properties.

To simulate a quantum system, we must prepare its initial quantum state, propagate it in time, and finally extract data of chemical relevance, such as rate constants. For an efficient quantum simulation, all these tasks must be carried out by using resources that increase only polynomially with system size. We present a quantum algorithm that meets these requirements. We also show that for all chemical reactions with more than ≈4 atoms, it is more efficient for a quantum computer to simulate the complete nuclear and electronic time evolution rather than to use the Born–Oppenheimer approximation.

The polynomial scaling of these methods means they would enable the study of systems that are, in principle, out of reach for any classical computer. However, large quantum computers are far in the future, and so determining the requirements of interesting calculations in absolute terms is, perhaps, of more interest than their scaling alone. We show that a quantum computer using these techniques could outperform current classical computers using 100 qubits, within the design limits of a proposed 300-qubit quantum computer (19). Although we focus on chemical applications, these techniques are generally applicable to many physical systems, from strong-field, multielectron ionization to quantum liquids and condensed-matter systems.

This article is organized as follows. We first review Zalka and Wiesner's algorithm and show how the difficulty of computing a wave function's time evolution depends only on the complexity of evaluating the interaction potential. We then consider 3 approaches to the calculation of the interaction potential, including a fully nonadiabatic treatment of chemical reactions. We consider the problem of state preparation for all of the schemes and, finally, address the problem of measurement. We present 3 readout schemes for reaction dynamics—reaction probabilities, thermal rate constants, and state-to-state probabilities—that would allow for efficient evaluation of many parameters accessible to experiment.

## Quantum Dynamics

The problem of simulating quantum dynamics is that of determining the properties of the wave function ∣ψ(*t*)〉 of a system at time *t*, given the initial wave function ∣ψ(0)〉 and the Hamiltonian Ĥ of the system. If the final state can be prepared by propagating the initial state, any observable of interest may be computed.

We employ an improved version of the real-space quantum simulation technique developed by Zalka and Wiesner (15, 16), in which a discrete variable representation of the wave function is used. In the 1-dimensional case, the domain of the wave function is divided into a discrete position basis of *N* = 2^{n} equidistant points. The wave function is represented as

$$|\psi\rangle = \sum_{x=0}^{2^n-1} a_x |x\rangle,$$

where each grid point *x* is encoded in binary in an *n*-qubit register. The spatial wave function is stored in the Hilbert space of the qubits, and so the spatial resolution grows exponentially with the number of qubits. For a system with *d* dimensions, *d* registers of *n* qubits each are used, ∣**x**〉 = ∣*x*_{1}〉…∣*x*_{d}〉, representing a grid of 2^{dn} points. The states of multiple particles can be stored by adding position registers for each particle. Therefore, only a polynomially large number of qubits is required to store the system wave function.

For simplicity, we assume a time-independent Hamiltonian whose potential depends only on position, Ĥ = *T̂* + *V̂*, where *T̂* = *p̂*^{2}/2*m* and *V̂* = *V*(**x̂**) are the kinetic and potential energy operators, respectively. The split-operator method (15, 20, 21) computes the time evolution by separating the kinetic *T̂* and potential *V̂* energy contributions to the propagator *Û*(*t*) = *e*^{−iĤt}. Given a sufficiently small time step δ*t*, we can write to first order

$$e^{-i\hat{H}\delta t} \approx e^{-i\hat{V}\delta t}\, e^{-i\hat{T}\delta t}.$$

The operators *e*^{−iV̂δt} and *e*^{−iT̂δt} are diagonal in the position and momentum representations, respectively. A quantum computer can efficiently transform between the 2 representations using the quantum Fourier transform (QFT) (22):

$$|\psi(t+\delta t)\rangle \approx \mathrm{QFT}^{-1}\, e^{-i\hat{T}\delta t}\, \mathrm{QFT}\, e^{-i\hat{V}\delta t}\, |\psi(t)\rangle.$$

The procedure is iterated as many times as necessary to obtain the system wave function ∣ψ(*t*)〉 after an arbitrary time *t* to a desired accuracy.

The application of diagonal unitaries is straightforward on a quantum computer. Suppose that we have a gate sequence that acts on an arbitrary position eigenstate as ∣**x**〉 → *e*^{−iV(x)δt}∣**x**〉. Because ∣ψ〉 is a superposition of position eigenstates, when this gate sequence is applied to ∣ψ〉, one obtains *e*^{−iV̂δt}∣ψ〉 in a single application.

We depart from Zalka and Wiesner's method in the implementation of this gate sequence. We are free to take the lowest value of the potential in the domain as 0, and to use units in which the maximum value of the potential is *V*_{max} = 2^{m} − 1, with *m* an integer. With this choice of units, *V* takes integer values, and we choose *m* large enough that *V* is resolved with sufficient precision. The integer *m* is therefore the number of qubits required to represent the desired range of potential values with the desired precision. The gate sequence 𝒱, which computes the potential *V*, acts so that 𝒱∣**x**, *y*〉 = ∣**x**, *y* ⊕ *V*(**x**)〉, where *y* is an *m*-bit integer labeling a basis state of the ancilla register, and ⊕ denotes addition modulo 2^{m}.

We apply the diagonal unitary by phase kickback. The computer is initialized in the state ∣ψ〉 ⊗ ∣1〉_{m}, where ∣1〉_{m} in the ancilla register represents the state ∣0…001〉 in *m* qubits. Applying the inverse QFT to the ancilla register, followed by 𝒱, produces

$$\mathcal{V}\left(|\psi\rangle \otimes \mathrm{QFT}^{-1}|1\rangle_m\right) = \sum_{\mathbf{x}} e^{-2\pi i V(\mathbf{x})/M}\, a_{\mathbf{x}} |\mathbf{x}\rangle \otimes \mathrm{QFT}^{-1}|1\rangle_m = \left(e^{-i\hat{V}\delta t}|\psi\rangle\right) \otimes \mathrm{QFT}^{-1}|1\rangle_m,$$

where *M* = 2^{m} and we choose δ*t* = $\frac{2\pi}{M}$. The equality obtains because the ancilla state is an eigenstate of addition (with eigenvalue *e*^{−2πiq/M} corresponding to the addition of *q*) (23). We see that applying 𝒱 results in the requisite diagonal unitary action on the wave-function register. The states of the register and the ancilla are separable before and after each potential evaluation. We can also define a quantum gate sequence 𝒯 that computes the kinetic energy *p*^{2}/2*m*: 𝒯∣**p**, *y*〉 = ∣**p**, *y* ⊕ *T*(**p**)〉. This gate is diagonal in the momentum basis and has efficiently computable entries on the diagonal (namely, *p*^{2}). Thus, we use the quantum Fourier transform to conjugate into the momentum basis, and 𝒯 is implemented by phase kickback in exactly the same way as 𝒱. The quantum circuit for this algorithm is shown in Fig. 1.

Fig. 1.
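The quantum algorithm applies this propagation to all grid points in superposition, but the split-operator step itself can be illustrated classically. Below is a minimal numpy sketch of one Trotter step, with the FFT standing in for the QFT; the grid size, harmonic potential, mass, and time step are illustrative choices, not values from the paper.

```python
import numpy as np

# Classical sketch of the split-operator step e^{-iH dt} ~ e^{-iV dt} e^{-iT dt},
# with numpy's FFT standing in for the QFT. All parameters are illustrative.
n = 10                                   # qubits per register -> 2^n grid points
N = 2 ** n
L = 20.0                                 # box size (arbitrary units)
x = (np.arange(N) - N // 2) * (L / N)    # position grid
p = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # conjugate momentum grid
dt, mass = 0.01, 1.0
V = 0.5 * x ** 2                         # illustrative harmonic potential
T = p ** 2 / (2 * mass)                  # kinetic energy, diagonal in momentum

psi = np.exp(-0.5 * (x - 1.0) ** 2).astype(complex)   # displaced Gaussian packet
psi /= np.linalg.norm(psi)

def split_operator_step(psi):
    """Apply e^{-iV dt} in position space, then e^{-iT dt} in momentum space."""
    psi = np.exp(-1j * V * dt) * psi
    phi = np.fft.fft(psi)                # position -> momentum representation
    phi = np.exp(-1j * T * dt) * phi
    return np.fft.ifft(phi)              # momentum -> position representation

for _ in range(1000):
    psi = split_operator_step(psi)

norm = np.linalg.norm(psi)               # unitary evolution preserves the norm
```

Because every factor applied is a pure phase, the norm of the state is preserved exactly, mirroring the unitarity of the quantum circuit.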

This simulation algorithm is numerically exact in the sense that all introduced approximations are controlled, so that the error in the calculation can be arbitrarily reduced at an additional polynomial cost. The only approximations used are the discretizations of time, space, and the potential *V*(**x**). The error due to spatial discretization can be made exponentially small by adding more qubits. The error due to time discretization can be systematically improved by use of higher-order Trotter schemes (24). The computational cost of the algorithm per iteration is the evaluation of *V*(**x**), *T*(**p**), and 2 QFTs. Although the QFTs and the quadratic form in the kinetic energy (*p*^{2} in the simplest case) can be computed in polynomial time (22, 25), the evaluation of the potential energy *V*(**x**) may not be efficient in general. For example, a random potential stored in an exponentially large database requires an exponentially large number of queries to evaluate. However, any classical algorithm running in *O*(*f*(*n*)) time can be adapted to a reversible quantum algorithm also running in *O*(*f*(*n*)) time (26). Therefore, the potential energy *V*(**x**) will be efficiently calculable on a quantum computer if it is efficiently calculable on a classical computer. Fortunately, this is true for all chemically relevant cases.

## Chemical Dynamics

Every isolated system of chemical interest has the same Hamiltonian, which in atomic units is

$$\hat{H} = \sum_i \frac{\hat{p}_i^2}{2M_i} + \sum_{i<j} \frac{q_i q_j}{r_{ij}},$$

where the sums are over the nuclei and electrons, *p*_{i} is the momentum of the *i*th particle, *M*_{i} is its mass, *q*_{i} is its charge, and *r*_{ij} is the distance between particles *i* and *j*. Both the potential and kinetic terms can be efficiently evaluated because the arithmetical operations can be performed in *O*(*m*^{2}) time (25), and for a system of *B* particles there are only *O*(*B*^{2}) terms in the sum.

The fact that the Coulomb potential can be evaluated in *O*(*B*^{2}*m*^{2}) time implies that chemical dynamics could be simulated on a quantum computer in *O*(*B*^{2}*m*^{2}) time, an exponential advantage over known classical techniques for exact quantum simulation. Here, *B* is the number of particles, and *m* is the binary precision of the potential in the region of interest. We want to emphasize that a quantum simulation would be substantially different from what is usually done on classical computers. Most significantly, we are proposing to explicitly track all of the nuclei and electrons on a Cartesian grid that is sufficiently fine, and with time steps sufficiently short, to capture the true electronic dynamics. We will show that this is not only more accurate but also requires fewer quantum resources.

The supporting information (SI) contains a detailed computation of the numbers of gates and qubits required for the quantum simulation of the Coulomb potential. The number of elementary gates required to evaluate this potential in 3 dimensions is $\frac{75}{4}m^{3} + \frac{51}{2}m^{2}$ per pair of particles (Fig. 2). We chose a method that minimizes the number of ancilla qubits and so is suited for small numbers of qubits. Note that this scaling is not asymptotically optimal (the asymptotic requirement would be *O*(*m*^{2})), so further improvement could be achieved for computations with high precision (large *m*) if suitable arithmetical algorithms were implemented. Storing the wave function of a system with *d* degrees of freedom requires *nd* qubits, so a system of *B* particles, with *d* = 3*B* − 6 degrees of freedom, requires *n*(3*B* − 6) qubits. To this, one must add the qubits needed for the ancilla registers, only 4 of which are required for the Coulomb potential, meaning that simulating these potentials requires *n*(3*B* − 6) + 4*m* qubits (Fig. 2).

Fig. 2.
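These counts are easy to check numerically. The sketch below evaluates a pairwise Coulomb sum over the *O*(*B*^{2}) particle pairs and encodes the gate- and qubit-count formulas quoted above; the four-particle (H₂-like) charge arrangement is an illustrative assumption, not data from the paper.

```python
import numpy as np
from itertools import combinations

# Pairwise Coulomb potential in atomic units: O(B^2) terms for B particles.
# The particle data below (an H2-like arrangement) is purely illustrative.
charges = np.array([1.0, 1.0, -1.0, -1.0])          # two protons, two electrons
positions = np.array([[0.0, 0.0, 0.0],
                      [1.4, 0.0, 0.0],
                      [0.7, 0.5, 0.0],
                      [0.7, -0.5, 0.0]])

def coulomb_potential(q, r):
    """Sum q_i q_j / r_ij over all B(B-1)/2 pairs."""
    V = 0.0
    for i, j in combinations(range(len(q)), 2):
        V += q[i] * q[j] / np.linalg.norm(r[i] - r[j])
    return V

V = coulomb_potential(charges, positions)

# Resource counts quoted in the text:
def gates_per_pair(m):
    """Elementary gates to evaluate the pair potential at m-bit precision."""
    return 75 / 4 * m ** 3 + 51 / 2 * m ** 2

def total_qubits(B, n, m):
    """Wave-function register n(3B-6) plus 4 ancilla registers of m qubits."""
    return n * (3 * B - 6) + 4 * m

B = len(charges)
pairs = B * (B - 1) // 2        # number of Coulomb terms for B = 4
```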

On a small quantum computer, the computational cost of simulating the interactions between many particles may prove prohibitive. One could try to simplify matters and focus only on the nuclear motion by employing the Born–Oppenheimer approximation, in which the solution of the electronic structure problem provides a potential energy surface on which the nuclei move. However, we show that quantum computers would benefit from the Born–Oppenheimer approximation only rarely: for many chemical reactions, simulating the full dynamics of both electrons and nuclei will not only yield more accurate results but will also, remarkably, be faster. This is in sharp contrast to the study of chemical dynamics on classical computers, where there is frequent need to simplify calculations by using the Born–Oppenheimer approximation.

It is difficult to estimate the precise computational cost of using the Born–Oppenheimer approximation, because different potential energy surfaces have different functional forms. Nevertheless, for any general fitting technique, the complexity of the interpolating function grows exponentially with the dimension of the system, because exponentially many data points must be used in the fit if one is to maintain uniform accuracy over the entire domain. We can provide an estimate of the computational cost for a potential energy surface that is an interpolating polynomial of degree *K* along each dimension (see SI). In that case, the total cost of the adiabatic simulation is $K^{3B-6}\left(\frac{5}{4}m^{3} + \frac{5}{2}m^{2}\right)$ per nuclear time step (which is usually ∼1,000 times longer than an electronic time step). Numerical experiments with the BKMP2 surface for H_{3} (28) indicate that *K* must be chosen to equal at least 15 if one aspires to 0.1% accuracy in the potential, and more for chemical accuracy. With *K* = 15, the exponential growth implies that even for heavy elements (*Z* ≈ 100), the fully dimensional diabatic treatment is faster for all reactions involving more than four atoms, and even for many smaller reactions, as shown in Fig. 3.

Fig. 3.
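A rough classical comparison of the two per-nuclear-time-step operation counts, using the formulas quoted above. The factor of 1,000 electronic steps per nuclear step and *K* = 15 follow the text; the diabatic particle count of ≈505 for five atoms of a *Z* ≈ 100 element (5 nuclei plus ∼500 electrons) is an illustrative estimate.

```python
# Compare per-nuclear-time-step costs of the fitted-surface (adiabatic)
# and all-particle (diabatic) treatments, using the counts from the text.
def adiabatic_cost(B_nuc, m, K=15):
    """Fitted polynomial surface: K^(3B-6) grows exponentially with atoms."""
    return K ** (3 * B_nuc - 6) * (5 / 4 * m ** 3 + 5 / 2 * m ** 2)

def diabatic_cost(B_total, m, steps_per_nuclear=1000):
    """All-particle Coulomb dynamics: O(B^2) pairs per electronic step."""
    pairs = B_total * (B_total - 1) // 2
    return steps_per_nuclear * pairs * (75 / 4 * m ** 3 + 51 / 2 * m ** 2)

m = 16  # assumed binary precision
# Five atoms of a heavy element (Z ~ 100): ~5 nuclei + ~500 electrons.
diabatic_wins = diabatic_cost(505, m) < adiabatic_cost(5, m)
```

Even with hundreds of explicit electrons, the diabatic pair count grows only quadratically, while the adiabatic cost grows as 15^{3B−6} in the number of atoms, which is why the crossover occurs at such small molecules.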

It is perhaps beneficial to briefly discuss the intuitive reasons why the use of precomputed potential energy surfaces is not as useful on quantum computers as it is on classical machines. Classically, an exponential amount of memory is required in general to store the wave function. However, the ratio of computing power to memory in most classical computers is large, and the basic floating-point operations are hardwired. Because the storage capacity is often the limiting factor, if a wave function can be stored in memory, its motion on a surface can probably be computed. Quantum computers, on the other hand, require only linearly many qubits to store the wave function in a superposition state. However, the use of a precomputed potential requires either the evaluation of a complicated function or a look-up in a potentially large table. The potential energies must be computed on the fly to take advantage of quantum parallelism, and it is therefore imperative to keep the interaction potential as simple as possible. This is achieved by treating all of the particles explicitly, interacting via the Coulomb interaction.

An alternative way to compute a potential energy surface would be to embed an on-the-fly calculation of electronic structure in the quantum algorithm and thus avoid a classically precomputed fit. This can be done efficiently because electronic structure calculations can be performed in polynomial time on quantum computers (18). Hence, the quantum circuit 𝒱 would be replaced by a black box containing the efficient quantum version of the full configuration interaction (FCI) procedure (18). Because the quantum simulation algorithm exploits quantum effects, a single evaluation of electronic structure is sufficient for each time step: All of the nuclear configurations are evaluated in superposition. However, the electronic structure circuit for the proposed algorithm would require the atomic positions as input. This would require a modification of the original algorithm so that the Coulomb and exchange integrals are computed by using a quantum circuit rather than classically. Although this approach, unlike the Born–Oppenheimer approximation, is asymptotically appealing, the large overhead required to compute the exchange integrals quantum mechanically makes it uninteresting for near-future implementation.

Steane has recently proposed a design for a 300-qubit, error-corrected, trapped-ion quantum computer that could perform ∼10^{9} quantum operations using methods for quantum gates that have already been experimentally implemented (19). On a three-dimensional grid of 2^{30} points, such a computer could store the wave function of a 10-particle system (Fig. 2). By comparison, classical computers implementing a comparable grid-based algorithm are limited to computing the full quantum evolution of a three-particle system, such as a helium atom (9, 10). Even a relatively modest quantum computer with 100 qubits could simulate the electron dynamics or ionization of the lithium atom, a task beyond the reach of classical computers using grid-based methods (10). The simplest chemical reaction, H + H_{2} → H_{2} + H, is a six-particle system and could therefore be simulated by Steane's computer in a fully dimensional diabatic regime. Although other classical methods may be able to reach somewhat larger examples, the exponential scaling of all known classical exact methods means that the examples given here are close to the crossover point between classical and quantum computing for chemical dynamics. There remain 2 questions: how to prepare the initial state of the quantum computer and how to extract useful information from the final state.

## State Preparation
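The register arithmetic behind these estimates can be spelled out. In this sketch, the ancilla precision *m* = 15 is an assumed value chosen to illustrate how the counts fit the 300-qubit budget; the text itself does not fix *m* for this example.

```python
# Qubit bookkeeping for the proposed 300-qubit machine (illustrative check).
n = 10                   # qubits per coordinate -> 2^10 points per axis,
                         # i.e. a 2^30-point 3D grid per particle
B = 10                   # particles

# One full 3D position register per particle:
qubits_full = 3 * B * n

# Alternatively, using the 3B - 6 internal degrees of freedom plus the
# 4 m-qubit ancilla registers (m = 15 is an assumed precision):
m = 15
qubits_internal = n * (3 * B - 6) + 4 * m
```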

The preparation of an arbitrary quantum state is exponentially hard in general (29). Nevertheless, we show that the most commonly used chemical wave functions can be prepared efficiently. Because significant deviations from Born–Oppenheimer behavior occur during the evolution and usually do not concern the initial state, we prepare the initial state using the Born–Oppenheimer approximation. That is, the system wave function will be a product state ∣ψ〉 = ∣ψ_{nuc}〉∣ψ_{elec}〉 of nuclear and electronic wave functions, each in its own quantum register.

Nuclear motions can be expressed in normal-mode coordinates if the displacements from equilibrium are small, which is the case in molecules at chemically relevant temperatures. The nuclear wave function is then, along each coordinate, a superposition of harmonic-oscillator eigenstates, which are themselves products of Gaussians and Hermite polynomials. It is known that superpositions corresponding to efficiently integrable functions can be prepared on a quantum computer in polynomial time (15, 30). Therefore, Gaussian wave packets and Hermite polynomials are efficiently preparable, meaning that we can prepare good approximations to molecular vibrational states and Gaussian wave packets.
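Classically, the amplitudes to be loaded along each normal mode are simple Hermite-Gaussian functions. The sketch below tabulates normalized harmonic-oscillator eigenstates on a grid (atomic-style units with mass = ω = 1; the grid bounds and qubit count are illustrative) and checks their orthogonality.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

# Harmonic-oscillator eigenstates H_v(x) exp(-x^2/2) discretized on a grid --
# the kind of efficiently integrable amplitude function the text says can be
# loaded into a qubit register. Parameters are illustrative.
n_qubits = 8
N = 2 ** n_qubits
x = np.linspace(-8, 8, N)

def ho_eigenstate(v, x):
    """Normalized Hermite-Gaussian along one normal-mode coordinate."""
    coeffs = np.zeros(v + 1)
    coeffs[v] = 1.0                       # select the Hermite polynomial H_v
    psi = hermval(x, coeffs) * np.exp(-x ** 2 / 2)
    return psi / np.linalg.norm(psi)      # normalize on the grid

psi0 = ho_eigenstate(0, x)
psi1 = ho_eigenstate(1, x)
overlap = abs(np.dot(psi0, psi1))         # distinct eigenstates are orthogonal
```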

Gaussian wave packets can also be used to prepare the electronic wave function. Indeed, it is customary in electronic structure theory to expand molecular orbitals in terms of atomic orbitals, which themselves are superpositions of Gaussian-type orbitals. The orbital occupation numbers can be obtained from electronic structure calculations, including our earlier quantum algorithm (18). Consequently, the occupied orbitals, which are superpositions of Gaussians, can all be prepared efficiently.

One final consideration is the exchange symmetry of multielectron wave functions. Abrams and Lloyd proposed a method for preparing antisymmetric wave functions (31) starting with a Hartree product of molecular orbitals. We propose to use this method for preparation of multielectron wave functions, noting that it suffices to prepare the initial state with the correct exchange symmetry, because the exchange operator commutes with the Hamiltonian.

Of course, other strategies for state preparation can be pursued, such as the phase-estimation algorithm (32). If we are able to prepare a state ∣*S*〉 that has a significant overlap 〈*S*∣*E*〉 with an eigenstate ∣*E*〉 (not necessarily the ground state), phase estimation followed by measurement will collapse the wave function to the desired eigenstate with probability ∣〈*S*∣*E*〉∣^{2}. Alternatively, the ground state can be prepared by the adiabatic state-preparation algorithm (18). This is of particular significance for the simulation of full chemical dynamics, because the electronic ground state is usually a good approximation for the reactants.

## Measurement
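The collapse statistics of this strategy are easy to illustrate classically: expanding a trial state in the eigenbasis of a Hamiltonian gives the probabilities ∣〈*S*∣*E*〉∣^{2} with which phase estimation followed by measurement would project onto each eigenstate. The random Hermitian matrix below merely stands in for a molecular Hamiltonian.

```python
import numpy as np

# Phase estimation on trial state |S> collapses it onto eigenstate |E>
# with probability |<S|E>|^2. A random Hermitian matrix stands in for a
# molecular Hamiltonian (purely illustrative).
rng = np.random.default_rng(7)
dim = 16
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                      # Hermitian "Hamiltonian"
evals, evecs = np.linalg.eigh(H)              # columns of evecs are |E>

S = rng.normal(size=dim) + 1j * rng.normal(size=dim)
S /= np.linalg.norm(S)                        # normalized trial state |S>

probs = np.abs(evecs.conj().T @ S) ** 2       # collapse probabilities |<E|S>|^2
```

Because the eigenvectors form a complete orthonormal basis, the collapse probabilities sum to one; a trial state with large overlap on the target eigenstate succeeds in few repetitions.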

After preparing the initial state and simulating its time evolution using the methods described above, we must extract chemically relevant information from the final system wave function. Quantum tomography is the most general approach to the estimation of an unknown quantum state or quantum process (26): one measures the expectation values of a complete set of observables on an ensemble of identical quantum systems. However, such a full characterization of a quantum system requires resources that grow exponentially with its size. To avoid this problem, alternative approaches for the direct estimation of certain properties of quantum dynamical systems have recently been developed (33, 34). Here, we likewise show that the data of greatest chemical significance can be obtained directly with only a polynomial number of measurements. In particular, we present algorithms for obtaining the reaction probability, the rate constant, and state-to-state transition probabilities.

The reaction probability, given a certain initial wave function of the reactants, is the likelihood of observing, after a sufficiently long time, the products of the chemical reaction. To find it, we divide the real-space domain of the wave function into *r* disjoint regions corresponding to subdomains of chemical interest. In chemistry, these regions are typically a few simple polytopes. The simplest division is into only 2 regions, 1 for the reactants and 1 for the products, separated by the transition-state dividing surface (TSDS). The reaction probability is the sum of the probabilities of finding the final wave packet in the product region(s). It is straightforward to construct a classical point-location circuit for the function *R*(**x**) that, given a nuclear position vector **x**, identifies which region it is in by returning an integer label corresponding to that region. There is a corresponding reversible circuit that performs the transformation ∣**x**〉∣*y*〉 → ∣**x**〉∣*y* ⊕ *R*(**x**)〉. We can apply this circuit to the final state ∣ψ〉 = Σ_{x} *a*_{x}∣**x**〉, to which we add an ancilla register of ⌈log_{2} *r*⌉ qubits. That is, applying the reversible circuit to ∣ψ〉∣0〉 yields Σ_{x} *a*_{x}∣**x**〉∣*R*(**x**)〉. Measuring the ancilla register will return *i* with probability *P*_{i}, the probability of finding the wave packet in region *i*. We can obtain all of the probabilities *P*_{i} by an ensemble measurement of the ancilla register. Because individual measurements are uncorrelated, the error of the estimate of the probabilities decreases as 1/$\sqrt{M}$ for *M* repetitions of the experiment. However, it is possible to achieve a convergence of 1/*M*, the ultimate limit of quantum metrology (35), by using techniques such as amplitude estimation (36, 37). Next, we use these disjoint regions to compute the rate constant.
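The 1/$\sqrt{M}$ sampling behavior can be illustrated with a classical Monte Carlo stand-in for the ancilla measurement: repeated position measurements of a 1D wave packet, with the dividing surface placed at x = 0. All parameters below are illustrative.

```python
import numpy as np

# Classical illustration of the readout: sampling the region label R(x)
# estimates the product-region probability with error ~ 1/sqrt(M).
rng = np.random.default_rng(0)
N = 2 ** 10
x = np.linspace(-10, 10, N)
psi = np.exp(-0.5 * (x - 1.0) ** 2)           # packet centered in x > 0
probs = np.abs(psi) ** 2
probs /= probs.sum()                          # Born-rule distribution

P_product_exact = probs[x > 0].sum()          # "product" region: x > 0

M = 100_000                                   # repetitions of the experiment
samples = rng.choice(x, size=M, p=probs)      # simulated position measurements
P_product_est = np.mean(samples > 0)
error = abs(P_product_est - P_product_exact)  # shrinks like 1/sqrt(M)
```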

The rate constant *k*(*T*) at temperature *T* is a thermally weighted average of cumulative reaction probabilities (38):

$$k(T) = \frac{1}{2\pi\hbar\, Q(T)} \int_0^\infty N(E)\, e^{-E/k_B T}\, dE,$$

where *E* is the energy, *Q*(*T*) is the reactant partition function, and *N*(*E*) is the cumulative reaction probability, *N*(*E*) = Σ_{ζ} *P*_{r}(ζ, *E*). The vector ζ is a specification of all of the quantum numbers of the reactants, and *P*_{r}(ζ, *E*) is the reaction probability given that the reactants are in the eigenstate specified by ζ and *E*. The sum ranges over all possible ζ, with *E* running from zero to a cutoff. Note that on a quantum computer, the cutoff can be made exponentially large, or the energy step Δ*E* exponentially small, by adding more qubits.

We can compute the rate constant on a quantum computer if we propagate in time not a pure wave function but the correct thermal mixed state. In that case, the expectation value of the reaction probability equals the rate constant, up to a known factor. The required initial state is ρ(0) = *C*^{2} Σ_{ζ,E} Γ(*E*, *T*)^{2} ∣φ_{0}(ζ, *E*)〉〈φ_{0}(ζ, *E*)∣, where Γ(*E*, *T*) = (exp(−*E*/*k*_{B}*T*)Δ*E*/2πħ*Q*(*T*))^{1/2} is the square root of the appropriate Boltzmann factor, *C* is a normalization constant, and ∣φ_{0}(ζ, *E*)〉 is a real-space reactant eigenfunction corresponding to quantum numbers ζ and energy *E*. If we propagate ρ(0) for time *t* using the simulation algorithm, the system will be in the final state ρ(*t*) = *C*^{2} Σ_{ζ,E} Γ(*E*, *T*)^{2} ∣φ_{t}(ζ, *E*)〉〈φ_{t}(ζ, *E*)∣, where ∣φ_{t}(ζ, *E*)〉 denotes the time-evolved version of the state ∣φ_{0}(ζ, *E*)〉 (note that, except in special cases, ∣φ_{t}(ζ, *E*)〉 is not an eigenfunction of either reactants or products). Given a quantum register in this mixed state, we can add an ancilla qubit and divide the domain into reactant and product regions as described above. Finally, a measurement of the ancilla qubit produces ∣1〉 with probability *C*^{2}*k*(*T*). The precision of *k*(*T*) thus obtained improves as 1/*M* with the number of measurements if we use amplitude estimation. The previously proposed approach for estimating reaction rates (17) evaluates the rate constant by computing a flux–flux correlation function based on a polynomial-size sample of the wave function in the position basis. In contrast, our approach carries out the integration explicitly on the quantum computer by using amplitude estimation, which provides a quadratic improvement over algorithms that rely on classical sampling and postprocessing.
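The discretized thermal average can be checked classically. The sketch below evaluates *k*(*T*) both directly and through the Γ(*E*, *T*)^{2} weights that define ρ(0), using a toy step-function *N*(*E*), a unit partition function, and units with ħ = *k*_{B} = 1; all numbers are illustrative.

```python
import numpy as np

# Discretized thermal average:
#   k(T) = (2*pi*hbar*Q)^(-1) * sum_E N(E) * exp(-E/(kB*T)) * dE,
# with a toy step-function cumulative reaction probability N(E).
hbar = kB = 1.0
T = 0.05
E = np.linspace(0.0, 1.0, 2000)
dE = E[1] - E[0]

E_barrier = 0.1
N_E = (E > E_barrier).astype(float)           # toy cumulative reaction prob.
Q = 1.0                                       # toy reactant partition function

boltzmann = np.exp(-E / (kB * T))
k_T = (N_E * boltzmann * dE).sum() / (2 * np.pi * hbar * Q)

# The same result through the Gamma(E,T)^2 factors that weight rho(0):
weights = boltzmann * dE / (2 * np.pi * hbar * Q)   # Gamma(E,T)^2
k_T_alt = (N_E * weights).sum()
```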

*E*, i.e.,*C*Σ_{ζ,E}Γ(*E*,*T*)∣ζ,*E*〉, with*C*and Γ as defined above. Here, ∣ζ,*E*〉 contain only the state labels and not position-space wave functions. If we assume that the thermally accessible states can be enumerated by a polynomial number of qubits and that the energy can be specified to a certain precision Δ*E*, we see that the states ∣ζ,*E*〉 require only polynomially many qubits to store. The superposition itself can be prepared efficiently because Γ(*E*,*T*) is an exponential function of the energy and is therefore efficiently preparable (15, 30).The next step is to generate, by doing state preparation in superposition, the state ∣Φ

_{0}〉 =*C*Σ_{ζ,E}Γ(*E*,*T*)∣ζ,*E*〉∣φ_{0}(ζ,*E*)〉, in which each term is the tensor product of ∣ζ,*E*〉 and the corresponding real-space reactant eigenstate ∣φ_{0}(ζ,*E*)〉. The states ∣φ_{0}(ζ,*E*)〉 must have definite energy*E*. Hence, one can represent an initial state as a direct product of discrete reactant internal states [specified by ζ with energy*E*(ζ)] and a wave packet with kinetic energy*E*_{k}=*E*−*E*(ζ) (38). The discrete part can be prepared by using the state-preparation approach described above. The incoming wave packet can be approximated with a Gaussian with a kinetic energy expectation value of*E*_{k}. This approximation can be improved by increasing the width of the Gaussian, which can be doubled by the addition of a single qubit to the position register. This would halve the momentum uncertainty of the wave packet. With sufficiently many qubits, the error incurred in this approximation could be made smaller than errors from other sources, such as grid discretization. Once we have prepared ∣Φ_{0}〉, we will no longer use the register containing the states ∣ζ,*E*〉. If we trace this register out, we can see that the remaining state register is a mixed state with density operator ρ(0), as desired.Finally, we show how to obtain state-to-state transition probabilities. Most chemical reactions can be regarded as scattering processes, and it is therefore desirable to know the scattering

*S*-matrix. In particular, it is these state-to-state transition amplitudes that are accessible to experiment. Heretofore we have considered the joint wave function of all of the molecular species. To compute state-to-state probabilities, however, we must ensure that each reactant molecule is in a well-defined molecular state. For example, to probe the state-to-state dynamics of the H + H_{2}reaction, we would need to prepare a particular state of the hydrogen atom plus a state of the hydrogen molecule, and not simply a state of the overall H_{3}aggregate. Given these states, prepared in the center-of-mass coordinate systems of each molecule, one must perform coordinate transformations to put them on a common grid. For each molecule, the Cartesian coordinates of the particles are linear, invertible functions of their center-of-mass coordinates. Because the coordinate transformations can be computed by an efficient, reversible, classical circuit, they can also be efficiently computed quantum mechanically.We concentrate here on obtaining only the vibrational state-to-state distributions. Using the techniques of state preparation above, we prepare each reactant molecule in a vibrational eigenstate so that along each of its normal mode coordinates, the molecule is in an eigenstate of the corresponding potential. After all of the molecules are thus prepared, their wave functions are transformed to a common Cartesian system. This initial state is evolved as usual until the molecules separate into isolated products.
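The width-versus-momentum-uncertainty trade-off invoked above is a standard Fourier property and can be verified numerically; the grids below are illustrative.

```python
import numpy as np

# Check that doubling the position width of a Gaussian wave packet halves
# its momentum uncertainty. Grid parameters are illustrative.
N = 2 ** 12
L = 200.0
x = (np.arange(N) - N // 2) * (L / N)
p = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

def momentum_std(sigma_x):
    """Momentum-space standard deviation of a Gaussian of position width sigma_x."""
    psi = np.exp(-x ** 2 / (4 * sigma_x ** 2)).astype(complex)
    psi /= np.linalg.norm(psi)
    phi = np.fft.fft(psi)
    phi /= np.linalg.norm(phi)                # momentum-space amplitudes
    prob = np.abs(phi) ** 2
    mean = (prob * p).sum()
    return np.sqrt((prob * (p - mean) ** 2).sum())

ratio = momentum_std(1.0) / momentum_std(2.0)   # doubling width halves dp
```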

At large intermolecular separation, the center-of-mass motion and the normal mode coordinates again become separable. Therefore, an orthogonal transformation can be applied to each product molecular fragment so that its Cartesian coordinates can be transformed into normal modes. The quantum phase estimation algorithm can then be used to extract the populations and eigenenergies of the product vibrational states.
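The statistics of this measurement step can be mimicked classically: each run of phase estimation projects onto one product eigenstate, observed with probability equal to its population, so populations are estimated from the frequencies of repeated runs. A minimal sampling sketch follows; the population vector, shot count, and seed are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative product vibrational populations |alpha'_v|^2: at ordinary
# temperatures only a few product eigenstates carry appreciable weight.
populations = np.array([0.70, 0.20, 0.08, 0.02])

# Each simulated phase-estimation run yields one eigenstate label v',
# drawn with probability equal to its population.
shots = 4000
outcomes = rng.choice(len(populations), size=shots, p=populations)

# Estimated state-to-state probabilities are just observed frequencies;
# the statistical error of each estimate shrinks as 1/sqrt(shots).
estimates = np.bincount(outcomes, minlength=len(populations)) / shots
```

Because the weight is concentrated in a handful of eigenstates, a polynomial number of shots pins down every appreciable transition probability, which is the claim made in the next paragraph.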

For an isolated product molecule, we can expand the final state in terms of the normal modes: ∣Ψ′〉 = Σ_{**v**′} α′_{**v**′}∣ξ′_{**v**′}〉, where ∣ξ′_{**v**′}〉 is the position representation of the eigenstate corresponding to product occupation number vector **v**′. The state-to-state transition probabilities are then *P*_{**v**′←**v**} = ∣α′_{**v**′}∣^{2}, and as mentioned above, they can be determined by using the phase-estimation algorithm of Abrams and Lloyd (32) for each degree of freedom. We can obtain good measurement statistics with only a polynomial number of measurements because at typical temperatures, the products of chemical reactions will have appreciable population in only a small number of vibrational eigenstates.

## Conclusion

The advantages of the methods presented here are not limited to chemical reaction dynamics but can be applied to many areas of physics. This is true, in particular, because the complexity of the algorithm is proportional to the complexity of calculating the potential and because the laws of nature are usually captured by simple, few-body interactions. For example, by using a quantum computer to study atoms in the presence of a strong, time-dependent electric field, one could simulate such effects as multielectron ionization or attosecond pulse generation (9, 10, 27). Quantum computers also offer the promise of predicting real-time properties of superfluids (39, 40) and of providing tests for effective potentials for water phases (41).

We close by reiterating the need for a careful reexamination of the suitability of traditional quantum approximations for use on quantum computers. Previously, we had shown that a quantum implementation of full configuration interaction scales better than coupled cluster approaches [in particular CCSD(T)], and in this work, we show that simulating the complete nuclear and electronic time evolution is more efficient on quantum computers than using the Born–Oppenheimer approximation, a central tool of theoretical chemistry. We can imagine the development of a wide variety of techniques and approaches tailored for natural quantum simulators, which, themselves relying on the rules of quantum mechanics, give us a deeper understanding of physical phenomena.

## Acknowledgments.

We thank Eric J. Heller and Jacob Biamonte for valuable discussions. This work was supported by the Army Research Office (Project W911NF-07-1-0304 and the QuaCCR Program) and the Joyce and Zlatko Baloković Scholarship.

## Supporting Information

Supporting Information (PDF)


## References

1. U Manthe, H-D Meyer, LS Cederbaum, Wave-packet dynamics within the multiconfiguration Hartree framework: General aspects and application to NOCl. *J Chem Phys* **97**, 3199–3213 (1992).
2. YH Wu, VS Batista, Matching-pursuit for simulations of quantum processes. *J Chem Phys* **118**, 6720–6724 (2003).
3. M Ben-Nun, TJ Martinez, Nonadiabatic molecular dynamics: Validation of the multiple spawning method for a multidimensional problem. *J Chem Phys* **108**, 7244–7257 (1998).
4. T Wu, H-J Werner, U Manthe, First-principles theory for the H + CH_{4} → H_{2} + CH_{3} reaction. *Science* **306**, 2227–2229 (2004).
5. H Wang, Basis set approach to the quantum dissipative dynamics: Application of the multiconfiguration time-dependent Hartree method to the spin-boson problem. *J Chem Phys* **113**, 9948–9956 (2000).
6. X Chen, VS Batista, Matching-pursuit for simulations of quantum processes. *J Photochem Photobiol A* **190**, 274 (2007).
7. M Ben-Nun, J Quenneville, TJ Martínez, Ab initio multiple spawning: Photochemistry from first principles quantum molecular dynamics. *J Phys Chem A* **104**, 5161–5175 (2000).
8. BG Levine, TJ Martínez, Isomerization through conical intersections. *Annu Rev Phys Chem* **58**, 613–634 (2007).
9. JS Parker, LR Moore, KJ Meharg, D Dundas, KT Taylor, Double-electron above threshold ionization of helium. *J Phys B* **34**, L69–L78 (2001).
10. C Ruiz, L Plaja, L Roso, Lithium ionization by a strong laser field. *Phys Rev Lett* **94**, 063002 (2005).
11. TN Rescigno, M Baertschy, WA Isaacs, CW McCurdy, Collisional breakup in a quantum system of three charged particles. *Science* **286**, 2474–2479 (1999).
12. W Vanroose, F Martin, TN Rescigno, CW McCurdy, Complete photo-induced breakup of the H_{2} molecule as a probe of molecular electron correlation. *Science* **310**, 1787–1789 (2005).
13. R Feynman, Simulating physics with computers. *Int J Theor Phys* **21**, 467–488 (1982).
14. S Lloyd, Universal quantum simulators. *Science* **273**, 1073–1078 (1996).
15. C Zalka, Simulating quantum systems on a quantum computer. *Proc R Soc London Ser A* **454**, 313–322 (1998).
16. S Wiesner, Simulations of many-body quantum systems by a quantum computer. arXiv:quant-ph/9603028 (1996).
17. DA Lidar, H Wang, Calculating the thermal rate constant with exponential speedup on a quantum computer. *Phys Rev E* **59**, 2429–2438 (1999).
18. A Aspuru-Guzik, AD Dutoi, PJ Love, M Head-Gordon, Simulated quantum computation of molecular energies. *Science* **309**, 1704–1707 (2005).
19. AM Steane, How to build a 300 bit, 1 giga-operation quantum computer. *Quant Inf Comput* **7**, 171–183 (2007).
20. M Feit, J Fleck, Solution of the Schrödinger equation by a spectral method. *J Comput Phys* **47**, 412–433 (1982).
21. D Kosloff, R Kosloff, A Fourier method solution for the time dependent Schrödinger equation as a tool in molecular dynamics. *J Comput Phys* **51**, 35–53 (1983).
22. D Coppersmith, An approximate Fourier transform useful in quantum factoring. arXiv:quant-ph/0201067 (2002).
23. E Bernstein, U Vazirani, Quantum complexity theory. (Assoc for Comput Machinery, New York), pp. 11–20 (1993).
24. DW Berry, G Ahokas, R Cleve, BC Sanders, Efficient quantum algorithms for simulating sparse Hamiltonians. *Commun Math Phys* **270**, 359–371 (2006).
25. DE Knuth, *The Art of Computer Programming, Vol 2: Seminumerical Algorithms* (Addison–Wesley, 3rd Ed., Boston, 1997).
26. MA Nielsen, IL Chuang, *Quantum Computation and Quantum Information* (Cambridge Univ Press, 1st Ed., Cambridge, UK, 2000).
27. BI Schneider, LA Collins, SX Hu, Parallel solver for the time-dependent linear and nonlinear Schrödinger equation. *Phys Rev E* **73**, 036708 (2006).
28. AI Boothroyd, WJ Keogh, PG Martin, MR Peterson, A refined H_{3} potential energy surface. *J Chem Phys* **104**, 7139–7152 (1996).
29. D Aharonov, A Ta-Shma, Adiabatic quantum state generation and statistical zero knowledge. (Assoc for Comput Machinery, New York), pp. 20–29 (2003).
30. L Grover, T Rudolph, Creating superpositions that correspond to efficiently integrable probability distributions. arXiv:quant-ph/0208112 (2002).
31. DS Abrams, S Lloyd, Simulation of many-body Fermi systems on a universal quantum computer. *Phys Rev Lett* **79**, 2586–2589 (1997).
32. DS Abrams, S Lloyd, Quantum algorithm providing exponential speed increase for finding eigenvalues and eigenvectors. *Phys Rev Lett* **83**, 5162–5165 (1999).
33. M Mohseni, DA Lidar, Direct characterization of quantum dynamics. *Phys Rev Lett* **97**, 170501 (2006).
34. J Emerson, et al., Symmetrized characterization of noisy quantum processes. *Science* **317**, 1893–1896 (2007).
35. V Giovannetti, S Lloyd, L Maccone, Quantum metrology. *Phys Rev Lett* **96**, 010401 (2006).
36. G Brassard, P Hoyer, M Mosca, A Tapp, Quantum amplitude amplification and estimation. arXiv:quant-ph/0005055 (2000).
37. E Knill, G Ortiz, RD Somma, Optimal quantum measurements of expectation values of observables. *Phys Rev A* **75**, 012328 (2007).
38. DJ Tannor, *Introduction to Quantum Mechanics: A Time-Dependent Perspective* (Univ Sci Books, Mill Valley, CA, 2007).
39. RA Aziz, MJ Slaman, An examination of ab initio results for the helium potential energy curve. *J Chem Phys* **94**, 8047–8053 (1991).
40. DM Ceperley, Path integrals in the theory of condensed helium. *Rev Mod Phys* **67**, 279–355 (1995).
41. GS Fanourgakis, GK Schenter, SS Xantheas, A quantitative account of quantum effects in liquid water. *J Chem Phys* **125**, 141102 (2006).
42. AY Smirnov, S Savel'ev, LG Mouroukh, F Nori, Modeling chemical reactions using semiconductor quantum dots. *Europhys Lett* **80**, 67008 (2007).

## Information & Authors

### Information


#### Copyright

© 2008 by The National Academy of Sciences of the USA.

#### Submission history

**Received**: August 22, 2008

**Published online**: December 2, 2008

**Published in issue**: December 2, 2008


#### Notes

This article is a PNAS Direct Submission. T.J.M. is a guest editor invited by the Editorial Board.

This article contains supporting information online at www.pnas.org/cgi/content/full/0808245105/DCSupplemental.

* Other *N*-body potentials could also be efficiently evaluated, at a cost of *O*(*m*^{2}*B*^{N}) if they contain only arithmetic that requires *O*(*m*^{2}) time. For example, the Lennard–Jones potential could be computed by using ($\frac{75}{4}$*m*^{3} + $\frac{51}{2}$*m*^{2}) gates per pair of particles. Potentials other than the Coulomb potential arise in situations such as liquid helium clusters; although we do not discuss them in detail, the present algorithm could simulate such potentials with minimal adjustments.

### Authors

#### Competing Interests

The authors declare no conflict of interest.

