# Elucidating reaction mechanisms on quantum computers


Edited by David P. DiVincenzo, Institute for Quantum Information, RWTH Aachen University, and accepted by Editorial Board Member Evelyn L. Hu May 24, 2017 (received for review December 31, 2016)

## Significance

Our work addresses the question of compelling killer applications for quantum computers. Although quantum chemistry is a strong candidate, the lack of details of how quantum computers can be used for specific applications makes it difficult to assess whether they will be able to deliver on the promises. Here, we show how quantum computers can be used to elucidate the reaction mechanism for biological nitrogen fixation in nitrogenase, by augmenting classical calculation of reaction mechanisms with reliable estimates for relative and activation energies that are beyond the reach of traditional methods. We also show that, taking into account overheads of quantum error correction and gate synthesis, a modular architecture for parallel quantum computers can perform such calculations with components of reasonable complexity.

## Abstract

With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

Chemical reaction mechanisms are networks of molecular structures representing short- or long-lived intermediates connected by transition structures. The relative energies of all stable structures determine their relative thermodynamic stability. Energy differences between local minima and the connecting transition structures determine the rates of interconversion, i.e., the chemical kinetics of the process. Because these differences enter exponential expressions, very accurate values are required for a reliable evaluation of the rate constants. At its core, the detailed understanding and prediction of complex reaction mechanisms therefore requires highly accurate electronic structure methods. However, the electron correlation problem remains, despite decades of progress (1), one of the most vexing problems in quantum chemistry. Although approximate approaches, such as density functional theory (DFT) (2), are very popular, their accuracy is often too low for quantitative predictions (see, e.g., refs. 3 and 4); this holds particularly true for molecules with many energetically close-lying orbitals. On classical computers, even far fewer than a hundred strongly correlated electrons are already out of reach for systematically improvable ab initio methods that could achieve the required accuracy.
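Because rate constants depend exponentially on energy differences, small errors in computed energies are amplified multiplicatively. A minimal sketch of this error propagation (illustrative values only, not data from the paper):

```python
import math

# How an error in a computed activation energy propagates into a rate
# constant: k is proportional to exp(-Ea / (R*T)), so an energy error
# dE multiplies the predicted rate by exp(dE / (R*T)).
R = 8.314462618  # gas constant, J/(mol*K)
T = 298.15       # room temperature, K

def rate_error_factor(delta_e_kcal_per_mol: float) -> float:
    """Multiplicative error in a rate constant caused by an error of
    delta_e_kcal_per_mol in the activation energy."""
    delta_e = delta_e_kcal_per_mol * 4184.0  # kcal/mol -> J/mol
    return math.exp(delta_e / (R * T))

# A 1 kcal/mol energy error already changes the predicted rate ~5-fold
# at room temperature, which is why "chemical accuracy" is so demanding.
print(rate_error_factor(1.0))
```

This is the quantitative sense in which "very accurate energy differences are required": tolerable errors are on the kcal/mol scale or below.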

The apparent intractability of accurate simulations of such quantum systems led Richard Feynman to propose quantum computers. The promise of exponential speedups for quantum simulation on quantum computers was first investigated by Lloyd (5) and Zalka (6) and was directly applied to quantum chemistry by Lidar, Aspuru-Guzik, and others (7–11). Quantum chemistry simulation has remained an active area of quantum algorithm development, with ever more sophisticated methods being used to reduce the costs of quantum chemistry simulation (12–20).

The promise of exponential speedups for the electronic structure problem has led many to suspect that quantum computers will one day revolutionize chemistry and materials science. However, a number of important questions remain. Not the least of these is the question of how exactly to use a quantum computer to solve an important problem in chemistry. The inability to point to a clear use case complete with resource and cost estimates is a major drawback; after all, even an exponential speedup may not lead to a useful algorithm if a typical, practical application requires an amount of time and memory that is beyond the reach of even a quantum computer.

Here, we demonstrate, for an important prototypical chemical system, how a quantum computer would be used, in practice, to address an open problem, and we estimate how large and how fast a quantum computer would have to be to perform such calculations within a reasonable amount of time. Our findings set a target for the type and size of quantum device that we would like to see emerge from existing research and further give confidence that quantum simulation will be able to provide answers to problems that are both scientifically and economically impactful.

The chemical process that we consider in this work is biological nitrogen fixation by the enzyme nitrogenase (22). This enzyme accomplishes the remarkable transformation of dinitrogen into two ammonia molecules under ambient conditions. Whereas the industrial Haber–Bosch catalyst requires high temperature and pressure and is therefore energy-intensive, the active site of Mo-dependent nitrogenase, the iron molybdenum cofactor (FeMoco) (23, 24), can split the dinitrogen triple bond at room temperature and standard pressure. Mo-dependent nitrogenase consists of two subunits: the Fe protein, a homodimer, and the MoFe protein, a heterotetramer (Fig. 1, *Left*), with the FeMoco buried in the latter (Fig. 1, *Middle*). Despite the importance of this process for fertilizer production, which makes the nitrogen in air accessible to plants, the mechanism of nitrogen fixation at FeMoco is not known. Experiments have not yet been able to provide sufficient details on the chemical mechanism, and theoretical attempts are hampered by intrinsic methodological limitations of traditional quantum chemical methods.

## Quantum Chemical Methods for Mechanistic Studies

At the heart of any chemical process is its mechanism, the elucidation of which requires the identification of all relevant stable intermediates and transition states. In general, a multitude of charge and spin states must be explicitly calculated in search of the relevant ones that make the whole chemical process viable. Such a mechanistic exploration can lead to thousands of elementary reaction steps (25) whose reaction energies must be reliably calculated. In the case of nitrogenase, numerous protonated intermediates of dinitrogen-coordinating FeMoco, and subsequently reduced intermediates in different charge and spin states, are feasible and must be assessed with respect to their relative energy. Kinetic modeling, in particular, poses tight limits on the accuracy of the activation energies that enter the exponentials of rate expressions.

For nitrogenase, an electrostatic quantum mechanical/molecular mechanical (QM/MM) model (26) that captures the embedding of FeMoco into the protein pocket of nitrogenase can properly account for the protein environment. Accordingly, we consider a structural model for the active site of nitrogenase (Fig. 1, *Right*) carrying only models of the anchoring groups of the protein, which represents a suitable QM part in such calculations. Studying this bare model is no limitation, as it does not affect our feasibility analysis (electrostatic QM/MM embedding will not change the number of orbitals considered for the wave function construction). We carried out full molecular structure optimizations with DFT methods of this FeMoco model in different charge and spin states, to avoid basing our analysis on a single electronic structure. Although our FeMoco model is taken from the resting state, binding of a small molecule such as dinitrogen, dihydrogen, diazene, or ammonia will not decisively change the complexity of its electronic structure.

The Born–Oppenheimer approximation assigns an electronic energy to every molecular structure. The accurate calculation of this energy is the pivotal challenge, here considered by quantum computing. Characteristic molecular structures are optimized to provide local minimum structures indicating stable intermediates and first-order saddle points representing transition structures. The electronic energy differences for elementary steps that connect two minima through a transition structure enter expressions for rate constants by virtue of Eyring’s absolute rate theory (ART). Although more information on the potential energy surface as well as dynamic and quantum effects may be taken into account, ART is accurate even for large molecules such as enzymes (27, 28). These rate constants then enter a kinetic description of all elementary steps that ultimately provides a complete picture of the chemical mechanism under consideration.

### Exact Diagonalization Methods in Chemistry.

If the frontier orbital region around the Fermi level of a given molecular structure is dense, as is the case in polynuclear transition-metal clusters such as FeMoco, many electronic configurations contribute to the wave function, and a multiconfigurational method such as the complete active space self-consistent field (CASSCF) approach is required.

CASSCF is traditionally implemented as an exact diagonalization method, which limits its applicability to 18 electrons in 18 (spatial) orbitals, because of the steep scaling of many-electron basis states with the number of electrons and orbitals (34). The polynomially scaling density matrix renormalization group (DMRG) algorithm (35) can push this limit to about 100 spatial orbitals; this, however, also comes at the cost of an iterative procedure whose convergence for strongly correlated molecules is, due to the matrix product state representation of the electronic wave function, neither easy to achieve nor guaranteed.
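The combinatorial growth that limits exact diagonalization is easy to make concrete: a CAS(*N*,*L*) space contains a product of binomial coefficients many Slater determinants. A small sketch, whose CAS(18,18) figure illustrates the limit quoted above:

```python
from math import comb

def cas_dimension(n_electrons: int, n_orbitals: int, spin: int = 0) -> int:
    """Number of Slater determinants in a CAS(n_electrons, n_orbitals)
    active space with n_alpha - n_beta = spin."""
    n_alpha = (n_electrons + spin) // 2
    n_beta = n_electrons - n_alpha
    return comb(n_orbitals, n_alpha) * comb(n_orbitals, n_beta)

# CAS(18,18): ~2.4 billion determinants, near the practical limit of
# exact diagonalization; each extra orbital multiplies the count again.
print(cas_dimension(18, 18))  # 2363904400
```

The DMRG sidesteps this explicit enumeration via a matrix product state ansatz, which is why it reaches ~100 orbitals, at the cost of the convergence issues noted above.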

### Ways Quantum Computers Will Help Solve These Problems.

Molecular structure optimizations are commonly performed with standard DFT approaches. DFT-optimized molecular structures are, in general, reliable, even if the corresponding energies are affected by large uncontrollable errors. The latter problem can be solved by a quantum computer that implements a multiconfigurational wave function model to access truly large active orbital spaces. The orbitals for this model do not necessarily need to be optimized, as natural orbitals can be taken from an unrestricted Hartree–Fock (36) or small-CAS CASSCF calculation. The missing dynamic correlation can then be included in a “perturb-then-diagonalize” fashion before the quantum computations start or in a “diagonalize-then-perturb” fashion, where the quantum computer is used to compute the higher-order reduced density matrices required. The former approach, i.e., built-in dynamic electron correlation, is considerably more advantageous, as no wave function-derived quantities need to be calculated. One option for this approach is, for example, to consider dynamic correlation through DFT that avoids any double-counting effects by virtue of range separation, as has already been successfully studied for CASSCF and DMRG (37, 38). Fig. 2 presents a flowchart that describes the steps of a quantum computer-assisted chemical mechanism exploration. Moreover, the quantum computer results can be used for the validation and improvement of parametrized approaches such as DFT, to improve the latter for the massive prescreening of structures and energies.
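One common heuristic for choosing the active orbitals mentioned above is to keep those whose natural-occupation numbers deviate significantly from 0 or 2. A sketch with made-up occupation numbers and conventional (not paper-specified) thresholds:

```python
def select_active_space(occupations, lower=0.02, upper=1.98):
    """Return indices of orbitals whose natural-occupation numbers
    deviate from 0 or 2, flagging them as statically correlated.
    Thresholds are conventional rule-of-thumb values."""
    return [i for i, n in enumerate(occupations) if lower < n < upper]

# Hypothetical natural occupations from, e.g., an unrestricted
# Hartree-Fock calculation (made-up numbers for illustration):
occ = [2.00, 1.99, 1.95, 1.52, 1.07, 0.93, 0.48, 0.05, 0.01, 0.00]
print(select_active_space(occ))  # -> [2, 3, 4, 5, 6, 7]
```

For a system like FeMoco, dozens of orbitals survive such a screening, which is precisely what pushes the problem beyond classical exact diagonalization.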

## Quantum Simulation of Quantum Chemical Systems

Ground state energies on a quantum computer can be obtained by quantum phase estimation (QPE). Applying the time-evolution operator to an eigenstate of the Hamiltonian imprints a phase proportional to its energy, which QPE reads out (see *SI Appendix*); the time evolution itself can be approximated by Trotter–Suzuki splitting, and efficient methods exist to implement each of the terms of a second-quantized Hamiltonian (11). Although algorithms are known that can achieve better scaling than low-order Trotter–Suzuki methods for some problems (39–44), they are more challenging to lay out as circuits, and preliminary estimates suggest that they perform worse at this problem size. For these reasons, we focus on low-order Trotter–Suzuki methods here and leave the task of fully costing out these alternative methods for future work.
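The Trotter–Suzuki splitting can be illustrated numerically: the product formula converges to the exact propagator as the number of steps grows. A toy sketch with two non-commuting single-qubit terms standing in for pieces of a molecular Hamiltonian:

```python
import numpy as np

# First-order Trotter-Suzuki splitting:
#   exp(-i(A+B)t)  ~  (exp(-iA t/n) exp(-iB t/n))^n
# The approximation error shrinks roughly as 1/n for fixed t.
X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

def expm_herm(H, t):
    """exp(-i*H*t) for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

def trotter(A, B, t, n):
    """n-step first-order Trotter approximation to exp(-i*(A+B)*t)."""
    step = expm_herm(A, t / n) @ expm_herm(B, t / n)
    return np.linalg.matrix_power(step, n)

exact = expm_herm(X + Z, 1.0)
for n in (1, 10, 100):
    print(n, np.linalg.norm(trotter(X, Z, 1.0, n) - exact, 2))
```

In the real algorithm each factor acts on a few spin orbitals of the second-quantized Hamiltonian, but the convergence behavior is the same.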

To achieve reliable results, a fault-tolerant implementation of the quantum algorithm is crucial. Encoding a single logical qubit in a number of physical qubits with a quantum error-correcting code, such as the surface code (45), protects the logical qubit against decoherence and other experimental imperfections.

Quantum error correction cannot directly protect arbitrary quantum operations, but it can protect a discrete set of gates, from which any continuous quantum operation can be approximated to within arbitrarily small error (46). Approximation takes two steps. First, the exponentials in the time evolution are decomposed into single-qubit rotations and so-called Clifford gates. In the surface code, which we consider here, Clifford gates can be implemented fault tolerantly. The single-qubit rotations, however, require approximation by a discrete gate set consisting of Clifford operations and at least one non-Clifford operation, usually taken to be the T gate, a rotation by π/4 about the *z* axis.
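The cost of this synthesis step scales only logarithmically in the target accuracy: known methods need on the order of 3·log2(1/ε) T gates per rotation approximated to accuracy ε. A rough model (the coefficient is typical of such methods and the additive constant is an illustrative assumption, not a value from the paper):

```python
import math

# Rotation synthesis into Clifford+T: modern methods need roughly
#   3 * log2(1/eps) + c
# T gates per single-qubit rotation approximated to accuracy eps.
# The additive constant c is an illustrative assumption.
def t_count_per_rotation(eps: float, c: float = 10.0) -> float:
    return 3.0 * math.log2(1.0 / eps) + c

def total_t_count(n_rotations: int, eps: float) -> float:
    """T gates for n_rotations rotations, each synthesized to accuracy eps."""
    return n_rotations * t_count_per_rotation(eps)

# Tightening the per-rotation accuracy tenfold adds only ~10 T gates:
print(t_count_per_rotation(1e-6))
print(t_count_per_rotation(1e-7))
```

This logarithmic scaling is why the non-Clifford T gates, rather than the rotations' accuracy, end up dominating the resource estimates below.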

### Resource Estimates.

We now estimate the costs of such simulations, focusing on two prototypical structures of FeMoco that exemplify the complexity of those that would naturally arise when probing the potential energy landscape of the complex. We first estimate the run time of the computation, assuming a quantum computer that can perform logical **T** gates at a fixed clock rate (see *SI Appendix* for the assumed gate times). We then determine the cost of performing this simulation fault-tolerantly using the surface code.

We aim to compute the energies to within chemical accuracy.

We consider three concrete implementations of the quantum algorithm and show the required gate counts in Table 1. In the “Serial” approach, the rotations are constrained to occur serially. In the “Nesting” approach (18), Hamiltonian terms that affect disjoint sets of spin orbitals are executed in parallel. In a third approach, programmable ancilla rotations (PAR), rotations are precomputed in parallel factories and then teleported into the circuit as needed (12). The overall cost of each approach is found by decomposing the rotations into Clifford and T gates, using ref. 48 for the serial case and ref. 49 for the other cases. If all gates are executed in series, then we estimate that the simulation will complete in under a year and use a small number of logical qubits. PAR can reduce the time required to several days, at the price of a far larger number of qubits devoted to parallel rotation factories.
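The interplay of gate counts and gate times can be captured in a back-of-the-envelope runtime model. All numbers below are hypothetical placeholders, not the estimates of Table 1:

```python
# Back-of-the-envelope runtime model: if sequential T gates dominate,
#   total time ~ (serial T-depth) * (time per logical T gate).
# The inputs below are hypothetical placeholders, not paper data.
SECONDS_PER_DAY = 86400.0

def runtime_days(t_depth: float, seconds_per_t_gate: float) -> float:
    """Wall-clock days to execute t_depth sequential logical T gates."""
    return t_depth * seconds_per_t_gate / SECONDS_PER_DAY

# e.g. 1e14 sequential T gates at 100 ns per logical T gate:
print(runtime_days(1e14, 100e-9))
```

Parallelization schemes such as Nesting and PAR attack the T-depth factor in this product, which is why they trade run time for extra qubits.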

These estimates can be improved, if necessary, using the techniques we provide in *SI Appendix*. Specifically, we provide simulation circuits that further reduce the depth of the quantum simulation.

### Resource Requirements with Quantum Error Correction.

We next add the overheads required to perform the simulation fault-tolerantly and summarize the costs for structure 1 in Table 2. The underlying resource calculations for the fault-tolerant overheads follow ref. 45. These costs depend on the physical error rates of the hardware, and we consider three cases, (*i*)–(*iii*), ranging from a near-term physical error rate to the progressively lower rates expected of future hardware.
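The dependence of the fault-tolerant overhead on the physical error rate can be sketched with the standard surface-code scaling model; the threshold and prefactor below are common rule-of-thumb values, not the constants used in ref. 45:

```python
# Rule-of-thumb surface-code scaling: logical error rate per logical
# qubit and code cycle
#   p_L ~ A * (p / p_th)^((d+1)/2)
# for code distance d, physical error rate p, and threshold p_th.
# A and p_th are illustrative rule-of-thumb constants.
def logical_error_rate(p: float, d: int,
                       p_th: float = 1e-2, A: float = 0.1) -> float:
    return A * (p / p_th) ** ((d + 1) / 2)

def physical_qubits_per_logical(d: int) -> int:
    """Rough count of data plus measurement qubits per logical qubit."""
    return 2 * d * d

# At a near-term physical error rate, each step of 2 in distance
# suppresses the logical error rate by another factor of p/p_th:
for d in (11, 17, 25):
    print(d, logical_error_rate(1e-3, d), physical_qubits_per_logical(d))
```

This exponential suppression in *d*, paid for with only a quadratic growth in physical qubits, is what makes the overheads in Table 2 large but tractable.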

We consider a high-level architecture for a quantum computer, sketched in Fig. 3, consisting of a classical supercomputer interfacing with a quantum computer that consists of a main quantum processor and dedicated separate T factories and rotation factories to produce T gates and to synthesize rotations, respectively. Only the main quantum processor is intended for general purpose computing, whereas the other factories are intended for special purposes.

The subdivisions in our architecture need not be physical. Indeed, we implicitly assume that they are logical in our analysis, but thinking about the device from this perspective reveals that quantum computer architectures can be tailored to quantum simulation. In particular, such hardware can exploit the fact that such simulations only require magic states when the algorithm is performing single-qubit rotations. Optimizing against a fixed architecture designed for quantum simulation can furthermore reduce the bandwidth needed to control the device, make compilation easier, and simplify communication within the device.

We first observe in Table 2 that the number of logical qubits in the main quantum processor is only on the order of a hundred, which translates into tens of thousands to millions of physical qubits; this is challenging but not out of reach. Most of the qubits are used in the T factories, each of which needs fewer physical qubits than the main quantum processor. The number of T factories needed to perform the serial calculation is small.

If we parallelize with the PAR approach, then these costs are more daunting: the number of T factories required increases substantially, as many rotations must be prepared in parallel.

To summarize, our estimates suggest that a quantum computer with on the order of a hundred logical qubits, supported by suitable T and rotation factories, will be able to perform these simulations within practical time frames.

## Discussion

Although, at present, a quantitative understanding of chemical processes involving complex open-shell species such as FeMoco in biological nitrogen fixation remains beyond the capability of classical computer simulations, our work shows that quantum computers, used as accelerators for classical computers, could be employed to elucidate this mechanism with a manageable amount of memory and time. The quantum computer is used here to obtain, validate, or correct the energies of intermediates and transition states and thus yields accurate activation energies for the various transitions. The space and time resources required to simulate FeMoco using the 54-orbital basis and nesting are comparable to those of running Shor's factoring algorithm on numbers of cryptographic relevance.

Parallelizing the quantum computation of the energy landscape will be crucial to providing answers within a timeframe of several days instead of several years. Bounding the number of repetitions of phase estimation needed to prepare the ground state from an initial ansatz remains an open problem (see *SI Appendix*), and parallelism may often be needed to allow us to tolerate low success probability. Quantum computers therefore must be designed with a scalable architecture in mind and also built with the realization that constructing a single quantum computer is insufficient to solve such tasks. Instead, we should aim to have quantum computers that can be built en masse, because clusters of quantum computers will be needed to scan over the many structures that need to be examined to identify and estimate all important reaction rates (25).

Finally, chemical reactions that involve strongly correlated species that are hard to describe by traditional multiconfiguration approaches are not limited to nitrogen fixation: They are ubiquitous. They range from C–H bond-activating catalysts, to catalysts for hydrogen and oxygen production and for carbon dioxide fixation and transformation into industrially useful compounds, to photochemical processes. Given the economic and societal impact of chemical processes ranging from fertilizer production to polymerization catalysis and clean energy processes, the importance of a versatile, reliable, and fast quantum chemical approach powered by quantum computing can hardly be overemphasized.

## Footnotes

^{1}M.R. and N.W. contributed equally to this work.

^{2}To whom correspondence should be addressed. Email: mtroyer{at}microsoft.com.

Author contributions: M.R., N.W., K.M.S., D.W., and M.T. designed research; M.R., N.W., K.M.S., D.W., and M.T. performed research; M.R., N.W., K.M.S., D.W., and M.T. contributed new tools; N.W., K.M.S., D.W., and M.T. analyzed data; and M.R., N.W., K.M.S., D.W., and M.T. wrote the paper.

The authors declare no conflict of interest.

This article is a PNAS Direct Submission. D.P.D. is a guest editor invited by the Editorial Board.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1619152114/-/DCSupplemental.

## References

- Dykstra C, Frenking G, Kim KS, Scuseria GE
- Weymuth T, Couzijn EPA, Chen P, Reiher M
- Zalka C
- Lidar DA, Wang H
- Whitfield JD, Biamonte J, Aspuru-Guzik A
- Jones NC, et al.
- Peruzzo A, et al.
- Wecker D, Bauer B, Clark BK, Hastings MB, Troyer M
- McClean JR, Babbush R, Love PJ, Aspuru-Guzik A
- Hastings MB, Wecker D, Bauer B, Troyer M
- Poulin D, et al.
- Wecker D, Hastings MB, Troyer M
- Babbush R, McClean J, Wecker D, Aspuru-Guzik A, Wiebe N
- Bauer B, Wecker D, Millis AJ, Hastings MB, Troyer M
- Zhang LM, Morrison CN, Kaiser JT, Rees DC (2015) Nitrogenase MoFe protein from *Clostridium pasteurianum* at 1.08 Å resolution: Comparison with the *Azotobacter vinelandii* MoFe protein. Acta Crystallogr D 71:274–282.
- Spatzal T, et al.
- Lancaster KM, et al.
- Bergeler M, Simm GN, Proppe J, Reiher M
- Warshel A
- Olsson MH, Mavri J, Warshel A
- Helgaker T, Jørgensen P, Olsen J
- Pulay P, Hamilton TP
- Stein CJ, Reiher M
- Stein CJ, Reiher M
- Bofill JM, Pulay P
- Hedegård ED, Knecht S, Kielberg JS, Jensen HJA, Reiher M
- Berry DW, Ahokas G, Cleve R, Sanders BC
- Childs AM, Wiebe N
- Berry DW, Childs AM, Cleve R, Kothari R, Somma RD
- Babbush R, et al.
- Babbush R, et al.
- Kivlichan ID, Wiebe N, Babbush R, Aspuru-Guzik A
- Nielsen MA, Chuang IL
- Bocharov A, Roetteler M, Svore KM
- Selinger P
- Sarma SD, Freedman M, Nayak C