P/NP, and the quantum field computer

Contributed by Michael H. Freedman
Abstract
The central problem in computer science is the conjecture that two complexity classes, P (polynomial time) and NP (nondeterministic polynomial time—roughly those decision problems for which a proposed solution can be checked in polynomial time), are distinct in the standard Turing model of computation: P ≠ NP. As a generality, we propose that each physical theory supports computational models whose power is limited by the physical theory. It is well known that classical physics supports a multitude of implementations of the Turing machine. Non-Abelian topological quantum field theories exhibit the mathematical features necessary to support a model capable of solving all #P problems, a computationally intractable class, in polynomial time. Specifically, Witten [Witten, E. (1989) Commun. Math. Phys. 121, 351–391] has identified expectation values in a certain SU(2) field theory with values of the Jones polynomial [Jones, V. (1985) Bull. Am. Math. Soc. 12, 103–111] that are #P-hard [Jaeger, F., Vertigan, D. & Welsh, D. (1990) Math. Proc. Camb. Philos. Soc. 108, 35–53]. This suggests that some physical system whose effective Lagrangian contains a non-Abelian topological term might be manipulated to serve as an analog computer capable of solving NP-hard or even #P-hard problems in polynomial time. Defining such a system and addressing the accuracy issues inherent in preparation and measurement is a major unsolved problem.
It is known that the partition function, correlation functions, and other observables in field theory and statistical mechanics are generally hard to compute. In idealized models the level of hardness can often be established within the computational hierarchy. We find that for topological quantum field theories (TQFTs), where the combinatorial nature of the propagation allows a complete analysis, hardness and noncommutativity are tightly linked. More broadly, we propose that a physical system S with a non-Abelian topological term in its Lagrangian may have observables that are NP-hard (or even #P-hard) functions of their preparation parameters. The topological character of S is consistent with the exact preparation of a discrete initial state, e.g., a knot type. A central difficulty will be extracting the “hard” information with measurements of limited accuracy. The accuracy challenge may have been met, at least theoretically, by certain models of quantum computation (1, 2, ∗) in which (i) a multibit state projection is read out a spin at a time and (ii) the algorithm employed ensures the state projection will, with high probability, contain useful information. Solving the accuracy problem for S could, in principle, lead to an analog computer, based on preparation and observation of S, capable of solving all #P problems in polynomial time.
We begin with a brief sketch of computational concepts. The Turing machine T represents an abstraction of the principles of mechanical computation. The machine consists of a head and a tape. The head is capable of being in one of a finite number of “internal states” {q_{i}} and can read and overwrite a symbol ∈ {S_{j}} from a finite set of symbols and then shift one block left or right along the tape. It contains a finite internal program that directs its operations.
Consider a problem Q, with a yes/no answer, for which infinitely many instances exist, for example, the satisfiability of Boolean formulae. One asks: what is the fastest possible running time, as a function of the size of the instance, which a fixed program might achieve in correctly answering all of the instances of Q? One says that Q is in the class P if there is a program whose running time is bounded by a polynomial function of the number n of bits required to describe the instance I of Q on the Turing machine’s tape. One says Q is in NP if there is an “existential” program operating on I plus a number of “guess bits” that correctly answers all instances I of Q in polynomial time. The existential program is deemed to say “yes” iff some setting of the guess bits returns a “yes” answer in polytime. Clearly P ⊆ NP. It is easy to map NP into an apparently larger class of questions, #P, which asks of a given NP algorithm (with a fixed polynomial-time cutoff): how many settings of the guess bits lead to “yes”?
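To make the guess-bit picture concrete, here is a small illustrative sketch (ours, not the paper's; the CNF encoding and the name count_satisfying are chosen purely for illustration): a brute-force #SAT counter that tries all 2^{n} settings of the guess bits, exactly the exponential search that the P ≠ NP conjecture suggests cannot in general be avoided.

```python
from itertools import product

def count_satisfying(n_vars, formula):
    """Brute-force #SAT: try every setting of the n 'guess bits'.

    `formula` is a CNF formula given as a list of clauses; each clause is
    a list of nonzero integers, where k means variable k-1 is true and -k
    means it is false (a DIMACS-style convention used here for illustration).
    """
    count = 0
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in formula):
            count += 1
    return count

# (x1 OR x2) AND (NOT x1 OR x2): satisfied by (F,T) and (T,T)
print(count_satisfying(2, [[1, 2], [-1, 2]]))  # 2
```

The decision question "is the count nonzero?" is the NP problem; returning the count itself is the corresponding #P problem.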
The word “complete,” following a class, is used to denote a problem Q̄ within the class which is maximally hard in the sense that any other problem in the class can be solved—again in polytime—with an oracle giving, in a single clock cycle, solutions of Q̄. The word “hard,” following a class, denotes a problem not necessarily in the class, but to which all problems in the class reduce (again in polytime). For example, counting the number of Boolean satisfactions is the paradigmatic #P-complete problem.
It is a theorem (3) that evaluating the Jones polynomial at any primitive rth root of unity ξ, for r ≥ 5, is #P-hard. This ultimately employs a chain of reasoning relating the Jones polynomial to the Tutte polynomial, to the chromatic polynomial, to Boolean satisfiability, and finally to the operation of a Turing machine. The Jones polynomial (4) is a one-variable (t) polynomial invariant of smooth or P.L. knots in Euclidean 3-space which obeys the skein relation:

t^{−1}J(L_{+}) − tJ(L_{−}) = (t^{1/2} − t^{−1/2})J(L_{0}), [1]

where L_{+}, L_{−}, and L_{0} denote knot or link diagrams which are identical except near one point, where a positive crossing, a negative crossing, and the oriented smoothing, respectively, are drawn. Given a normalization J(unknot) = 1, Eq. 1 uniquely defines J as a “polynomial” with nonzero coefficients at finitely many positive and negative whole powers of t^{1/2} on all knots and links. The procedure for evaluation involves calculating a tree of resolutions and seems to be exponentially large in the number n of crossings in the knot diagram. This number n, or rather some small polynomial function of it, can be taken as the number of bits necessary to specify the knot, so the input size of each instance is easily quantified. Because the coefficients of J are ordinary integers, the output J(ξ) is a cyclotomic integer which may be regarded as an r-vector of integers which, with little expansion of size, can be encoded as a single integer m. Interpreted in this way, the assertion that calculating J(ξ) is #P-hard makes sense and is a theorem.
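As an illustrative aside (ours, not the paper's), the skein relation can be cascaded by hand on small examples: resolving one crossing of the right-handed trefoil yields the unknot and the Hopf link, and the Hopf link in turn resolves into unknots. The sketch below performs this cascade symbolically; the helper skein_plus simply solves Eq. 1 for J(L_{+}).

```python
from sympy import sqrt, symbols, expand, simplify

t = symbols('t', positive=True)

# Skein relation (Eq. 1): t**-1 * J(L+) - t * J(L-) = (sqrt(t) - 1/sqrt(t)) * J(L0),
# solved here for J(L+).
def skein_plus(J_minus, J_zero):
    return expand(t * (t * J_minus + (sqrt(t) - 1 / sqrt(t)) * J_zero))

J_unknot = 1

# Unknot with one positive kink: L+ = L- = unknot, L0 = two-component
# unlink, so J(unlink) = (t**-1 - t) / (sqrt(t) - 1/sqrt(t)).
J_unlink = simplify((t**-1 - t) / (sqrt(t) - 1 / sqrt(t)))

# Positive Hopf link: L- = two-component unlink, L0 = unknot.
J_hopf = skein_plus(J_unlink, J_unknot)

# Right-handed trefoil: L- = unknot, L0 = positive Hopf link.
J_trefoil = skein_plus(J_unknot, J_hopf)

print(expand(J_trefoil))  # the Jones polynomial t + t**3 - t**4
```

Each crossing that is resolved doubles the tree of diagrams, which is why this direct evaluation is exponential in the crossing number.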
Topological Quantum Field Theories
From a different direction (5), the Jones polynomial is connected to TQFT. This notion of a field theory capable of assigning scalar invariants to closed manifolds or knots, i.e., one not depending on the background geometry, has emerged through work of Witten’s (see ref. 6). Formal properties of Feynman integrals in this theory imply the skein relation, and from this it is calculated that the expectation value W_{k}(K) of an “observable” A_{K} associated to a Wilson loop K in SU(2) Chern–Simons field theory satisfies

W_{k}(K) = J_{K}(ξ), ξ = e^{2πi/(k+2)}, [2]

where W_{k} is formally given in terms of Feynman integrals as

W_{k}(K) = ∫ 𝒟A e^{ikCS(A)} tr Hol_{K}(A). [3]

We have written the Lagrangian as kCS(A). In full we have

CS(A) = (1/4π) ∫ tr(A ∧ dA + (2/3) A ∧ A ∧ A). [4]

Much thought has been devoted to repackaging the information in {W_{k}} in terms of the perturbative Chern–Simons invariants C_{ℓ}:

W_{k} ∼ Σ_{ℓ≥0} C_{ℓ}k^{−ℓ}, [5]

being an asymptotic expansion for W_{k}. In ref. 7 this expansion is written as a weighted sum over representations ρ of exponential sums having the form

e^{2πikCS(ρ)} Σ_{ℓ} a_{ℓ,ρ}k^{−ℓ}. [6]

We have simply collected terms into a conceptually simple form at the risk of combining terms with distinct topological significance. For ρ trivial, the coefficients a_{ℓ,ρ} = a_{ℓ} have been identified (7) with Vassiliev’s (8) finite-type invariants. The a_{ℓ} are individually computable in polynomial time in the complexity of K (9), but known computations are exponential time in ℓ. From the discussion of accuracy in the next section, even if {a_{ℓ}} is bounded, the a_{ℓ} for ℓ up to poly(#K) must be computed to determine W_{k}(K) exactly, where #K is the crossing number of the knot K. This appears to be an exponentially difficult task as a function of the crossing number. Thus it is the nonperturbative Witten invariant W_{k}(K) which represents the computational power inherent in the physical theory.
Although TQFTs with non-Abelian gauge groups have been proposed in connection with the quantum Hall effect and anyons (10), they necessarily display physically peculiar features: (i) the theory is (2+1)-dimensional, so it must describe surface effects; (ii) the spatial Hilbert space is finite-dimensional; (iii) since it is topological, the theory is “static” in that the Hamiltonian H, when derived by the standard rules, vanishes; and finally, (iv) the Lagrangian is topologically invariant, as the metric does not explicitly enter in Eq. 3 [although interpretation of the integral requires, after Polyakov (11), a regularization which brings in a metric quantity, writhe(K), which must be subtracted to achieve the identification with the Jones polynomial in Eq. 2].
It is this peculiar rigidity of a TQFT which simplifies the calculation of observables to the point where it is governed by a skein relation as in Eq. 2. This simplification and discretization has allowed computer science to evaluate the computational power of such theories. It was necessary to have, in addition to the Feynman integral “definition” of W_{k}(K), the precise (but exponentially slow) skein-theoretic method for evaluating W_{k}(K) to discover the computational class, #P-hard, of the output. A physical theory lacking a discrete closed-form evaluation of observables is not as easily placed in the computational hierarchy. This is why we focus on topological theories.
Consider the “evidence” tabulated below on the “computational power” of TQFTs. The striking pattern observed in Table 1 is that the TQFTs with an Abelian gauge group yield polynomial-time invariants, whereas all the non-Abelian TQFTs yield #P-hard information. Within this class of examples, the Abelian theories are free (the Lagrangian is Gaussian), whereas the non-Abelian theories contain a higher-degree (cubic) self-interaction term A ∧ A ∧ A (see Eq. 4) in the Lagrangian. Thus our evidence leaves open the possibility that an Abelian field theory with particles, and hence cubic or higher terms, also performs a #P-hard calculation. In this direction, the duality between Donaldson theory (12) and Seiberg–Witten theory (13) shows that the information present in a matterless SU(2) theory can also be found in a U(1) theory containing an additional (spinor) field. Thus the broadest possible interpretation of Table 1 is that any interaction term in the Lagrangian confers computational power. This could be tested by finding a combinatorial evaluation of an Abelian TQFT incorporating particles.
Complexity of Physical Systems and Accuracy of Measurement
We take the view that a physical system S (and even an idealized one, such as those considered in the table) can be prepared, up to some level of accuracy, in a state Ψ described by input bits B = b_{1}, . . . , b_{n}. A measurement (or several measurements), each assumed to take only one tick of our computational clock, produces a number (or numbers), again to some accuracy, represented by bits B′ = b′_{1}, . . . , b′_{m} of output. It is assumed that B′ is a (perhaps statistically) reproducible function of B. We regard the complexity class of the function B′(B) as giving a lower bound on the complexity of the system. If B′(B) is, for example, #P-hard, then adding S as an oracle to the usual (Turing) theory of computation collapses the “polynomial hierarchy” (16). If S is a physically practicable system, it would be very attractive as the core of an analog computer.
A simple analogy will clarify the concept. A transformer has a primary solenoid, which we treat as a simple closed loop α ⊂ ℝ^{3}, and a secondary loop β ⊂ ℝ^{3}. If an alternating current is sent through α, Maxwell’s equations tell us that a current will be generated in β which is proportional to the linking number link(α, β). This fact could serve as the core for the design of an analog computer, but not a good one. The quantity being computed by the physics, link(α, β), is directly computable from the link projection (count crossings of α over β according to sign) in linear time—certainly no longer than it would have taken to configure the link α ∪ β as input. One is tempted to assign the weakness of this computation to the fact that electromagnetism is an Abelian theory. In contrast, if some “SU(2) current” could be driven through α to produce information about the Jones polynomial of the link α ∪ β, this would be much more promising.
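A minimal sketch (the function name and input convention are ours, for illustration) of why link(α, β) is computationally weak: given the diagram's crossings of α over β, each with its sign, the linking number is just their signed count, so the "computation" is linear in the size of the input.

```python
def linking_number(over_crossings):
    """Linking number of a two-component link from its diagram.

    `over_crossings` lists the crossings where alpha passes over beta,
    each entered as +1 or -1 by the right-hand sign convention; their
    signed count is link(alpha, beta), computed in linear time.
    """
    return sum(over_crossings)

# Positive Hopf link: alpha passes over beta once, positively.
print(linking_number([+1]))  # 1
```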
The example of Eq. 2, W_{k}(K) = J_{K}(ξ), illustrates the role of finite accuracy in the interpretation of a continuous measurement. The breadth b(J_{K}) of J_{K} is defined to be the difference between the highest and lowest powers of t. Both b(J_{K}) and the number of bits in each coefficient of J_{K} satisfy an upper bound which is linear in #K, the crossing number of K: b(J_{K}) ≤ 4·#K and log_{2}(coefficient) ≤ #K + const. Setting r = b + 1, there is a linear system which solves for the coefficients {c_{a}, 1 ≤ a ≤ r} of J_{K} in terms of {J(ξ^{a}), 1 ≤ a ≤ r}, where ξ = e^{2πi/r}. Solving for {c_{a}} depends on inverting the Vandermonde matrix (ξ^{ij}, 1 ≤ i, j ≤ r). Since the matrix is unitary up to a scale, with determinant det(ξ^{ij}) = i^{r(r−1)/2} r^{r/2}, this process is numerically stable. An approximate set of “observations” {J̃(ξ^{a}), 1 ≤ a ≤ r} yields approximate coefficients {c̃_{a}}, which can be rounded to the nearest integers; if the observations were sufficiently accurate, these will be {c_{a}}. Substitution of ξ^{a} now yields the exact {J(ξ^{a})}. Thus the number of reliable bits in each observation required to error-correct to the exact values of J(ξ^{a}) is only linear in the size of the problem #K. In other contexts, such as quantum computing (5), this scaling of accuracy requirements has been considered reasonable, although a better goal for the accuracy of individual measurements is poly(log #K) bits or even a small constant number of bits. Because r ≈ 4·#K observations are required to solve the linear system, a total of quadratically many bits must be collected before all observational errors can be corrected. A superior error-correcting scheme might be based on measuring nonlinear functions of {c_{a}} determined by “knot cabling” or other topological constructions. The goal is to find a polynomially large set of constant-accuracy measurements from which {c_{a}} can be inferred.
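The rounding argument can be sketched numerically (an illustration under assumptions we chose: a small stand-in vector of integer "coefficients" and Gaussian observation noise, not actual Jones data). Because the Vandermonde matrix of rth roots of unity is unitary up to the scale √r, inverting it does not amplify the observational errors, and rounding recovers the integer coefficients exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

c = np.array([1, 0, -3, 2, 0, 1, -1])          # stand-in integer coefficients
r = len(c)
xi = np.exp(2j * np.pi * np.arange(r) / r)     # the r-th roots of unity
V = np.vander(xi, r, increasing=True)          # V[a, j] = xi_a ** j

# Noisy "observations" of the evaluations at the roots of unity.
observed = V @ c + 0.01 * rng.standard_normal(r)

# Inverting V is numerically stable: V / sqrt(r) is unitary.
c_tilde = np.linalg.solve(V, observed)
c_recovered = np.rint(c_tilde.real).astype(int)  # round to nearest integers

print(np.array_equal(c_recovered, c))            # True: exact recovery
```

Re-evaluating the recovered integer coefficients at the ξ^{a} then reproduces the exact values that the noisy measurements only approximated.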
We remind the reader that if no limit (or cost) on accuracy is imposed in a system with single-step integer multiplication and “bit output,” then it is a result of ref. 17 that NP-complete problems can be solved in polynomial time. Realistic computational schemes must account for the accuracy of observation.
To be computationally interesting, it appears that the Lagrangian should contain terms beyond quadratic. In QED this occurs with the inclusion of particles. For example, the creation of a photon (A) through the annihilation of a positron (Ψ̄)–electron (Ψ) pair comes from the Ψ̄AΨ term inside a Lagrangian density Ψ̄(∂̸ − A)Ψ, which is cubic in the three fields taken together. The perturbative evaluation of an expectation value in such a theory is made by summing over (a technically divergent series indexed by) Feynman diagrams. However, the metric dependence of ∂̸ makes the theory nontopological and more difficult to relate to decision problems. The most interesting physical candidates seem to be solid-state systems describable by Lagrangians containing topological terms [refs. 10 and 18 (http://xxx.lanl.gov/abs/quant-ph/9707021)].
The Relation with Quantum Computation
Quantum computing has developed as an abstract variant of computer science with roots in early results on the universality of reversible computation (19) and ideas of Feynman (20). It received wide attention with results of Shor† on applications, e.g., probabilistic factoring in polytime, of potential importance to cryptography. Recently, it has been shown that any problem which classically requires a search of 2^{n} cases can be accomplished in roughly 2^{n/2} steps on a quantum computer, and that this speedup is essentially optimal.‡
Quantum computing (QC) and our proposal for “quantum field computing” (QFC) share the idea that a quantum system, exploiting superposition (in the first case, of states, and in the second case, of complex weights of fields in the Lagrangian formalism), can explore an exponentially large computational tree. In quantum computing, after the exploration is completed, the report-back is a single eigenvalue of an observable which depends statistically on a cacophony of voices. Some deconvolution is required to extract useful information. In QFC the #P-hard information is an average, or “expectation value,” rather than a particular eigenvalue, so the interpretation is direct. In the presence of accuracy limitations, many approximate expectation values with known algebraic relations might be measured and used together to solve for the hard information.
Formally, QC occurs on a new type of Turing machine which can write and read superpositions of symbols, i.e., vectors in a finite-dimensional Hilbert space H, and where the read → write transformations dictated by the machine’s internal program are always unitary. The benefit of superposition, and the departure from classical probabilistic computation in QC, derives from the creation of “entanglement of states” [e.g., the Bell state (1/√2)|↑⟩ ⊗ |↓⟩ − (1/√2)|↓⟩ ⊗ |↑⟩], which must be built up through successive elementary unitary transformations (called gates) and defended against decoherence. In QFC, superposition (over all background fields A) is postulated in the Lagrangian formulation to occur spontaneously. It is this superposition that QFC seeks to exploit.
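As an illustrative sketch (ours, not the paper's), the phrase "built up through successive elementary unitary transformations" can be seen in a few lines of linear algebra: a Hadamard gate followed by a CNOT turns the product state |00⟩ into the entangled Bell state (|00⟩ + |11⟩)/√2, a sibling of the singlet state quoted above.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate (unitary)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # CNOT gate in the basis
                 [0, 1, 0, 0],                 # |00>, |01>, |10>, |11>
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi = np.array([1.0, 0, 0, 0])                 # product state |00>
bell = CNOT @ np.kron(H, I2) @ psi             # entangled Bell state

print(np.round(bell, 3))                       # amplitudes 1/sqrt(2) on |00> and |11>
```

The resulting vector cannot be written as a tensor product of two single-qubit states, which is exactly the entanglement that QC must build gate by gate and that QFC postulates to arise spontaneously in the sum over background fields.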
Acknowledgments
I would like to express thanks to Peter Doyle, Dan Freed, David Meyer, Curt McMullen, and Cliff Taubes for stimulating conversations on the material presented. This work was partially supported by the University of California, San Diego, by National Science Foundation Grant DMS 9501105, and by Microsoft Research.
Footnotes

* Bernstein, E. & Vazirani, U., “Quantum complexity theory” in Proceedings of the 25th ACM Symposium on Theory of Computing, May 16–18, 1993, San Diego, CA, pp. 11–20.

↵† Shor, P., Proceedings of the 35th Annual IEEE Symposium on Foundations of Computer Science, Nov. 20–22, 1994, Santa Fé, NM, pp. 124–134.

↵‡ Grover, L., “A fast quantum mechanical algorithm for database search” in Proceedings of the 28th Annual ACM Symposium on Theory of Computing, May 22–24, 1996, Philadelphia, PA, pp. 212–219.
ABBREVIATION: TQFT, topological quantum field theory.
 Copyright © 1998, The National Academy of Sciences
References

Schwarz, A.
Jaeger, F., Vertigan, D. & Welsh, D. (1990) Math. Proc. Camb. Philos. Soc. 108, 35–53.
Atiyah, M.
Bar-Natan, D. & Witten, E.
Vassiliev, V. (1992) Complements of Discriminants of Smooth Maps: Topology and Applications, translated from the Russian by B. Goldfarb, Translations of Mathematical Monographs (Am. Math. Soc., Providence, RI), Vol. 98.
Bar-Natan, D.
Wilczek, F.
Polyakov, A. M.
Donaldson, S. & Kronheimer, P.
Freed, D. & Quinn, F.
Manoliu, M. (1998) J. Math. Phys., in press.
Toda, S.
Schönhage, A.
Kitaev, A. Y. (1997) Fault-Tolerant Quantum Computation by Anyons, quant-ph/9707021, preprint.
Bennett, C.