
Quantum Supremacy or Classical Control?

October 3, 2012


Final summations of the Kalai-Harrow debate


William Unruh is a professor in the Theoretical Physics group of the University of British Columbia. He is known for the Unruh effect, which predicts that an accelerating observer will detect thermal radiation that an inertial observer will not. This does not contradict relativity, but experimental tests have so far produced debatable results. He also wrote a paper soon after Peter Shor’s famous algorithm appeared in 1994, showing how quantum noise would swamp a direct implementation of it. This was an early spur for quantum error correction, and Unruh is said to have withdrawn his skepticism after the quantum fault-tolerance theorem was established.

Today we conclude our debate between Gil Kalai and Aram Harrow on the scalability of quantum computers. What is our computational world—our mundus computationalis—one with quantum supremacy or classical control?

Unruh’s Science Canada profile (see also this Q&A) includes this quotation in his own voice:

“William Blake said ‘If a fool would persist in his folly, he would become wise.’ I’m grateful that society allows me and other scientists to persist.”

On behalf of Aram we note that Unruh has not persisted in his critical stance—at least the quantum computing notes on Unruh’s group webpage are skepticism-free. On behalf of Gil, we note the value of persisting to make us wiser, besides the possibility of ultimately being right. We have persisted since January, and on behalf of Dick, I (Ken) thank both principals.

Our debate ends with a summation by Gil and then by Aram. The first part of the summation turned into a renewed discussion of Conjecture 1. In this post Gil begins by responding to Aram’s two other detailed defenses in our first round of the debate, posts II and III. He then reviews his original conjectures in light of these responses, our debate’s second round in which one conjecture was partially refuted, and copious back-and-forth discussion in many comments to these posts. Then Aram gives his summation, updating the considerations that answered Unruh’s questions long ago. We conclude with thanks and some “what could be next?” considerations.

Gil Kalai: An overall look at Aram’s posts

I will continue to discuss the arguments that Aram made in his interesting posts. Aram raised important technical and conceptual issues, and some of the conceptual differences between our points of view will certainly stay unsettled for quite a while. However, with the exception of Conjecture C as we discussed earlier, the technical arguments against my conjectures raised in Aram’s posts are either weak to start with or incorrect.

Aram’s classical fault-tolerance test indeed poses a major challenge to quantum information theory, but I regard my explanation that is based on Conjecture 1 not only as adequate but also as advantageous compared to other explanations. Aram’s recent study of Conjecture 1 is useful, but his formulation is incomplete and his interpretation of the conjecture is incorrect. We discussed these matters in a previous post.

The concern that my conjectures are in conflict with the linearity of quantum mechanics is wrong—as exemplified by (among other things) Aram’s own shadow-qubit example. The technical claim from the second post that pairwise positive correlation is insufficient to imply error-synchronization is incorrect. Aram’s thought experiments from the third post are interesting but do not support his claim. The specific claims that correlations of the kind predicted by my conjectures are unphysical are also incorrect, and this general issue reflects an important difference over the appropriate modeling of noise. We will discuss some of these matters now.

Correlated Errors and Entanglement

A quick review

Central to my argument are two assertions: that noise can be highly correlated, and that errors will be highly correlated when we try to create highly entangled states. Before discussing Aram’s objections to my conjectures I would like to emphasize two important points.

The first is that the ultimate obstacle to universal quantum computers is simply that the rate of noise will be too high. Correlation is an important part of the picture, but the first-order obstacle is the rate of noise in terms of qubit errors. High correlation and many qubit errors go hand in hand: it is reasonable to assume that the rate of noise in terms of trace distance is stable over time for quantum evolutions, since trace distance is invariant under unitary operations; but when the noise is very correlated, the rate in terms of qubit errors scales up linearly (with the number of qubits) compared to the rate in terms of trace distance.
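
As a quick numerical illustration of the invariance claim (a minimal sketch with randomly generated states; the code and names are illustrative only): the trace distance between two density matrices is unchanged when both are conjugated by the same unitary.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    # Haar-like random unitary via QR decomposition of a complex Gaussian matrix
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def random_density_matrix(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T                 # positive semidefinite
    return rho / np.trace(rho)           # unit trace

def trace_distance(rho, sigma):
    # (1/2) * trace norm of the difference (sum of singular values)
    return 0.5 * np.linalg.norm(rho - sigma, ord='nuc')

d = 8                                    # e.g. three qubits
rho, sigma = random_density_matrix(d), random_density_matrix(d)
U = random_unitary(d)

before = trace_distance(rho, sigma)
after = trace_distance(U @ rho @ U.conj().T, U @ sigma @ U.conj().T)
print(np.isclose(before, after))         # True: unitaries preserve trace distance
```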

The second point is the distinction between special-purpose devices and general-purpose devices which was discussed in the previous post. I regard my conjectures as describing noisy special-purpose quantum devices, and this description will explain why general-purpose quantum devices cannot be built.

Many of the arguments and intuitions against my conjectural picture of noisy quantum devices are based on taking for granted a scenario of a universal quantum device. For example, Peter Shor commented:

“…the universe needs to know what code you are using in order to foil you. This attributes both more intelligence and more malice to the universe than I am willing to believe that it has.”

This makes sense if the code is created on a universal machine. But if every code (that can be created at all) requires a specific machine, then the universe needs neither malice nor intelligence to tell them apart.

The main objections to my conjectures

Conjectures 3 and 4 proposed an explicit relation between the intended state and the noise. Conjecture 3 asserts that a pair of entangled qubits is subject to errors (information leaks) with positive correlation. Conjecture 4 asserts that qubits whose state is very entangled are subject to error-synchronization, namely there is a substantial probability for errors affecting a large fraction of all qubits. In my papers I present a formal mathematical description of Conjectures 3 and 4 and point out that a certain strong form of Conjecture 3 implies Conjecture 4. One idea that was raised in the discussion (especially with commenter matt) is that error-synchronization follows from Conjecture 1, for every quantum error-correction code that genuinely depends on {n} qubits.

The main objections to my conjectures were:


{\bullet} The conjectured properties of noise violate the linearity of quantum mechanics.

To the best of my judgement this is not true. Let me mention three points:

  1. The class of Hamiltonian quantum evolutions (namely smoothed Lindblad evolutions) that describe the “bad noise” forms a subclass of all Hamiltonian evolutions of system and environment.

  2. Aram described a quantum-theoretic scenario (shadow qubits) that may support the phenomenon I conjecture. The same applies to variations on John Preskill’s models.

  3. There are strong reasons to believe that the conjectures do apply in many cases, e.g., for small quantum computers, say those with at most 20 qubits. Why isn’t this already a violation of the linearity of quantum mechanics?

My conjectures can be regarded as describing nonlinear behavior of decoherence that is fully supported by quantum mechanics. The linearity objection also reflects a confusion between the behavior of special-purpose machines and that of general-purpose machines.


{\bullet} The conjectured properties of noise violate “known physics.”

My response to this important point raises the central issue of choosing between two approaches for modeling:

  1. The first approach is that in modeling open quantum systems we should take “known physics” as part of the noise model.

It turns out that imposing the “known physics” on the noise leads to surprising “unknown physics” that we would then be able to create by controlled quantum evolutions via quantum fault-tolerance. Joe Fitzsimons’ 2-locality suggestions as well as John Preskill’s model represent this approach.

  2. The second approach is to let the “known physics” (both for the noise and for the signal) emerge from the model itself.

My modeling follows the second approach. I expect that my models of noise will confirm these “known physics” insights both for the noise and for the signal. I find the approach appealing since there is no reason to believe that when we have a natural quantum system the laws that describe the interaction of the system with the environment will be different from those that describe the interactions within the system.


Joe Fitzsimons’s 2-locality argument was a specific attempt to explain why my conjectures violate known physics. It turned out, however, that a strict version of it is too strong to be realistic (see this comment by Preskill), while a weak form of 2-locality is too weak to be in conflict with my conjectures. But again, my main objection to Joe’s attempt was different: insights of this kind should emerge from the model rather than being imposed on the model.

While I disagree with these two out-of-hand reasons to reject my conjectures, it is quite possible that my conjectures are simply false. They should be carefully examined on a case-by-case basis. Be that as it may, my conjectures are proposed also as a formal description for non-FT quantum evolutions even in a reality that allows or even demonstrates quantum fault-tolerance.

Towards a theory of non-FT quantum evolution

Switching the Quantum FT option OFF

Suppose that we have universal quantum computers, on every desk and every lap. One way to think about my conjectures is as follows. Let the quantum computers be equipped with a quantum-FT on/off switch, such as noise-cancelling earphones have. My conjectures describe the behavior of noise when the quantum-FT switch is off. Moreover, many of the attempts to build quantum computers or to create experimental quantum evolutions do not apply fault-tolerance schemes to start with, so quantum computers with the quantum-FT turned off should suffice to describe them.

A basic noise example

The most basic example of noise when the quantum FT is switched off is the following.

You have an algorithm described by a unitary operator {U}. Let {E} be a standard simple noise operation. Consider the noise operation {UEU^{-1}} applied at the end of the evolution. In words, this means that the noise at the end of the evolution is like applying {E} at the beginning and then running the unitary operator {U} noiselessly. My conjectures say that some part of the overall noise behaves just like this example.
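
Here is a minimal numerical sketch of this example (with a random unitary standing in for the algorithm {U} and a unitary error standing in for {E}; the names are illustrative only): applying {UEU^{-1}} after the evolution gives exactly the same overall operator as applying {E} first and then running {U} noiselessly.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    # Haar-like random unitary via QR decomposition of a complex Gaussian matrix
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

d = 8                        # e.g. three qubits
U = random_unitary(d)        # the intended algorithm
E = random_unitary(d)        # a simple (unitary) noise operation, e.g. a Pauli error

noise_at_end = (U @ E @ U.conj().T) @ U    # U E U^{-1} applied after the evolution
noise_at_start = U @ E                     # E applied first, then U run noiselessly

print(np.allclose(noise_at_end, noise_at_start))   # True
```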

Smoothed Lindblad evolutions

Start with a general (time-dependent) Hamiltonian evolution defined between times {t=0} and {t=1} on the system and the environment. Let {{\cal H}} denote the Hilbert space describing the system. The evolution can be described infinitesimally via the sum of two superoperators {A_t} and {B_t}, where {A_t} describes a unitary evolution on {{\cal H}} and {B_t} describes the “noise.” (I assume that {B_t} is Lindbladian and corresponds to a POVM measurement. But we can consider more general or more restricted scenarios.) Let {U_{s,t}} be the unitary operator describing the evolution between times {s} and {t}. Note that unitary operators on {{\cal H}} induce an action on super-operators.

Let {K} be a positive kernel defined on {[-1,1]}. (In order to include standard noise we can allow an atom at 0.) The smoothing operation replaces {B_t} by a weighted average of the 1-parameter family of super-operators of the form {U_{s,t}B_s U^{-1}_{s,t}}, with weight {K(s-t)}. (Here {s} varies over all times between 0 and 1; it seems important to take times in the future as well as times in the past.) The discrete-time model is especially simple. I propose smoothed-in-time noise of this kind as a formal description of noise “without fault-tolerance.” Namely, this type of approximation to pure evolutions expresses formally the idea that quantum fault-tolerance was not activated, and at the same time it will not allow fault-tolerance.
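
To make the discrete-time version concrete, here is a minimal sketch (the toy circuit, kernel, and function names are illustrative choices, not taken from my paper). Each step’s raw noise is taken to be a unitary Kraus operator {E_s}; the smoothed noise at step {t} averages the conjugated channels with Kraus operators {U_{s,t}E_s U^{-1}_{s,t}}, weighted by a Gaussian kernel over both past and future steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

d, T, sigma = 4, 6, 1.5                           # two qubits, six steps, kernel width
gates = [random_unitary(d) for _ in range(T)]     # U_1 .. U_T
errors = [random_unitary(d) for _ in range(T)]    # raw noise Kraus operators E_1 .. E_T

# cumulative evolutions V_t = U_t ... U_1 (with V_0 = identity), so U_{s,t} = V_t V_s^{-1}
V = [np.eye(d)]
for U in gates:
    V.append(U @ V[-1])

def smoothed_kraus(t):
    """Kraus operators of the smoothed noise channel acting at step t (1-indexed)."""
    steps = np.arange(1, T + 1)
    weights = np.exp(-((steps - t) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum()                      # normalized kernel K(s - t)
    ops = []
    for s, w in zip(steps, weights):
        U_st = V[t] @ V[s].conj().T               # evolution from time s to time t
        ops.append(np.sqrt(w) * (U_st @ errors[s - 1] @ U_st.conj().T))
    return ops

# sanity check: the smoothed channel is trace preserving, sum_k K_k^dagger K_k = I
K = smoothed_kraus(1)
print(np.allclose(sum(k.conj().T @ k for k in K), np.eye(d)))   # True
```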

Other Topics in Brief

The debate touched on many technical and conceptual issues, and I tried to round up many of them in this comment, plus some follow-ups. Let me relate some here:

Hamiltonian models and concentration of the number of errors

In various stochastic/Hamiltonian models for noise we witness Gaussian concentration of the number of errors. I regard this feature as unrealistic, and I believe such models abstract away a crucial aspect of noise that bears on whether fault tolerance is possible. See this comment and the previous post overall.
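
As a toy illustration of what is at stake (a sketch with made-up parameters, not a model from the debate): independent qubit errors give a sharply concentrated error count, while a synchronized-burst model with the same mean error rate puts substantial probability on a large fraction of the qubits failing at once.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 100, 0.05, 100_000      # 100 qubits, 5% error rate per qubit

# Independent errors: each qubit fails independently with probability p.
independent = rng.binomial(n, p, size=trials)

# Synchronized errors with the same mean: with probability 0.1 a burst hits
# half the qubits, otherwise no qubit fails (0.1 * n/2 = p * n).
burst = rng.random(trials) < 0.1
synchronized = np.where(burst, n // 2, 0)

print(independent.mean(), independent.std())    # ~5 errors, std ~2.2: sharp concentration
print(synchronized.mean(), synchronized.std())  # ~5 errors, std ~15: heavy tail of mass failures
```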

Bose-Einstein condensation; superconductivity; teleportation

I proposed to examine Conjecture 1 (adapted) on Bose-Einstein condensation, and discussed it mainly in this thread.

David Deutsch proposed (privately) superconductivity as a counterexample for my strong principle of noise. Peter Shor proposed Kitaev’s teleportation-based quantum computing as a counterexample to my conjectures. (I have some reasons to think that smoothed Lindblad evolutions exclude this type of quantum computing.)

Perturbation methods; renormalization group

Understanding the feasible approximations for pure quantum states and evolutions is of interest not only in the context of controlled quantum evolutions. Non-FT evolutions are relevant for describing a quantum evolution on a small Hilbert space that neglects some of the degrees of freedom of a quantum system.

John Sidles, Aram, Hal Swyers, and other commentators mentioned some connection with perturbation theory, and indeed this is a direction that I find very interesting.

In a private discussion Aram mentioned the renormalization group as an example that there can be an effective theory at some scale (i.e., low energies or large distances) that results from “integrating out” dynamics at higher energies/shorter distances/higher frequencies. Again, this can be related to questions about modeling noise that comes from neglecting the internal structure of our qubits (or qudits).

Censorship

My Conjecture C from the first post was refuted by Aram in conjunction with Steve Flammia. We discussed this matter in this post, where we also discussed a related 1980 paper by Anthony Leggett. The post on censorship discusses bounded-depth quantum circuits as “Sure/Shor separators,” an idea that goes back to Unruh. Other alternatives for Conjecture C were offered by John Sidles in this comment based on tensor-rank, and also here, and by me in this comment (based on Pauli expansion).

If small-depth quantum computation is indeed the limit of approximated pure evolutions, this suggests not only that quantum computation offers no computational superiority, but that interesting computation in our world is classically controlled.

Can the computation hide the geometry and physics?

One insight from classical computation that people tend to take for granted in the quantum case is that computation abstracts away the physics and geometry of the simulated device. I challenge this insight, and our third-round post was devoted to this issue.

Anyons

There are fascinating suggestions for robust-to-noise states that (in principle) can be created experimentally; see this review paper.

The high-level concern is that since the proposed experimental process does not involve FT, the noisy states will satisfy Conjecture 1 and will not exhibit the expected robustness to noise. However, the description of how the stability of certain states increases, such as when two anyons are taken apart, is quite convincing—and it will be worth the effort to try to give a specific physical mechanism that supports my conjectures in this case.

Simulating quantum physics on classical computers

The connections with classical simulations of quantum physics were mentioned quite a few times. A basic question is, what can be said about computations in physics that clearly require an exponential number of steps on a digital computer? Aren’t such computations (which apparently nature can carry out routinely) clear evidence for “quantum supremacy”? My guess is that in cases where the computation is intractable, the answer is irrelevant.

The rate of noise

Neither the ordinary models of noise nor my conjectures for non-FT noise give a lower bound on the rate of noise. One proposal in my paper, drawing on work by Sergio Boixo, Lorenza Viola, and Gerardo Ortiz, is:

Conjecture: The rate of noise in a time interval is bounded below by a measure of non-commutativity of the unitary evolutions {U_{s,t}} in this interval.

Non-flat substitutes for Hilbert spaces

John Sidles offered, over many comments, a certain geometric theory of quantum evolutions on non-flat spaces, and connected it with various aspects of our debate. This is certainly very interesting.

Non FT classical evolutions

It can be of interest (and perhaps harder than in the quantum case) to try to describe classical evolutions that do not enable/hide fault tolerance and (long) computation.


Aram Harrow: Rebuttal and Summation

Quantum mechanics has been a counter-intuitive theory from its birth, and the theory of quantum information in a sense aims to distill the strangest aspects of quantum mechanics. So it is not surprising that they both have faced a chorus of skepticism from the start, including echoes of today’s debate. The problem with the skeptics’ task—from the original EPR paper to Gil’s contemporary objections—is that their arguments entail an alteration of this supremely successful theory. Not only does the evidence unequivocally support quantum mechanics, there is not even any candidate theory that could replace it, waiting in the wings for experimental confirmation.

Gil’s conjectures, then, are really pre-conjectures, meaning that they are neither purely mathematical conjectures nor specific claims related to observable phenomena, but rather organizing principles around which we might look for such claims. He conjectures that a consistent theory can be found that will encompass both the impossibility of quantum computers and the last century of experiment supporting quantum mechanics. This is a worthy goal and a fascinating intellectual challenge. But it is at least very hard, and is probably impossible, as evidenced in part by the difficulty that Gil faces in making precise statements about his conjectures.

More concretely, he can make precise statements about his conclusions, such as “real-world quantum computers experience noise that makes them classically simulable,” but not about any microscopic claim about physics, even in any concrete model, that could plausibly lead to these conclusions. Similarly, his conjectures have difficulty avoiding reference to the “intended” operation of a quantum device, although any physical principle we discover is unlikely to depend on our intentions.

Of course, Gil does suggest a number of physical mechanisms that are candidates for de-railing quantum computing, or at least refuting the assumptions of the FT threshold theorem. He discusses self-synchronizing metronomes, smoothed Lindblad evolutions, constant-size fluctuations in temperature, special-purpose quantum computers introducing correlations in errors, and other ideas. Without at this point engaging with the details, I believe that each of these mechanisms suffers from some combination of failing to rule out quantum computing, inconsistency with existing observations (e.g., working classical computers) or imprecise predictions.

Why I believe quantum computing should be possible

I have tried to explain why I don’t believe Gil’s conjectures. But it is further important to ask why I, or anyone else, should believe that quantum computing is a plausible technological possibility.

Part of it, as I’ve said, is believing in quantum mechanics itself, with its exponentially large Hilbert spaces, its entanglement, its interference, and all its other counter-intuitive features. Quantum mechanics did not succeed by being a more intellectually appealing theory than its competitors. On the contrary, the route from Schrödinger’s equation to EPR to Bell’s theorem to secure quantum communication took over 50 years, even though today it can be covered in a few hours of lectures. This delay is basically because many of the principals—most notably Albert Einstein—didn’t want quantum mechanics to be true. Instead, many preferred to think about interpretations such as Bohmian mechanics that hold close to older intuitions, but are dogs’ breakfasts to work with.

Quantum mechanics succeeded by simply giving the right answer, over and over again, while no other theory could come close. And now, with the modern quantum-information perspective, many of the conceptual difficulties of the theory are being tamed. This textbook is a great example of the new pedagogical approach.

But couldn’t we have quantum mechanics without quantum computing? Isn’t that what this debate is all about? After all, as John Sidles has mentioned several times, many perturbation expansions are hard to compute, or not even convergent, and these can give rise to complicated many-body terms even when the original Hamiltonian is simple. Nothing about Schrödinger’s equation guarantees that we’ll ever see anything resembling textbook quantum mechanics in the real world.

Let me give a simple example of how things could have been much worse for us. Electrons are thought to be pointlike particles, but suppose that one day we discover they are, like protons, composites of much smaller particles. These particles could have their own dynamics that could ruin a two-slit experiment by effectively decohering the “which path” information. However, this turns out not to happen, and we do observe electron interference. Moreover, such alternate theories would work via the Pauli exclusion principle to alter the periodic table as we know it, so we can rule them out without any fancy experiments.

In practice, we see many instances of quantum coherence even in current technology. We have multiple-quantum coherence in NMR, Bose-Einstein condensates, superconductivity (with topological effects and protection), lasers, and simply the fact that transistors work the way we expect. Higher-order perturbation theory, or some new physics, could have derailed any of these technologies, but did not—a fact explained in part by renormalization.


Thanks

(Gil:) This debate gave me a great opportunity, for which I am thankful, to discuss my work on quantum computers. First and foremost let me thank Aram Harrow for initiating a public discussion of my paper and for being my partner in this debate. Aram did a great job in presenting the case for why quantum computers are feasible. He presented serious and interesting objections to my conjectures, and made many enlightening comments on technical and conceptual levels. I also thank the many commentators; allow me to mention John Sidles in particular for the breadth and spirit of his involvement. There were many excellent comments and questions by many people; I tried to respond to questions raised to me, and I enjoyed reading some of the further discussion and interesting ideas not related to my work.

Of course, I am thankful to Ken Regan and Dick Lipton for hosting and editing the debate, for summarizing the earlier threads, for adding exciting introductions, for illuminating private discussions with Aram and me on the matter, and for very interesting related and unrelated blog posts during this time.

(Aram:) I also want to thank Gil for his discussions both here, and in many private emails. It’s been useful for me to think through these things, and has caused me to think about skepticism in a new way. (For example, am I like one of those establishment scientists who rejected continental drift out of hand? How certain can I be about my position on other controversial things, like the probability of global warming, or relativity, or my own atheism?) It has led to my work with Steve Flammia on Conjecture C, and also a new project on adversarial noise joint with Matt Hastings and Anup Rao, which I’ll talk about soon on the Pontiff blog.

Also, I wish to thank the many commenters, who have gone far beyond the original posts into many interesting realms. I hope that my participation here will turn out well for the field of quantum information, which has faced a large amount of skepticism and misunderstanding from the field of computer science. Many arguments in favor of quantum computing inevitably stem from physics; however, I hope that this will help encourage dialogue and mutual understanding between physics and computer science more generally. Finally, I am grateful to Ken and Dick for hosting, editing and framing this debate, and for the contributions of their blog more generally.

Conclusions

(Gil:) The construction of universal quantum computers is a serious scientific possibility and an excellent working assumption. Building universal quantum computers will be a once-in-a-lifetime scientific and technological achievement. Operating quantum computers will lead to further terrific scientific and technological fruits, for decades, just like the discovery of X-rays. Quantum error-correction is an important concept for building fault-tolerant quantum computers and is a fundamental scientific concept on its own. Building quantum computers is an endeavor that should be vigorously pursued, and indeed it is being pursued theoretically and experimentally by top researchers who have already established impressive achievements. My university, The Hebrew University of Jerusalem, just established a new Quantum Information Science Center.

Yet, it may well be the case that universal quantum computers are not possible, and that “no quantum fault-tolerance” should be taken as the guiding principle in modeling quantum decoherence. Developing a theory of non-fault-tolerant quantum evolutions, and studying how quantum computers can fail, is an interesting and important endeavor. Non-FT quantum (and classical) evolutions may shed new light on existing models, offer new relevant models for quantum evolutions, and lead to substantial new insights on related physical questions, models, theories, and computational methods.

Finally, it can be useful to draw two extreme scenarios as possible conclusions for this debate. One commonly believed scenario, for which Preskill coined the term “quantum supremacy,” asserts that the quantum world offers and enables superior forms of computational complexity. The other scenario, which we can call “classical control,” asserts that computation in our quantum world can only be based on classical information—via a repetition code or a similar mechanism—and that realistic nearly-pure quantum evolutions are (approximately) of bounded depth.

(Aram:) The quest to build quantum computers has been fraught with difficulties, but to my knowledge none of these are rooted in error correlations increasing with system sizes. Rather, they involve the sort of problems that elephants would have in dissecting fleas. Sometimes we don’t know how to decouple noise sources, sometimes we have a hard time addressing qubits individually instead of collectively, sometimes our fabrication techniques are unreliable, and sometimes we can initialize or coherently manipulate or measure, but not all three. All of these are important challenges, and some could lead us to abandon a technology (e.g. if we need an optical table the size of a football field), but none suggests a fundamental showstopper.

At a certain point, we gain confidence that there is no monster hiding under the bed, not only because there hasn’t been one before, but because nothing about our knowledge of the world would suggest such a monster.

Open Problems

Choose a platform that is currently being considered for an experimental implementation of quantum computing (e.g., ion traps, linear optics, superconducting qubits) and come up with a concrete noise model (i.e., derived from Schrödinger’s equation) that (a) is consistent with all existing experiments, and (b) provably rules out scalable quantum computing. Can one do this, or is it quixotic?

Does Conjecture 1 suffice to reduce the computational power of noisy quantum computers to BPP? Show that basing decoherence on smoothed Lindblad evolutions supports Conjectures 1-4 and leaves us with BPP.

Can smoothed Lindblad operations be used in a non-trivial way to show how classical memory and computation emerges under noise?

What is the correct picture of our mundus computationalis (Latin for “computational world”): quantum supremacy, classical control, or something in the middle?

Comments
  1. October 4, 2012 3:38 am

    Gil, there seems to be a possibility that will satisfy everyone: a phase transition. Single atoms are well described by quantum mechanics (small circuits), and it is not completely obvious what the effects would be on a large scale. In particular, there is the gaseous phase, with which nothing interesting can be done except to fill balloons. On the other hand there is the solid state (highly overlapping wavefunctions/strong interactions), and because of the banded electronic structure we can create electronic circuits and perform classical computation. It is quite possible that large-scale entangled bodies will produce the “noise/correlated” bands that are required for computations, and at the same time produce large inevitable correlations.

    • October 4, 2012 8:12 am

      Dear mkatkov, as far as I am concerned everything is described by quantum mechanics. (Not that you said otherwise, but this is how what you wrote could be read.) In fact, my conjectures make no distinction between microscopic and macroscopic behavior and do not even refer in any other way to the geometry of the quantum device. They only postulate a certain behavior for feasible approximations of pure quantum evolutions. Of course, the distinction between microscopic and macroscopic behavior and how it emerges from the rules of quantum mechanics is a familiar and interesting issue. Whether the emergence of classical information is a threshold phenomenon is an interesting question. This is an appealing possibility, but it is also reasonable to believe that the emergence of classical information occurs at all scales.

  2. October 4, 2012 11:43 am

    In part because I don’t know how else to subscribe to comments, let me start a little discussion about Gil’s smoothed Lindblad evolution. Many of my other objections apply to them: I think they lack any physical basis (either theoretical or experimental) and I think that they depend too much on knowing what is “intended.”

    I thought I’d also say that they’d refute classical computing. But I don’t think they do, nor do I think they refute quantum computing. Here’s why.
    Suppose we apply gates U_T … U_3 U_2 U_1 (The ordering reflects the order we apply them in.)
    If noise operator E (say a Pauli matrix, so everything is unitary; this is more or less WLOG) happens after step 1, then we obtain
    U_T … U_3 U_2 E U_1

    Gil’s smoothed noise process says that instead there is a chance we will get
    U_2^{-1} E U_2 happening after step 1
    or
    U_2^{-1} U_3^{-1} E U_3 U_2 happening after step 1
    etc.

    Let’s suppose we get E’ = U_2^{-1} U_3^{-1} E U_3 U_2 happening after step 1. Then our resulting evolution is
    U_T … U_3 U_2 E’ U_1 =
    U_T … U_3 U_2 (U_2^{-1} U_3^{-1} E U_3 U_2) U_1 =
    U_T … U_4 E U_3 U_2 U_1

    i.e. it’s identical to the original noise operator E happening after step 3.

    More generally, Gil’s smoothing operator causes noise to effectively move backwards or forwards in time. However, if the original noise is isotropic in time (e.g. 0.1% noise rate in each time step) then the smoothing makes no difference. Yes, it does change the noise operators. But it won’t change the effect of these operators once you combine them with the unitaries being applied.

    Ok, there are some subtleties here, but they mostly relate to the model being vague. e.g. what if U_1, .., U_T are replaced by quantum operations that are not always unitary (as is crucial to FT computing, either classical or quantum)? Then the smoothed evolution isn’t defined, as far as I can tell. Are the noise operators themselves included in the U_1 .. U_T used to compute smoothing? Or is there a separation between intended and unintended gates? This might mean that things could be different when noise operators move past each other. So I’m not totally sure about these details, but at least the basic model appears to be one that
    (a) violates the assumptions of the FTQC threshold theorem, by having high-weight correlated noise, but
    (b) nevertheless is no more of a problem for FT computing than standard noise, because its effects are identical.
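
    (A minimal numerical sketch of the identity above, with random unitary gates standing in for U_1, …, U_4 and a random unitary playing the role of the error E; the code and names are illustrative only.)

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

d = 4
U1, U2, U3, U4 = (random_unitary(d) for _ in range(4))
E = random_unitary(d)                                    # the noise operator

E_smoothed = U2.conj().T @ U3.conj().T @ E @ U3 @ U2     # E' = U_2^{-1} U_3^{-1} E U_3 U_2

lhs = U4 @ U3 @ U2 @ E_smoothed @ U1                     # E' inserted after step 1
rhs = U4 @ E @ U3 @ U2 @ U1                              # E inserted after step 3
print(np.allclose(lhs, rhs))                             # True: same overall evolution
```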

    • October 4, 2012 1:09 pm

      Dear Aram, of course I will be very happy to look carefully at my smoothed Lindblad evolutions, and thanks for this interesting technical point (in fact a few technical points). I look forward to examining your critique closely! It will probably take several days.

    • October 5, 2012 2:53 pm

      Just curious, did anyone consider resonances due to loops in the quantum circuit, pumped either by the computational signal or by vacuum fluctuations?

      • John Sidles permalink
        October 5, 2012 4:29 pm

        Mkatkov, the constructions associated to the postulate Roadhouse of the Larger State-Space rely essentially upon symmetry-breaking “resonant loops” to synthesize time-dependent computational dynamical elements — for example, system clocks, memory flip-flops, and gate arrays — wholly by unravelling time-independent Lindbladian generators.

        And of course, this is precisely the way that real system clocks, memories, and gate-arrays are constructed … consult the Wikipedia article “Multivibrators” for an introduction to these methods.

      • October 5, 2012 5:06 pm

        as a correlated source of the noise ;), since the physical signal used for computation is not synced with those modes. And John (about multivibrators): I know that the first Moscow radio station used parametric resonance for amplifying the signal. Finally, stable electronic orbits in atoms correspond to “resonances” (what you would call the spectrum); everything else decays. If you have a loop in a quantum circuit, it has to have a stable mode. You may be able to avoid exciting it, but it should exist. If you have a complicated circuit, then you may have a distribution of loops, and who knows what spectrum it would have.

    • John Sidles permalink
      October 5, 2012 4:16 pm

      Gil’s “smoothed” Lindblad generators are a terrific idea, and yet Aram has a good point that they lack an obvious physical basis (either theoretical or experimental).

      Perhaps a reasonable middle ground is to smooth the Lindbladian generators not in the time domain, but in the state-space domain, along the lines of Spielman and Teng’s justly celebrated Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time … with a long-term view toward a parallelized proof broadly along the lines of “Smoothed analysis of Lindbladian dynamics: why quantum trajectory unravelling usually takes polynomial resources.”

      This state-space Lindblad-smoothing has a reasonably natural physical basis: the quantenfeldtheoretisch Dreckeffecten that were mentioned in previous GLL comments.

      As Gil says “It will probably take several  days   weeks   months   years   decades ” to work through the details. 🙂

      • October 8, 2012 9:24 pm

        Dear John, the idea behind the smoothed Lindblad evolution is to formalize how a noisy quantum evolution that does not use quantum fault-tolerance can be described (or approximated). It goes without saying that this idea requires (besides various sanity checks of the kind Aram is posing in his comment) to be confronted with existing experimental and theoretical knowledge.

      • October 10, 2012 7:58 pm

        “Perhaps a reasonable middle ground is to smooth the Lindbladian generators not in the time domain, but in the state-space domain, ..This state-space Lindblad-smoothing has a reasonably natural physical basis: the quantenfeldtheoretisch Dreckeffecten that were mentioned in previous GLL comments.”

        Dear John, sure, we can think about your suggestion on its own, not as some middle ground. I must confess that another appealing thing for me in the time-smoothing I propose is that it takes a few sentences to motivate and a few more sentences to describe. (See above.) So if a short, self-contained, from-scratch but fairly complete explanation of your proposal—not referring to arxived papers or textbooks—is possible, this will be especially useful for me.

      • John Sidles permalink
        October 11, 2012 5:18 pm

        Dear Gil, I will try to post a concrete, proof-susceptible, one-page conjecture this Sunday!

        Perhaps I should mention too, that you and I (and Ken Regan too) have posted D-WAVE-related comments on Scott Aaronson’s weblog.

        An especially good outcome of the Kalai/Harrow debate (as it seems to me) would be a set of conjectures whose truth (or falsehood) is suitable for discussion in the first chapter of coming generations of quantum physics textbooks.

        To be useful to students, these “first-chapter” next-generation conjectures would usefully blend (one hopes) physical naturality with mathematical naturality … and the prospects for this physicality/naturality merger are the thrust of the Shtetl Optimized comment linked above.

      • John Sidles permalink
        October 15, 2012 11:52 am

        John Sidles hoped  “to post a concrete, proof-susceptible, one-page [state-space smoothing] conjecture this Sunday!”

        DOH! … it looks like one more weekend is going to be required … with a view toward specifying a state-space smoothing that is manifestly “intention-free” in regard to both naturality and physicality. 🙂

      • October 16, 2012 4:35 pm

        Dear John, I thought more about the issue of geometric models. Overall, I agree with your assertion that geometric modelling can be a middle ground which may give my conjectures some familiar (yet rather general) physical basis. The model of a quantum computer itself has no geometry in it, and essentially neither do the basic models of noisy quantum computers. This is why my approach is to model non-FT quantum evolutions at this level of abstraction, where indeed a lot of the physics is abstracted away.

        Since we are talking about approximations, I don’t see the advantage of replacing the Hilbert space itself with non-flat spaces; if our evolution is related to some complicated manifold, in the end we will work with tangents to this manifold, which are “ordinary” Hilbert spaces. But for the sake of approximating complicated quantum evolutions on small Hilbert spaces, the target Hilbert space need not be fixed, and the way it changes may very well have a geometric meaning.

      • John Sidles permalink
        October 16, 2012 5:16 pm

        Dear Gil, although I *love* simulating quantum trajectories by (geometrically natural) pullback onto curved manifolds, that is *not* the conjecture that I have in mind to post (on MathOverflow hopefully). Rather, the program is (1) specify a time-independent set of Hamiltonian/Lindbladian generators that unravel as FTQC trajectories (on a full/flat Hilbert space), and then (2) perturb the generators per Spielman and Teng’s “Smoothed analysis of algorithms.”

        The intuition is that idealized Hamiltonian/Lindbladian generators, whose Lie algebras have small finite dimension, may in practice be generically nonideal, in the sense that their Lie algebras don’t quite close, in consequence of the quantenfeldtheoretisch Dreckeffecten that were mentioned in previous GLL comments.

        Then the conjecture asks: Do the perturbed Hamiltonian/Lindbladian generators unravel as FTQC trajectories?

        Or more strongly (and geometrically): Do perturbed Hamiltonian/Lindbladian generators generically compress quantum trajectories onto manifolds of polynomial dimensionality?

        The point being, that if the latter (time-independent and purely geometric) conjecture is true, such that dynamical trajectories can be efficiently simulated, (a) with polynomial resources, (b) by pullback methods, (c) on quantum state-spaces that are effectively non-flat … then the resulting computational dynamics are unlikely to yield exponential improvements over purely classical simulation of those dynamics.

        Posing this as a serious conjecture on MathOverflow (as I hope to do) turns out to be a considerable undertaking (the references alone are a lot of work!) and so I apologize that this is taking awhile.

        Gil, please accept my appreciation and thanks, to you and Aram (and to Dick and Ken too) for sustaining this outstanding and immensely enjoyable FTQC debate!

      • John Sidles permalink
        October 17, 2012 10:57 am

        In regard to the above, there is also a geometric inspiration from (computational) symplectic integrators.

        We recall the central idea of symplectic integration: a Hamiltonian function that is nominally exact, whose time-step is only approximately a symplectomorphism, is exchanged for a Hamiltonian function that is approximate, but whose time-step is exactly a symplectomorphism.

        Similarly, in computationally unravelling FTQC trajectories, we begin by specifying a time-independent set of Hamiltonian/Lindbladian generators that are nominally exact — or more realistically, that are subject only to such imprecisions as are compatible with the “FT” method chosen — such that the unravelled trajectories fill a state-space of intractably large dimension, so that FTQC is accomplished in accord with the standard error-correction and fault-tolerance theorems (that John Preskill has so admirably described).

        Then we suppose that these generators are generically perturbed in the sense of Spielman and Teng — such that the standard error-correction and fault-tolerance theorems no longer apply — so that we may reasonably conjecture that generically the unravelled trajectories are dynamically restricted (via the Lindbladian stochastic flow) to a state-space of tractable dimensionality, that is, an effective state-space dimensionality that is polynomial in the number of Hamiltonian/Lindblad generators.

        By intent, this conjecture is so constructed as to be naturally subject to geometric analysis methods, that complement the algebraic methods of traditional FTQC constructions.

        That’s the main idea, Gil!   🙂

      • John Sidles permalink
        October 22, 2012 1:13 pm

        Concrete examples of the quantenfeldtheoretisch Dreckeffecten that (as it seems to me) geometrized versions of the Kalai conjectures need to treat realistically, are provided by Peropadre et al., “Approaching perfect microwave photodetection in circuit QED” (Phys Rev A, 2011).

        This article is of course barely a beginning … near-perfect detectors are of very little utility unless they are coupled to near-perfect emitters … and it is far from evident that both kinds of perfection can be optimized simultaneously.

        That’s why finalizing the “geometrized” Kalai conjectures is taking awhile. And more broadly, that’s why scalable presentations of (for example) the Aaronson/Arkhipov linear optics experiments are so very difficult to specify as realizable Hamiltonian/Lindbladian processes!

      • January 1, 2013 3:36 pm

        Dear John, perhaps my recent post Symplectic Geometry, Quantization, and Quantum Noise, and the works by Leonid Polterovich (and others) on quantum noise mentioned there, may be related to the quest for the geometric middle ground that we discussed over this thread. Namely, a middle ground, with physical basis, for quantum computers and quantum evolutions, in between general noisy models and particular experimental implementations.

      • John Sidles permalink
        January 1, 2013 7:27 pm

        Dear Gil … I saw that post and it is terrific!

        My own interest in geometric quantum mechanics runs precisely in the opposite direction: instead of geometric quantization (as the references you provide call it), the topic of interest is taken to be geometric dequantization (as it might be called), that is, the pullback of quantum dynamics from Hilbert space onto lower-rank submanifolds of Hilbert space.

        There is no one book on dequantization, but it turns out to be pretty straightforward to learn the framework of John Lee’s outstanding textbook Introduction to Smooth Manifolds — the concluding chapter of Lee’s brand-new 2nd edition covers symplectic manifolds and Hamiltonian flows — and then translate the dynamical elements of Nielsen and Chuang’s Quantum Computation and Quantum Information and of Landau and Lifshitz’s Statistical Physics Part 1 (especially the chapters on Onsager relations), and selected examples from Bird et al.’s Transport Phenomena, and finally selected examples from our QSE Group’s own Practical recipes for the model order reduction, dynamical simulation, and compressive sampling of large-scale open quantum systems, all into the unifying geometric language of John Lee’s outstandingly clear (2nd edition) textbook.

        There is no one step in this program that is difficult (or even new!) … but there are a *lot* of steps … and regrettably the effort is doubled because everyone’s mathematical notation (except Lee’s) is awkward and/or outdated … and then the effort is redoubled because no two texts use the same notation or even the same physical intuitions. Ouch!

        Eventually I’ll post the resulting math/physics/engineering synthesis — most likely as a *long* article on the arxiv server — of which one conclusion will (informally) amount to:

        Plausible Postulate  Polynomial computational resources suffice to simulate any quantum dynamical system that has a macroscopic thermodynamical description, at a level of accuracy that suffices to pass any validation test that can be applied with polynomial resources.

        The restriction regarding the validation test serves to accommodate “good enough” P-resource simulation of Aaronson-Arkhipov BosonSampling processes.

        This conclusion (for me) represents a viable synthesis of the very best elements of the Kalai/Harrow debate. In the words of the famous joke: You and Aram are both absolutely right!

        Happy New Year to everyone! 🙂

        ——————–

        New Year’s bonus: A website with free WordPress LaTeX preview! Shhh! Don’t tell anyone! 🙂

      • John Sidles permalink
        January 2, 2013 10:32 am

        Dear Gil, I have posted a long-ish comment to your site; a comment that concludes with my heartfelt appreciation & thanks to you and Aram Harrow (as expressed by a joke). 🙂

    • November 21, 2012 8:36 am

      Here is a response to Aram’s comment regarding smoothed Lindblad evolutions.

      Smoothed Lindblad evolutions are offered as a mathematical tool to describe noisy quantum evolutions which do not involve quantum fault-tolerance. The smoothing operation is described in the post above and  precise continuous-time and discrete-time descriptions can be found in my paper “When noise accumulates,” Section 2 formulas (3) and (5). The smoothing operation depends on a positive kernel which we can simply take as Gaussian. (We can also allow having an atom at zero to take into account an additional amount of standard noise.)

      Aram suggests that somehow if the noise before smoothing is time-independent then fault-tolerance schemes will still work (in the end), since the “smoothing” only moves noise around in time. However, Aram is describing his own version of smoothing-in-time (in some much simplified setting) which is different from mine. (Greg Kuperberg made a similar observation a few years ago.) Thus, Aram’s assertion that the smoothed Lindblad evolution model is “no more of a problem for FT computing than standard noise, because its effects are identical,” is not correct.

      As an aside, let me mention that assuming that the noise is time-independent, and even that the noise-rate is time-independent, is in my opinion unrealistic. My criticism of John Preskill’s new general noise model is that this model exhibits a strong concentration of the number of qubit-errors. Such concentration suggests that a major aspect of realistic noise is missing from the model. (This criticism is “orthogonal” to the issue of quantum fault-tolerance.)

      • John Sidles permalink
        November 21, 2012 10:08 am

        In regard to “unsmoothed” (standard) Lindbladian dynamical evolutions, as contrasted to “smoothed” Lindbladian dynamical evolutions, there is a body of work in geometrical thermodynamics that suggests that a natural transition between the unsmoothed/smooth dynamical regimes is associated to a diffusion constant \Gamma_{\scriptscriptstyle\!\text{OZ}} that is given in terms of fundamental constants by

            \Gamma_{\scriptscriptstyle\!\text{OZ}} = \left[\frac{\hbar\,G}{c}\right]^{1/2} \sim 4.8\times 10^{-27}\,\text{m}^{2}\cdot\text{s}^{-1}

        Here “OZ” stands for Onsager-Ziegler, and the above expression results from joining orthodox Hamiltonian/Lindbladian dynamics (of Michael Nielsen & Isaac Chuang, Howard Carmichael, Carlton Caves, etc.) with orthodox thermodynamics (of Lars Onsager and Hans Ziegler, etc.) via natural mathematics (of Saunders Mac Lane, Vladimir Arnold, etc) in geometric notation (of Jack Lee’s Smooth Manifolds text, for example).

        Eventually the above train of analysis will be posted as a commentary on the Kalai/Harrow debate; its main conclusions are evident already, in the small magnitude of the above fundamental diffusion scale:

        • For purposes of quantum computation, nature’s state-space is effectively a Hilbert space … even though at sufficiently small space-time scales, nature’s dynamics becomes (“stringily”?) non-Hilbert.

        • For practical purposes of thermodynamical simulation, \Gamma_{\scriptscriptstyle\!\text{OZ}} is effectively very much larger than the above value, such that the associated dynamical state-spaces are effectively non-Hilbert … and for this reason, natural dynamical systems can be computationally simulated with PTIME resources.

        Needless to say, both of these two conclusions are very good news for 21st century science, technology, engineering, and mathematics. And upon this optimistic concluding note, best wishes for a happy holiday season are extended to everyone!

  3. October 4, 2012 3:57 pm

    I also do not know how else to subscribe to comments, so let me recall a problem for skeptics. It is often discussed that Yuri Manin suggested the idea of a quantum computer (in fact, a “quantum automaton”) around 1980. It is less widely discussed that it was in relation to a question about DNA functionality – see the translation in the appendix to his own paper http://arxiv.org/abs/quant-ph/9903008

    A question for skeptics: show that DNA cannot be a quantum computer (automaton). If that question is too difficult, consider a simpler problem and prove that DNA is not a universal quantum computer. Maybe there were some not-very-accurate speculations around this idea, but anyway it is a way to think about all this in a more constructive manner.

    • October 6, 2012 9:51 pm

      Dear Alexander, this is an interesting question, but I do not think that your question (or similar questions) should be addressed just to skeptics. Manin made (very early—I suppose even before Feynman’s paper on quantum computers) an argument that DNA replication requires the superior computational complexity of a quantum computer of some sort. I think that Manin is credited as one of the very early people to suggest quantum computers, but I am not sure what the status of his suggestions regarding DNA is.

    • October 9, 2012 7:25 am

      Dear Gil, I do not think that considering models of quantum gates on polymer molecules is much more unreasonable than on ions in traps or on photons in cavities. The problem may be precision, and hence the doubtful possibility of “too digital” quantum algorithms like factoring huge numbers.

  4. John Sidles permalink
    October 4, 2012 8:31 pm

    Today a story on SlashDot had a remarkable headline:

    The CIA and Jeff Bezos
    Bet $30 Million On Quantum Computing Company

    “The CIA’s investment fund, In-Q-Tel, and Amazon founder Jeff Bezos have invested $30 million in a Canadian company [D-WAVE] that claims to build quantum computers”

    It seems (to me) that recent groundbreaking insights from the Aspuru-Guzik group may reasonably justify this investment in quantum computing … and it seems too (to me) that the Kalai/Harrow debate issues are at the heart of the considerations that will determine the long-term viability of this investment.

    • John Sidles permalink
      October 5, 2012 9:50 am

      MIT Technology Review has a follow-on article, The CIA and Jeff Bezos Bet on Quantum Computing, that includes plenty of snappy quotes — both optimistic and skeptical — from “the usual suspects” in the quantum computing community.

      It is plausible that D-WAVE’s enterprise value reflects breakthroughs in both hardware and algorithms … breakthroughs that D-Wave’s investors regard as real, and D-Wave’s skeptics regard as illusory.

      Assuming the former leads us to consider whether there may exist natural variants of Gil Kalai’s conjectures, that are associated to augmented algorithmic power, possibly along the lines that Aspuru-Guzik and colleagues have been suggesting (per the comment above).

      Assuming further that D-Wave’s advances have an algorithmic aspect, it remains to be determined whether quantum-entangled hardware is strictly necessary in implementing these novel algorithms … either way, D-Wave and its investors are well-positioned to benefit.

      That’s why D-Wave seems like a reasonable investment to me! 🙂

  5. October 4, 2012 10:35 pm

    When it comes to classical control, let me mention the problem of how to express in Latin the term “computational universe” or “computing universe.” I will spare you my own initial suggestions based on Google Translate, but both Ken and I consulted with experts in classical studies (and Ken himself seems quite strong in Latin) and were offered or also considered “res mathematicae,” “res arithmeticae,” “mathesis universalis,” “calculandum circumstant,” and “mundus computationi,” before making the choice “Mundus Computationalis.” In some draft, Ken even wrote the whole sentence “What is our computational world?” in Latin, which was very exciting. But in the end this sentence did not make it, and now I forget how it goes.

    Of course, we also need to mention that our final debate post was launched a few hours after the first US presidential debate concluded. Aram has an awesome line related to the presidential debate as well as to our debate, but did not find the context to say it. You should stay tuned, if nothing else then just for that awesome line.

    So I watched with interest yesterday’s debate (I am at Yale right now). Now I am hesitating a little over whether I should reply to Aram’s concluding section addressing my conjectures.

    Take, for example, the matter of intentions. Aram’s claim that the conjectures depend on intentions reminded me of the following nice story (or a profound philosophical joke, according to Wittgenstein) from “Littlewood’s Miscellany”:

    “Schoolmaster: ‘Suppose that x is the number of sheep in the problem.’ Pupil: ‘But, Sir, suppose that x is not the number of sheep.’” (See also this post for a similar story.)

    So is it really difficult to state my conjectures without any reference to “an intended state” or “an intended evolution?” It is quite easy at least to go over the conjectures (or pre-conjectures) one by one and check this matter. In any case, my own intention is that these conjectures apply (in the relevant context to each one) to general realistic approximations of pure quantum evolutions. It’s really all about approximation…

  6. Serge permalink
    October 9, 2012 8:53 am

    Never in history has an invention or a scientific discovery made it possible to suddenly solve all problems more efficiently. Even the greatest ones were made to solve only a class of problems. For example, the wheel was invented for transportation and topological spaces for continuity – but not the other way around.

    In contrast, if you proved P=NP or built a quantum computer, from that minute you would be able to solve all problems more quickly. This runs counter to what experience tells us – you can’t acquire infinite power in finite time.

    There used to be an old motto in computer science: all that can be done with hardware may also be done with appropriate software. For this reason, I believe it will take as long to build a quantum computer as to find a polynomial algorithm for integer factoring.

  7. October 9, 2012 12:36 pm

    Unlike large parts of the debate, where Aram raised specific points against my conjectures which we thoroughly discussed, Aram’s summation is of a more general nature. Let me repeat seven points that Aram made. The first two give general reasons to believe in quantum computers:

    1) QM is an extremely successful theory. A reality that allows QM but forbids QC would require an alteration of QM.

    2) QM could have “conspired” even against quantum-theoretic phenomena that we can witness, like the two-slit experiment, but this is not the way nature works.

    The other five points are claims of a general nature regarding my conjectures:

    3) The conjectures are not supported by any physical mechanism on the microscopic level or by a specific model.

    4) Rather, the conjectures make precise statements only about their conclusions. Aram especially mentions the conclusion that “real-world quantum computers experience noise that makes them classically simulable.”

    5) The conjectures give only a framework for a hypothetical theory; they are neither pure-math conjectures nor do they give precise predictions about observations.

    6) It is difficult to state the conjectures without referring to an “intended” state or evolution. In contrast, in natural processes there are no “intentions.”

    7) There are indeed technical difficulties on the way to building quantum computers, but in no case do these difficulties come from the error-correlations that the conjectures talk about.

    All these points are very good! (Some of them remind me of Cris Moore’s very first comment of this debate, which led to a fruitful discussion.) The second point was quite new to me, although I raised a similar point myself in this comment. I plan to address some of Aram’s summation points in later comments.

    On another matter, let me mention that a few hours ago Serge Haroche and David J. Wineland, whose work is very much in the direction of building quantum computers, were named the 2012 Nobel Prize recipients in physics “for ground-breaking experimental methods that enable measuring and manipulation of individual quantum systems.” Warm congratulations!

  8. Anonymous permalink
    October 12, 2012 2:59 am

    Two papers of interest
    A survey on quantum memories http://arxiv.org/abs/1210.3207

    Experimental tests on a stable qubit http://www.nature.com/nphys/journal/v5/n1/full/nphys1151.html

  9. October 13, 2012 6:45 pm

    Let me mention a few relevant posts (by John Preskill) on “Quantum Frontiers”. The recent post on the 2012 physics Nobel prizes mentions a skeptical paper from 1996 entitled Quantum computing: dream or nightmare? by Serge Haroche and Jean‐Michel Raimond, and a more optimistic note from that time written by John Preskill. An earlier post entitled “Supremacy now” describes a recent paper by Martin Zwierlein’s group (arXiv version) that suggests that “quantum supremacy” may have already happened. A still earlier post talks mainly about the Higgs but draws an interesting analogy with quantum information science.

    • January 3, 2013 3:32 pm

      Let me report on other relevant papers, posts, and prizes. Scott Aaronson’s recent post argues with a recent paper by the physicist Michel Dyakonov which is skeptical about building quantum computers. Michel amusingly compares building QC to teaching donkeys to speak, and Scott attempts to carry this analogy to interesting places. This donkey analogy is an additional zoological ingredient in the QC debate, after Aram’s amusing comparison of the task of the skeptics to passing a camel through the eye of a needle (which I tried to carry to interesting places). Michel, Scott, and the commentators go on to discuss aspects which I am very glad we did not discuss over here, like funding and the sociology of science.

      Ignacio Cirac and Peter Zoller were announced as winners of the 2013 Wolf Prize for physics, “For groundbreaking theoretical contributions to quantum information processing, quantum optics and the physics of quantum gases.” Warm congratulations!

  10. October 13, 2012 7:18 pm

    The points  listed above that Aram made in this post are not as developed and concrete as those in his original three posts, but nevertheless they are quite interesting. Aram regards his point about “intentions” as central, and, at some point, even wished that he had constructed his reply around it. So it is not too late to discuss it in some detail.

    Intentions: Aram wrote that my conjectures “have difficulty avoiding reference to the “intended” operation of a quantum device, although any physical principle we discover is unlikely to depend on our intentions.”

    The reason I regard this as an important issue is that while the question of the limits of man-made controlled quantum systems is important, the question of the nature of approximations to pure states and pure evolutions in QM is of larger scope and is perhaps even more important. As many commentators remarked, the question of approximation is of crucial importance in quantum physics, and this is the correct scope for many issues in this debate. Methods for approximating a state on a large Hilbert space using a much smaller Hilbert space are at the center of many of the glorious successes of quantum mechanics.

    Aram is correct that any physical principle should not depend on our intentions, and this gives a clear consistency check for my conjectures. Aram himself did not check the individual conjectures, but we can do it together. Generally speaking, my conjectures talk about an “intended” pure evolution \phi_t and its realistic noisy approximation described using a noise operation E_t (which I assume to be a POVM measurement). You can simply omit the word “intended,” which originated in our specific context of controlled evolutions.

    Conjecture 3 asserts that when x’ and y’ are a noisy realistic approximation, described by a noise operation E, to two qubits x and y in a pure joint state, then if x and y are substantially entangled, the information leaks represented by E are substantially positively correlated for the two qubits. (The conjecture describes this in precise terms.) If your noisy evolution creates a mixed joint state \psi on two qubits which is approximately pure, the conjecture applies to every pure state \phi that the noisy state \psi approximates well. (As a matter of fact, you can even base the conjecture on the entanglement between the noisy qubits.) Similarly, Conjecture 4 refers to arbitrary noisy approximations of pure, highly entangled states.

    The old and new versions of Conjecture C deal with the question of realistically approachable pure states and have no mention of intention. Take the conjecture, which can be traced back to Unruh, that when a realistic mixed state well-approximates a pure state, that pure state is (approximately) of bounded depth. No intentions are involved. The dependence on “intentions” in the informal version of Conjecture 1 can easily be removed, and the formal version of Conjecture 1 from the last post (which already took into account Aram’s “intention” critique) avoids referring to an intended state altogether. Conjecture 2 has no mention of intentions.

    While I agree with Aram that physical principles do not depend on our intentions, I want to emphasize again that a quantum device itself, human-made or natural, may very well depend on the evolution it performs.

    Let me also mention that one of the reasons for considering POVM measurements in my papers, and for discussing leaks of information, is precisely to make the conjectures intention-free. (This is discussed in the papers.) This is needed because one can think of noise that is hard to discuss intention-free: for example, when instead of one pure evolution your device produces another pure evolution. The issue of intentions is harder to deal with when it comes to classical noise. But, of course, noise and denoising are very important issues for classical processes, and some aspects of the classical case are conceptually harder.

  11. November 22, 2012 4:07 pm

    Physical explanation (on the microscopic level):     Aram remarked that my conjectures are not supported by a physical mechanism on the microscopic level or by a specific model describing the microscopic behavior. This is, of course, a very important point, and, as I said all along, my conjectures should be tested theoretically, experimentally, or both, on a case-by-case basis.

    The conjectures should be tested for controlled quantum devices, and also for approximations to pure evolutions in other physical contexts. My own attempts at a  case-by-case examination of my conjectures  are pretty modest and are mainly based on discussing the matter  with a few colleagues in theoretical and experimental physics over the years. This is certainly interesting but it is a separate project (and perhaps also a separate debate with other debaters). Let me mention a couple of cases that I looked at.

    Topological quantum computing

    As I wrote in the first post, I think it will be easier to support my conjectures (theoretically or experimentally) for proposals which create, using conventional experimental methods, certain gadgets that are used later for the quantum computations. The proposal regarding (essentially) noiseless qubits based on anyons is especially interesting in this respect. (Another interesting direction is measurement-based quantum computation.) Indeed, the open problems (Questions 1 and 2) in the first post were devoted to these two proposals.

    As far as I understand, claims about the stable properties of qubits based on pairs of anyons, and related suggestions, do not take into account decoherence in the physical process used to create these anyonic systems. Since there is no reason to believe that quantum error-correction and quantum fault-tolerance will be used in the proposed physical processes that create these anyons, it is hard to believe that these anyons will describe highly stable single qubits (which, regarded as encoded qubits, would violate Conjecture 1).

    Of course, experimentally creating quasiparticles obeying nonabelian statistics in laboratory systems is important for many reasons. We do not have to insist on highly robust qubits; we can ask instead: regarding (pairs of) anyons as encoded qubits, what kinds of mixtures of the states of the encoded qubit are supported (i.e., approximated) by a bounded-depth quantum circuit? The maximal-entropy state is perhaps boring, but if we can find lower bounds for the entropy of such a mixture, and on the other hand create interesting such states experimentally, this will be very nice.

    Ion-trap quantum computers

    I tried to explore a little the situation for ion-trap computation. It is interesting to examine specific suggestions for implementation in light of my conjectures. For example, one concern is that simultaneously controlling the locations of the ion traps during the computation can potentially lead to error-synchronization. Recall that error-synchronization for a tiny fraction (in terms of trace-distance) of the overall noise suffices to cause fault-tolerance to fail. Of course, even this specific concern should be checked carefully, case by case, within the wide area of ion-trap computation. There are some teleportation-based ion-trap architectures for which there is no need to move the qubits around at all. These suggestions (which are fairly remote from experimental reality) should be examined separately.

    Ion-trapped qubits with internal Z-evolutions

    Specifically, the concern that simultaneously controlling the locations of the qubits may lead to error-synchronization (for some tiny fraction of the overall noise) applies to one type of ion-trap qubit that I want to mention now. The most stable ion-trapped qubits created today (as far as I know) are not static; rather, they have an internal Z-evolution with a known frequency. However, they rely on reference clocks that are extremely precise on the relevant timescale. Imprecise knowledge and drifts, all with respect to this clock, are considered contributors to unwanted noise, and the overall noise level for this kind of qubit is very low. In other words, the Z-component of the qubit is subject to a very stable periodic evolution. The gap between the ideal carrier (which is some sinusoid, I suppose) and the actual carrier is taken as part of the noise and is very small.

    Now, we can suppose that the actual carrier can be described as a function of time with some additional higher harmonics, so in this case the noise has a strong time dependence.

    My concern is that these extra harmonics (which are part of the overall noise) can be synchronized across many simultaneously controlled qubits. (This is a place where the analogy with metronomes might make sense.) Such error synchronization can defeat scalability (and fault tolerance) even if the rate of the harmful noise tends to zero with the number of qubits (or, in fact, is barely larger than one over the number of qubits).
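
    To make the quantitative point concrete, here is a small toy simulation of my own. (All parameters are made up for illustration, and this is not a model of any actual ion-trap device.) It compares ordinary independent errors with a rare synchronized burst that hits every qubit in a code block at once; increasing the code distance suppresses the first contribution but does essentially nothing about the second, even when the burst rate is only of order one over the number of qubits.

        import numpy as np

        rng = np.random.default_rng(0)

        def failure_prob(n, p_indep, p_burst, t, trials=200_000):
            # independent errors: a trials-by-n matrix of Bernoulli(p_indep) events
            errors = rng.random((trials, n)) < p_indep
            # synchronized component: with probability p_burst a whole round errs at once
            burst = rng.random(trials) < p_burst
            errors |= burst[:, None]
            # a round "fails" when more errors occur than the code can correct
            return np.mean(errors.sum(axis=1) > t)

        for n in (25, 49, 81):
            d = int(np.sqrt(n))      # toy "code distance"
            t = (d - 1) // 2         # correctable error weight per round
            print(n,
                  failure_prob(n, p_indep=0.005, p_burst=0.0, t=t),
                  failure_prob(n, p_indep=0.005, p_burst=2.0 / n, t=t))

    As n grows, the independent contribution shrinks rapidly, while the synchronized contribution stays close to the burst rate of about 2/n per round, which decreases only polynomially and is therefore far too large for a long fault-tolerant computation.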

    Back to generalities.

    Finally, let me recall that a quantum computer or a quantum circuit is a hypothetical abstract physical construction. Quantum computers abstract away much of the physics. It is a mistake to expect a common detailed physical explanation, on the microscopic level, for my conjectures at the level of abstract quantum computers.

    The model of a quantum computer (or a quantum circuit) is a useful umbrella for discussing together phenomena and devices in many different areas of physics. But for building quantum computers, as well as for unbuilding them (i.e., for showing that quantum computers cannot be built), you need to dive in and make a case-by-case study. In both the positive direction of building quantum computers and the negative direction of understanding why they cannot be built, the notions of quantum error-correction and quantum fault-tolerance appear to be crucial.

    There is also the important question of models of noisy quantum evolutions which are, on the one hand, more specific than quantum computers, so that after the correspondence to the classical world we do see (mathematically, of course) some familiar classical physics, but, on the other hand, more general than a specific implementation of a quantum computer as described above. This is also a different project (not to say comment), but we can return to it one day.

    • January 2, 2013 1:44 pm

      I heard today, in our Quantum Information Science Center, a talk by David DiVincenzo entitled “Making superconducting qubits scalable.” After describing the progress in creating stable superconducting qubits (much of it by the “Yale group”), David talked about a remarkable proposal for a quantum computer architecture based on planar arrays of qubits, which allows one to apply a simple sequence of CNOT gates and measurements on half the qubits to create a toric code on the remaining half! Implementations of this idea are on their way, at IBM among other places. (The IBM project was mentioned in an earlier comment of the debate.) The theory behind this suggestion involves a lot of recent theoretical advances in quantum error correction. Here is a related paper (I think): From Majorana Fermions to Topological Order by Barbara Terhal, Fabian Hassler, and David P. DiVincenzo. David’s paper from 2000, The Physical Implementation of Quantum Computation, is also relevant to various aspects of our debate.

      The lecture was quite appealing and rather convincing. In this implementation you do not even need to move the qubits in order to implement the CNOT gates. My Conjecture 1 says that you will not get the desired codewords, but rather a cloud of codewords (plus additional standard noise). My other conjectures predict a strong effect of error synchronization, and also that the error rate in terms of qubit errors will scale up with the number of qubits. It would be nice to examine these predictions (theoretically or experimentally) for this case.

  12. John Sidles permalink
    November 23, 2012 8:19 am

    Gil Kalai posts   “One concern is that controlling simultaneously the locations of the ion traps during computation can potentially lead to error-synchronization.”

    Gil, your statement (it seems to me) makes a fundamentally important point, that we quantum systems engineers formalize as follows (and we shared this perspective with Aram at our seminar this week).

    Let us suppose that we have a Hamiltonian system (which can be classical, quantum, Kählerian, etc.) that conserves a vector of quantities \boldsymbol{Q} and whose dynamics are localizable on some spatial manifold U.

    In general physical localizability is associated to the existence of Lindbladian dynamical processes that generate a continuous stream of \boldsymbol{Q}-indexed, space-dependent, time-dependent, measurements \boldsymbol{\rho}(U,t) that satisfy

    \boldsymbol{q} = \int_U \ast\boldsymbol{\rho}(U,t)

    where \boldsymbol{q} is a vector of (conserved) global \boldsymbol{Q}-values.

    By the following chain of reasoning we appreciate that the space-time fluctuations of \boldsymbol{\rho}(U,t) have fundamental physical significance.

    • Ludwig Boltzmann teaches that Hamiltonian systems are associated to a global entropy function S(\boldsymbol{q})

    • Lars Onsager teaches that localized entropy is associated to a function s(\boldsymbol{\rho}(U,t)), whose local values fluctuate in space and time, but whose global integral is constant: S(\boldsymbol{q}) = \int_U \ast s(\boldsymbol{\rho}(U,t))

    • Göran Lindblad teaches that entropy is spatially extensive — that is, the local entropy function s(\boldsymbol{\rho}(U,t)) is well-defined — iff the microscopic dynamical evolution is not strictly Hamiltonian, but rather is “Lindbladian.”

    • Howard Carmichael teaches that the localized values \boldsymbol{\rho}(U,t) are concretely computed as stochastic datastreams that are associated to the continuous observation of unravelled (the term unravel was originated by Carmichael) dynamical trajectories.

    The fluctuating Lindbladian observations \boldsymbol{\rho}(U,t) and the thermodynamic state-function s(\boldsymbol{\rho}) are concretely linked by a relation that was first appreciated by Onsager:

    \displaystyle\begin{array}{l}\displaystyle\text{Av}[\boldsymbol{\rho}(t)\otimes\boldsymbol{\rho}(t)]_t -\\[1ex]\displaystyle\quad\text{Av}[\boldsymbol{\rho}(t)]\otimes\text{Av}[\boldsymbol{\rho}(t)]_t\ {=}\\[1ex]\displaystyle\quad\quad\left[\,\frac{\partial^2\,s(\boldsymbol{\rho})}{\partial\rho_i\,\partial\rho_j}\,e_i\otimes e_j\,\right]\end{array}

    Here the \{e_i\} are a \boldsymbol{Q} basis (that is, the \{e_i\} comprise the set of physical units that is associated to the quantities that are conserved by the microscopic dynamics, e.g. kilograms, Joules, Coulombs, Ampere-meters^2, etc.). Per Onsager, the partial derivatives that are observationally specified from correlations of \boldsymbol{\rho}(U,t) (and that, according to our 21st century understanding, necessarily originate in a Carmichael unravelling of a Lindblad process) are asserted to be Frobenius-integrable to a Boltzmann entropy function s(\boldsymbol{\rho}).

    Note: The above relation (or relations that are readily shown to be equivalent to it) can be found in multiple sources, including: Casimir’s article On Onsager’s principle of microscopic reversibility (1945, eq. 4), Kittel’s textbook Elementary Statistical Physics (1958, chs. 33 and 34), and Landau & Lifshitz’ textbook Statistical Physics: Part 1 (1980, following eq. 120.10). A modern survey is Martyushev and Seleznev’s Maximum entropy production principle in physics, chemistry and biology (2006), and the mathematical discussion in the Zia, Redish, and McKay article Making sense of the Legendre transform (2009) is so outstandingly clear and natural as to be essential reading.

    A crucial insight that is absent in the above references — and that owes its appreciation very largely to quantum computing research — is recognition that \boldsymbol{\rho}(U,t) is properly given as resulting from a Lindblad/Carmichael observational unravelling, and not as a Boltzmann-style classical observation, and neither as a von Neumann-style operator expectation. The dynamical back-action that is associated to Lindblad/Carmichael observational unravelling, and that is absent in Boltzmann-style and von Neumann-style formalisms, turns out to be essential in understanding multiple thermodynamical phenomena; the Third Law for example.

    With Lindbladian back-action, and thus localization dynamics, properly accounted, the above relation authorizes us to analyze (at our discretion) either backward to microscopic Hamiltonian dynamics (from the observed values \boldsymbol{\rho}(U,t) via Carmichael and Lindblad), or alternatively forwards to macroscopic thermodynamics (from the entropy function s(\boldsymbol{\rho}) via Onsager and Boltzmann).

    Obviously, there is far more to be said regarding these two immensely broad topics! We thus appreciate that quantum computing research greatly clarifies our 21st century understanding of thermodynamical phenomena. And this broader, more natural understanding has emerged as a main theme of our UW QSE Group’s 2012 seminar: Natural Elements Of Separatory Transport.

    From an FTQC point-of-view, the above formalism suggests that the statement “FTQC is feasible” is true only if we can construct macroscopic physical systems for which notions of thermodynamics are not well-defined. This is a question that (as it seems to me) is hardly likely to be settled by purely mathematical arguments.

    We can all be glad that advances in our understanding of quantum computation are proving to be so strongly and naturally linked to exceedingly useful advances in our practical engineering understanding of quantum thermodynamics. And this is (for me) a wonderful outcome to the Kalai-Harrow debate! Thank you, Aram and Gil! 🙂

    • John Sidles permalink
      November 23, 2012 8:45 am

      Oh dang the lack of a preview! A concluding “-1” exponent is missing from the above \boldsymbol{\rho}-relation, which should read:

      \displaystyle\begin{array}{l}\displaystyle\text{Av}[\boldsymbol{\rho}(t)\otimes\boldsymbol{\rho}(t)]_t -\\[1ex]\displaystyle\quad\text{Av}[\boldsymbol{\rho}(t)]\otimes\text{Av}[\boldsymbol{\rho}(t)]_t\ {=}\\[1ex]\displaystyle\quad\quad\left[\,\frac{\partial^2\,s(\boldsymbol{\rho})}{\partial\rho_i\,\partial\rho_j}\,e_i\otimes e_j\,\right]^{-1}\end{array}

      Eventually I will post an analysis of the Frobenius-integrability of these relations as an answer to Gil’s fine MathOverflow question What is an integrable system? within a quantum thermodynamical context that naturally unites Gil’s integrability question with his Physics StackExchange request for Onsager’s regression hypothesis, explained and demonstrated. That is a pair of deep, natural, wonderful, practically important questions, Gil! 🙂

  13. January 10, 2013 2:52 pm

    Greetings, everybody,  from beautiful snow-covered Jerusalem. Such a substantial snow storm here is a once-in-a-decade event, and it is great that it is happening today.

    The discussion on Scott’s blog (190 comments and counting, continuing in a second post) about Dyakonov’s paper has led to a full-scale quantum computer debate and to several interesting comments related to our own debate, including some items on my long “agenda” which were not discussed here and some further discussion of things that we did discuss here. Here is a partial list.

    1) Does QC skepticism amount to QM skepticism? (This was item 11 on the agenda.) A comment by me explaining why the claim that QM logically implies the feasibility of QC (which I attributed to Scott) is flawed; a reply by Scott explaining what his approach is. Eventually we agreed that if quantum computers are not possible, this reflects an additional layer of physical laws based on QM (and we agreed on a few other things, see below).

    2) Implications for QFT/QED computations. Point 18 on the agenda was: Quantum field theory computations. If quantum computers are impossible, what does this say about computations in QM that require exponential resources? (These computations were Feynman’s original motivation for QCs.) Jay asked me specifically about QED computations, and I replied: if superior quantum computational power is not possible, then QED computations in the range that requires superior computational power are irrelevant to our physical reality. Aram wrote that failure of QED would require changes to the operating system of the universe (quantum mechanics).

    3) Two comments (1 and 2) summarizing my approach, with an item on “What is the environment?” (item #14 on the agenda) and my picture of our computational world of “classical control”.

    4) Discussion between me and John Preskill on his noise model and the “Gaussian” concentration of the rate of noise that I regard as unrealistic. (My concern regarding the concentration – 1, John’s reply and question about classical bits 2, my guess about digital bits 3)

    5) A comment by Michel Dyakonov and responses by Scott and by me.

    6) My very short explanation for why QC cannot work. Let me repeat it here (slightly edited).
    Why QC must fail –  my very short explanation
    Quantum systems based on special-purpose quantum devices are subject to noise which systematically depends on the quantum evolution of the system; this dependence reflects the dependence of the noise on the quantum device, and the dependence of the quantum device on the quantum evolution it is doomed to perform.

    This systematic dependence can be expressed formally as follows: quantum systems are subject to noise which is expressed by smoothed Lindblad evolutions. The amount of noise in a time interval is bounded below by a non-commutativity measure of (the projections in) the algebra spanned by the unitaries describing the evolution in this interval. Smoothed Lindblad evolutions reflect the idea that quantum fault-tolerance is impossible (or not active) and quantum noise must accumulate.

    This property of quantum systems based on special-purpose quantum devices explains why universal quantum devices are forced to fail. (The reference to “devices” applies both to human-made devices and nature-made devices.)

    7) Robert Alicki’s explanation and reference.

    Robert Alicki’s very very short explanation for why QC cannot work:

    “For me QC was a challenge similar to perpetuum mobile in XVIII-XIX century. After 12 years of trials and errors I believe I know the answer : a fundamental conflict between stability of information storage and reversibility of its processing which does not harm classical but kills quantum large scale computing (see arXiv:0901.0811, OSID , vol. 19, no. 03 ).

    BTW, I think also that I deserve Scott Aaronson’s 100 000 USD prize!”

    8)
    Five topics that Scott Aaronson and I agree on
    (The usual warning: we can both be wrong. QM=quantum mechanics, QC=quantum computers, my comment Scott’s reply)
    1) The basic framework to describe our physical reality is the framework of QM. The “operating system” of our physical reality is written in this language.

    2) There are additional layers of physical laws (or specific physical models) that we refer to as “program applications” written in the language of QM. Practically speaking, in some cases those layers require specific (sometimes very complicated) mathematical formulations that we do not always know how to express in terms of QM.

    3) Failure of universal QC requires a profoundly different way to understand QM, again, on the level of the additional layers of physical laws.

    4) Building universal QC (or “just” computationally superior QC) would represent a completely new reality in terms of controlled and observed quantum evolutions, and also a new computational-complexity reality. (But most likely not the ability to solve NP-complete problems.) On this point our agreement is limited.

    5) The idea/model of quantum computers is itself a profound, revolutionary way to look at quantum physics (I completely agree with Ethan about that). The notions of quantum error-correction and quantum fault-tolerance represent additional profound new theoretical physics.

    • John Sidles permalink
      January 10, 2013 10:55 pm

      Gil, here is yet another view regarding the attainability of FTQC and/or BosonSampling (regrettably its links are incompatible with Shtetl Optimized‘s weblog filter).

      —————————–

      Scott Aaronson requests  “Explain where and how the truth is ambiguous, and then maybe we’ll get somewhere! :-)”

      That is a reasonable request!

      Ambiguity [1]  Nature’s quantum state-space is infinite-dimensional and perturbatively divergent, and moreover it is well-established that both attributes impact the noise observed in real devices and also their thermodynamical properties; therefore it is concerning that standard FTQC models have neither trait.

      Disambiguation [1]  the term realistic must be realistically defined, with scrupulous respect for the continuum interactions of quantum field theory.

      ————-

      Ambiguity [2]  All mathematical proofs (that present-day mathematicians accept as valid) are valid iff they are verifiable with polynomial resources; therefore it is concerning that complexity theorists do not similarly impose a polynomial constraint upon simulation verification resources (or even upon verification of membership in complexity classes).

      Disambiguation [2]  the term simulate must be defined with scrupulously realistic respect for polynomial restrictions upon verification resources.

      ————-

      It’s true that disambiguation means more work (or … does … it … ?), and surely we believe the truth will come to us, and hopefully scrupulous attention to verification will speed the day that truth arrives at our door! 🙂

    • Alexander Vlasov permalink
      January 13, 2013 6:57 am

      Dear Gil, thank you for the review. It seems to me that the operating system is not a very good metaphor. We may see in S. Weinberg’s or other books that QFT is the inevitable relativistic form of quantum theory. We do not consider special relativity as some application program running on a theory with action at a distance or something like that. Why should we talk about QFT in such a way? I myself used “QFT” in my comments as a synonym for “relativistic quantum theory”.

      • January 13, 2013 12:34 pm

        Dear Alexander, this metaphor was useful for Scott and me to reach the common ground that I quoted. (I certainly am aware that it is problematic.) By QM we refer to the very wide framework of quantum probability. With this view of what QM refers to, QFT (or relativistic quantum theory) can be seen as a theory which fits in the framework of quantum probability (in the sense that it does not violate the principles of quantum probability), so (in principle, at least) QFT can be written in the language of quantum probability. I think that this is a reasonable “framing” of the situation, especially in the context of the QC debate.

      • Alexander Vlasov permalink
        January 13, 2013 12:52 pm

        Dear Gil, it seems the idea of quantum probability was introduced into the discussion by Greg, but I did not quite realize then what it means in particular. Can it be applied to individual systems/qubits?

      • January 13, 2013 1:20 pm

        Alexander, thinking about QM as a theory of quantum probability is a very pleasant interpretation (especially to a mathematician). Greg prefers this point of view; see A concise introduction to quantum probability, quantum mechanics, and quantum computation (36 pages) by Greg Kuperberg. (Itamar Pitowsky also preferred to think about QM in this way; see this post: Probability in physics, where does it come from?) Both Itamar and Greg influenced my own way of thinking about QM.

  14. John Sidles permalink
    March 1, 2013 4:21 pm

    Gil, perhaps some readers of Gödel’s Lost Letter will be interested in the latest post on your weblog Combinatorics and More, titled Meeting with Aram Harrow, and my Lecture on Why Quantum Computers Cannot Work.

    Thomas Vidick’s fine new quantum-related weblog My CQState has an essay titled A Quantum PCP theorem? that (as it seems to me) nicely complements your MIT talk as follows:

    Postulate 1  To a very good approximation (TAVGA), the experiments we do in laboratories are governed by quantum electrodynamics (QED).
    Postulate 2  It is TAVGA-feasible to conduct experiments whose effective state-space is Hilbert, whose dimension is finite, and whose evolution is unitary.
    Postulate 3  In regard to the experiments of Postulate 2, it is TAVGA-feasible for the number of dimensions to be exponentially large in the number of variables controlled in the experiment.

    Then an emerging (very limited) skeptical model is

    Incompatibility Postulate  For field-theoretic reasons, P1 is not QED/TAVGA-compatible with P2 ∪ P3, at the degree of accuracy that is required for scalable FTQC and/or scalable applications of qPCP postulates.

    This is another way of saying that the UC Berkeley Simons Institute workshop for Spring 2014, titled Quantum Hamiltonian Complexity, surely will be a very lively (and important) workshop, as appreciation increases that (in essence) all scalable quantum information technologies require a deeper understanding than we presently possess of the dynamical and informatic subtleties that are associated to quantum field-theoretic vacuum interactions.

    • March 2, 2013 1:19 pm

      Thanks for linking to my post and the other interesting links and ideas, John. Indeed, a week ago I gave a talk at MIT, with very lovely participation, and met Aram for the first time (and also met quite a few old friends and colleagues and a few new ones.)

      The quantum PCP question that you mentioned is indeed a great question. Its informal description on the Simons site is: “Does the exponential complexity of general quantum systems persist at high temperature?”, which I do not entirely understand (I suppose the exponential complexity refers to the gap between QMA and NP, but I am not sure). This is quite close in spirit to a natural problem from my point of view: “Does the exponential complexity of general quantum systems persist at high temperature, without quantum fault-tolerance?” Here “high” just means bounded away from zero (as in Unruh’s old paper). Of course, a main issue for me is to describe formally “general quantum systems without quantum fault-tolerance.”

      Both in my lecture and in private discussions we came back to my “smoothed Lindblad evolutions.” At the time, Aram and I were a bit too tired to pursue them further here at the end of the debate, but now we may come back to them.
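
      In the meantime, here is a toy single-qubit sketch conveying the flavor of the smoothing idea. (This is only an illustration of my own; the kernel, the channel, and all parameters are arbitrary, and the formal notion of smoothed Lindblad evolutions in my papers is more general.) The effective noise at step t is an average of the pointwise noise at nearby steps s, conjugated by the intended evolution between s and t, so the noise at any moment carries information about the intended evolution over a whole time window.

        import numpy as np

        T, p, window = 20, 0.01, 5
        theta = 0.3
        U = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]], dtype=complex)
        steps = [U] * T                     # the intended evolution, one unitary per step

        def pointwise_noise(rho, p):
            # the "raw" noise acting at a single step: a phase flip with probability p
            Z = np.diag([1.0, -1.0]).astype(complex)
            return (1 - p) * rho + p * Z @ rho @ Z.conj().T

        def prop(s, t):
            # intended unitary propagator from step s to step t (s <= t)
            V = np.eye(2, dtype=complex)
            for k in range(s, t):
                V = steps[k] @ V
            return V

        def smoothed_noise(rho, t, window, p):
            # crude smoothing: average the step-s noise, conjugated to time t
            # by the intended evolution (a stand-in for a smoothing kernel)
            ss = range(max(0, t - window), t + 1)
            out = np.zeros_like(rho)
            for s in ss:
                V = prop(s, t)
                out += V @ pointwise_noise(V.conj().T @ rho @ V, p) @ V.conj().T
            return out / len(ss)

        rho = np.array([[1, 0], [0, 0]], dtype=complex)   # start in |0><0|
        for t in range(T):
            rho = steps[t] @ rho @ steps[t].conj().T      # one intended step
            rho = smoothed_noise(rho, t + 1, window, p)   # followed by the smoothed noise
        print(np.round(rho, 3))

      The only point of the exercise is that the resulting noise depends on the intended unitaries and not just on the hardware of a single step, which is exactly the kind of systematic dependence described in my short explanation above.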

  15. May 29, 2013 6:09 am

    A long and interesting discussion about smoothed Lindblad evolution took place over my blog.

  16. March 21, 2014 3:17 am

    Below are links to a videotaped lecture in two parts entitled “Why quantum computers cannot work,” recorded at the Simons Institute for the Theory of Computing in December 2013, and two additional videos: a short talk on topological quantum computers and a twelve-minute overview of this whole movie project. The lectures draw on my year-long debate with Aram here on GLL, and, of course, on my earlier papers which also motivated the debate itself.

    Here are the links: Overview, Part I, Part II, Topological QC.

    A related blog post, “Why quantum computers cannot work: the movie!”, appeared over at “Combinatorics and More,” and a subsequent post by Steve Flammia, Quantum computers can work in principle, appeared over at The Quantum Pontiff.

    • John Sidles permalink
      March 21, 2014 12:08 pm

      Without regard for differences of opinion regarding the feasibility (or not) of scalable quantum computing, please let me say that the above resources are terrific, and that I admire especially the concluding summary (minute 13:16 of “Why Topological Quantum Computers Cannot Work”)

      “Quantum computers are larger-than-life. Quantum computers may well add to our long list of failures in larger-than-life human quests. Mathematically speaking, that larger-than-life dreams end so often in disappointment demonstrates how large life itself is.”

      Thank you, Gil Kalai, and Aram Harrow too, and all of the many folks who contribute so generously and creatively to quantum-computing discourse.

      And thank you especially, Dick Lipton and Ken Regan, for GLL’s sustained hosting of this wonderful discourse!

  17. August 24, 2014 3:24 am

    Let me mention (in two comments) two topics which we did not discuss in the debate but are potentially quite relevant.  One topic  is the relevance of quantum computation and information to the black hole information paradox. The second topic is BosonSampling.

    The black hole firewall paradox is discussed, e.g., here and here. (There are related exciting issues in the classical mathematical theory of gravitation, and moving to the quantum theory makes the discussion more hypothetical and philosophical, but still very exciting.) There is a recent attempt by Harlow and Hayden to resolve the AMPS firewall paradox using computational complexity. This is quite problematic, since the fact that something is not efficiently computable does not mean that we do not have the information needed for the computation. (Harlow and Hayden mention various explanations around this in their paper, but those are not so convincing.) Since the issue is essentially about information and not about computation, we may try to assume that the computational power (for everybody: Alice, Bob, Charlie…) is unlimited, and only information is limited.

    When we take this approach we need to pay a price: we cannot talk about closed quantum systems. For those, everybody can (in principle) compute everything from the initial conditions, and if we allow this type of computation the paradox (along with many other basic physical insights) goes away. What we can do is talk about open quantum systems, namely to assume, to start with, that we consider a unitary evolution on a large Hilbert space where we have access only to a smaller subspace. (So this is the setting of noisy quantum evolutions.) The small Hilbert space will represent a good chunk of the physical situation outside and inside the black hole.

    I see reasons to hope, based on standard assumptions about quantum noise, that the black hole firewall paradox remains equally powerful (even with stronger informational foundations) for open quantum systems as well. (For example, we can send many Alices rather than one to overcome the noise, and we can take quantum computation for granted.) Maybe this is already known.

    But perhaps, conditioned on a no-quantum-fault-tolerance principle, we have a chance to resolve the paradox. In particular, the hypothesis of positive correlations for information leaks from entangled subsystems seems relevant. If so, then in turn we may gain insight into the quantitative aspects of this hypothesis. Note that my no-quantum-fault-tolerance conjectures are of an information-theoretic nature and not of a computational-complexity nature.

    In a series of recent papers (arXiv:1306.0533, arXiv:1402.5674, arXiv:1403.5695), Leonard Susskind (and collaborators) has studied the paradox and posits that it goes away under an appropriate understanding of the ER=EPR principle of Maldacena and Susskind. A crucial aspect of the argument is that Alice does not have “sufficient quantum-computational power,” but rather exhibits “ordinary interaction with the environment.” Thus a crucial ingredient of Susskind’s endeavor is to find a mathematical distinction between “ordinary interaction with the environment” and “quantum computational power.” Such a mathematical distinction (if possible at all) is, of course, at the heart of my debate with Aram. But instead of making a distinction that applies just to poor Alice, perhaps the simplest principle to consider is simply that “quantum computational power” beyond classical computation is universally impossible.

  18. August 24, 2014 3:34 am

    BosonSampling is a proposal by Scott Aaronson and Alex Arkhipov to demonstrate “quantum supremacy” without quantum fault-tolerance. There are several experimental avenues towards this goal with fruits expected in the next few years.

    I truly like BosonSampling as a theoretical idea, and I even devoted, back in 2010, a post on my blog to discussing it. However, I did not think that BosonSampling would exhibit “quantum supremacy” without quantum fault-tolerance. In fact, I thought, until Scott explained to me otherwise, that this was the common opinion. This perhaps explains why we almost did not mention BosonSampling in our debate. (John Sidles mentioned it in this thread.) Recently, Guy Kindler and I wrote a paper about Noise sensitivity and BosonSampling. The paper demonstrates that for viable BosonSampling with n bosons the level of noise should be below 1/n per boson, and demystifies (above this level of noise) the claims of “quantum supremacy” via BosonSampling. As we learned, this 1/n threshold is consistent with some researchers’ expectations, and there are strong beliefs that the noise level can be pushed below 1/n per boson for 20-30 bosons. So we need to wait and see.

    The work by Guy Kindler and me relies on a theory of noise sensitivity that was developed in the late 1990s by Itai Benjamini, Oded Schramm, and myself. BosonSampling and noise sensitivity are briefly mentioned in the second part of my videotaped lecture on quantum computers. (Click here to jump to the relevant place.) A pleasant more recent feature of my work with Guy is the ability to make concrete computations via Fourier-Hermite-like expansions.
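
    To convey the crude arithmetic behind the 1/n threshold, here is a toy illustration. (It is only the damping heuristic, not the actual Fourier-Hermite computation in our paper, and the numbers are made up.) Under noise rate eps per boson the degree-k part of the relevant expansion is damped roughly like (1-eps)^k, and it is the top degree k = n that carries the permanent-like correlations responsible for “quantum supremacy.”

        # damping of the top-degree (k = n) contribution under per-boson noise eps
        for n in (10, 20, 30, 50):
            for eps in (0.1, 1.0 / n, 0.1 / n):
                print(f"n={n:3d}  eps={eps:.4f}  (1 - eps)**n = {(1 - eps) ** n:.3f}")

    For constant eps the top-degree contribution is exponentially small in n, while for eps well below 1/n it stays close to 1; this is the sense in which the noise level needs to be pushed below 1/n per boson.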

  19. December 24, 2014 5:06 pm

    A paper by Greg Kuperberg and me, Contagious error sources would need time travel to prevent quantum computation, gives a result in the positive direction! It shows that fault-tolerant quantum computing based on teleportation can work against a large class of correlated noise.

    Here is the arXiv version of my paper with Guy Kindler on noise sensitivity and BosonSampling (mentioned above) and here is a videotaped presentation.

    • December 24, 2014 5:09 pm

      I forgot to mention that Greg and I heavily rely on a method developed by Manny Knill.

  20. February 11, 2015 12:20 pm

    Last month we had here at HUJI a workshop, sponsored by the American Physical Society and our Racah Institute of Physics, entitled “Quantum computers: achievable reality or unrealistic dream?”, featuring me and Professor Nadav Katz. My lecture was entitled “What can we learn from a failure of quantum computers” and Nadav’s lecture was on “Quantum information science – the state of the art.” Here is a blog post about it and here is a video of the event:

    https://www.youtube.com/watch?v=oIYs_hL8mQw&feature=youtu.be

  21. February 13, 2015 12:35 pm

    Thank you, Gil, for posting links to your workshop “Quantum computers: achievable reality or unrealistic dream?” … I’ll review it just as soon as The Quantum Pontiffs finish their survey of another fine workshop: “QIP 2015”.

    Please let me note too that Scott Aaronson’s Shtetl Optimized has just posted an essay “Memrefuting” that critiques — not very sympathetically — recent exceedingly interesting (to me anyway) computation/simulation papers like “A coherent Ising machine with quantum measurement and feedback control” (arXiv:1501.07030 [quant-ph]) and “Memcomputing NP-complete problems in polynomial time using polynomial resources and collective states” (arXiv:1411.4798 [cs.ET]).

    It’s challenging to write sympathetically and seriously about radical quantum ideas … invariably the flaws are more evident than the virtues, and so the elements of comedy are present. The following appreciation attempts both comedy and sympathy, in assessing these works.

    ——————

    Fans of the Monty Python sketch The Annoying Peasant will appreciate the eerie echoes of its themes that are evident in this week’s Shtetl Optimized essay “Memrefuting”:

    A quantum computing mythos

    “Quantum computing forces skeptics to handwave about present-day practicalities, while the proponents wield the sharp steel of accepted physical law!”

    — Scott Aaronson’s “Memrefuting” essay,
       after Dirac’s Quantum Mechanics, (1930)

    An Arthurian mythos

    “The Lady of the Lake [angels sing], her arm clad in the purest shimmering samite, held aloft Excalibur from the bosom of the water signifying by Divine Providence that I, Arthur, was to carry Excalibur [singing stops]. That is why I am your king!”

    — Monty Python’s “Annoying peasant” sketch,
       after Tennyson’s Morte D’Arthur (circa 1835)

    Resolved for debate  The scientific literature provides abundant “annoying peasant” arguments that effectively refute the “Arthurian” quantum computing mythos.

    For the affirmative  Wikipedia’s article “quantum superposition” reminds us that

    “ordinary everyday ‘real’ (macroscopic, Newtonian) objects and events do not seem empirically to display quantum mechanical features such as superposition.”

    There is at present no strong scientific consensus regarding the origins of quantum nonsuperposition, and perhaps this is why the phrase “physical law” does not appear anywhere in Wikipedia’s account.

    A regrettable lack  On Physics StackExchange the highest-rated question (by far) regarding quantum superposition issues is Andrea Amoretti’s “Linearity of quantum mechanics and nonlinearity of macroscopic physics” (2010). Regrettably (and surprisingly to me), Amoretti’s question has attracted no high-quality answers to date (and it is specially moderated for this reason).

    Here is Amoretti’s Physics StackExchange question in its entirety:

    Amoretti’s Question  “We live in a world where almost all macroscopic physical phenomena are non-linear, while the description of microscopic phenomena is based on quantum mechanics which is linear by definition. What are the physics points of connection between the two descriptions?”

    Whence this lamentable shortage of good answers to Amoretti’s Question? The world wonders!

    The question is entirely reasonable and natural (as it seems to me): any student might ask it … and perhaps most do.

    Surveying the literature  When we seek good answers to Amoretti’s Question in the literature, we are led to:

    An annoying peasant’s literature survey 

    Observation One The answers to Amoretti’s Question that are strong aren’t linear.

    Observation Two The answers to Amoretti’s Question that are linear aren’t strong.

    Here “linear” is a shorthand for “dynamical flows that are isometries of large-dimension complex Hilbert state-spaces”, and “strong” is a shorthand for “mathematically natural and physically verified”.

    A considerable literature that supports Observation One — and it is supporting plenty of prospering 21st century academic and commercial enterprises too! — traces back twenty-six years, to Asher Peres’ critique “Nonlinear variants of Schrodinger’s equation violate the second law of thermodynamics” (PRL, 1989), to which Steven Weinberg’s concluding response (in “Weinberg replies”, PRL, 1989) has proved to be prescient (as I read it anyway)

    “It seems to me that the interesting question raised by Peres’ Comment is not whether the entropy can decrease in generalized versions of quantum mechanics, but how in such theories to generalize the quantum-mechanical definition of entropy.”

    Observation  An expanding universe of 21st century quantum dynamical simulators has in practice succeeded in “generalizing the quantum definition of entropy” (in Weinberg’s phrase). In this regard the free-as-in-freedom manuals and on-line videos of the Schroedinger Corporation are commended to Shtetl Optimized readers, as presenting practical recipes for predicting a vast array of entropy-associated parameters of complex quantum dynamical systems (free energies in particular).

    Where’s the mathematical beef?  Of the 21st century’s fourteen Fields Medal winners (to date), the work of at least eight — Terence Tao and Wendelin Werner (in 2006), Elon Lindenstrauss, Stanislav Smirnov and Cedric Villani (in 2010), and Artur Avila, Martin Hairer, and Maryam Mirzakhani (in 2014) — present quantum researchers (younger ones especially) with fresh mathematical universes that help in appreciating and extending the burgeoning capabilities of 21st century post-Hilbert quantum dynamics.

    A working answer to Amoretti’s Question  Practical experience teaches that macroscopic observations of a classical and thermodynamical world emerge as the mathematically natural unravelling of dynamical trajectories, as the integral curves of symplectomorphic flows, upon microscopic state-spaces that effectively are low-dimension algebraic varieties.

    Summary of the affirmative argument  Every quantum experiment ever conducted — including experiments regarded as precision tests of the linearity of quantum mechanics — can be accurately simulated with classical resources by dynamical models that can be mathematically appreciated as the natural pullback of the 20th century’s Hilbert dynamics onto the 21st century’s non-Hilbert algebraic state-spaces.

    Practical realities  These transformational capabilities are already being realized in academic and commercial enterprises that are the 21st century’s best-hope (as it seems to me) of creating family-supporting jobs in the abundance that the next generation of quantum researchers will require.

    The Annoying Peasant’s last word  In Pythonesque language, the best hopes of young researchers for prosperous family-supporting careers are grounded in 21st century notions of universality, naturality, physicality, practicality, and responsibility … not some farcically flat 20th century state-space of mythically many dimensions!

Trackbacks

  1. What’s Blue and White and Red All Over? « Gödel’s Lost Letter and P=NP
  2. The Quantum Debate is Over! (and other Updates) | Combinatorics and more
  3. Thanks For Sharing « Gödel’s Lost Letter and P=NP
  4. Meeting with Aram Harrow, and my Lecture on Why Quantum Computers Cannot Work. | Combinatorics and more
  5. My Quantum Debate with Aram III | Combinatorics and more
  6. Interstellar Quantum Computation | Gödel's Lost Letter and P=NP
  7. Sex, Lies, And Quantum Computers | Gödel's Lost Letter and P=NP
  8. Quantum Supremacy and Complexity | Gödel's Lost Letter and P=NP
  9. Predictions For 2019 | Gödel's Lost Letter and P=NP
  10. Quantum Switch-Em | Gödel's Lost Letter and P=NP
  11. Gil’s Collegial Quantum Supremacy Skepticism FAQ | Combinatorics and more
  12. IBM Conference on the Informational Lens | Gödel's Lost Letter and P=NP
