
Nature Does Not Conspire

February 15, 2012


Second response by Aram Harrow

Albert Einstein was a great observer of science, as well as a doer of science. Most of his quotations as well as theories have proved their staying power over the past century. We, Dick and Ken, feel that some of them are better when understood narrowly within science rather than taken broadly as they usually are.

Today our guest poster Aram Harrow opens his second response to Gil Kalai’s conjectures of impediments to quantum computation by applying one of Einstein’s quotations in such a focused manner.

The quotation, and Einstein’s own clarification of it, are conveyed in this article on him and the mathematician Oswald Veblen, regarding the latter’s request in 1930 to inscribe the quotation on a plaque in the new Princeton mathematics building:

Einstein consented to the inscription of the phrase: “God is subtle, but he is not malicious,” even though he wrote to Veblen that he had meant to say “Nature conceals her mystery by means of her essential grandeur not by her cunning.”

The narrow meaning is that in physics and other natural sciences, solutions may be hard to find but they will not evade theory in perverse ways. Even in cases like the {N}-body problem where closed-form solutions are unavailable and behavior may become unstable, the underlying elements are smooth enough that one’s expectations will not be wantonly deceived. This holds especially for a classical geometric theory like relativity—whether Einstein thought this of quantum mechanics we don’t know.

By contrast, those who work in number theory must cope with having expectations deceived all the time. For instance, the Mertens conjecture is true for the first zillions of cases but known to be false, though a concrete counterexample is yet to be constructed. (See also this about expectations.) Nastiness may be as Platonic and existential as niceness—in math. If Max Tegmark is right that “all structures that exist mathematically also exist physically,” then Nature must get arbitrarily nasty somewhere. So Einstein and Tegmark cannot both be right, though we could live in a nice “pocket” of Tegmark’s cosmos.

Anyway Gil and Aram are talking about our world, and we give the matter back to Aram. This is Part 2 of his three-part response, and addresses Gil’s contention that quantum errors are subject to correlations that defeat the independence assumptions of fault-tolerant quantum computation (FTQC). There will be a Part 3 by Aram and then (at least) a longer rebuttal by Gil.

Second Response by Aram Harrow: Nature is Subtle, But Not Malicious

The possibility of a high rate of correlated errors seems reasonable in many ways. Claire Mathieu points out that independence of random events is the sort of thing that often gets blithely assumed, when actually it’s a major assumption. Independence is attractive because it lets us extrapolate from easily observable probabilities to infinitesimally small probabilities, but for this reason it is also dangerous.

Analyzing FTQC is more complicated than analyzing error correction, but the big picture is similar. If an error-correcting code on {n} qubits corrects {k} errors, then independent error rate {p} will be transformed into an error rate that looks like {\binom{n}{k+1} p^{k+1}}. If this is an improvement, then we can iterate this process via concatenation and drive the effective error rate down. In all cases, our goal is to apply Chernoff-type bounds that guarantee an overall low error rate.
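
To make the scaling concrete, here is a small Python sketch (an illustration with assumed parameters, {n=7} and {k=1}, roughly a distance-3 code, and an assumed starting rate) that simply iterates the map {p \mapsto \binom{n}{k+1} p^{k+1}}; below the break-even point each level of concatenation roughly squares the suppression.

```python
from math import comb

def concatenated_rates(p, n=7, k=1, levels=5):
    """Iterate the effective logical error rate p -> C(n, k+1) * p**(k+1).

    n and k are illustrative code parameters (a distance-3 code on n = 7
    qubits correcting k = 1 error); the recursion is the naive model of
    concatenation and ignores the constant-factor details of real FTQC.
    """
    rates = [p]
    for _ in range(levels):
        p = comb(n, k + 1) * p ** (k + 1)
        rates.append(p)
    return rates

if __name__ == "__main__":
    # Below the break-even point p* with C(7,2) * p***2 = p*, i.e. p* = 1/21,
    # each extra level of concatenation roughly squares the suppression.
    for level, rate in enumerate(concatenated_rates(1e-3)):
        print(f"level {level}: effective error rate ~ {rate:.3e}")
```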

But what if errors are not independent? Then the answer still depends on how likely we are to encounter clusters of {k+1} simultaneous errors. Gil points out (e.g. Lemma 1 of this 2008 arXiv paper) that if errors are pairwise correlated, then the probability of {k+1} errors can be significantly higher. But Chernoff-type bounds still hold. Indeed mere pairwise, or {O(1)}-wise, correlation of errors is not enough to derail FTQC. Gil mentions that one of his models of correlations corresponds to the Curie-Weiss model, which describes classical magnets. But here too, the chance of {k} errors decays exponentially with {k} (see Lemma 1 of these notes).
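
As a rough numerical check of that last claim, the sketch below enumerates a Curie-Weiss-style distribution over error patterns; the coupling {J} and field {h} are arbitrary illustrative values chosen so that the per-bit error rate is around one percent and the pairwise correlations are weakly positive, and for such weak coupling the chance of {k} simultaneous errors still falls off roughly geometrically in {k}.

```python
import math

def curie_weiss_error_distribution(n=30, J=0.3, h=-2.0):
    """Distribution of the number of errors k under a Curie-Weiss weight.

    Spins s_i = +1 (error) or -1 (no error), with P(s) proportional to
    exp( (J / (2n)) * (sum_i s_i)**2 + h * sum_i s_i ).  The weight only
    depends on k = number of errors, so we sum over k with binomial
    multiplicities instead of over all 2**n patterns.  J and h are
    illustrative: h < 0 keeps the per-bit error rate small, and J > 0
    adds weak positive pairwise correlation between errors.
    """
    logw = []
    for k in range(n + 1):
        m = 2 * k - n                       # total magnetization
        logw.append(math.log(math.comb(n, k)) + (J / (2 * n)) * m**2 + h * m)
    top = max(logw)
    probs = [math.exp(w - top) for w in logw]
    z = sum(probs)
    return [p / z for p in probs]

if __name__ == "__main__":
    probs = curie_weiss_error_distribution()
    for k in range(6):
        # For this weak coupling the tail still falls off geometrically in k.
        print(f"P(exactly {k} errors) ~ {probs[k]:.2e}")
```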

Fundamentally this is because the basic interactions in physics involve two or three particles, for instance when a photon bounces off an electron. Here is the standard model describing all known interactions. To get interactions with more particles, you need to go to higher levels of perturbation theory, but as you do so, amplitudes go down exponentially. This is also what we observe in experiments, when we entangle 7 atomic nuclei in NMR or 14 electronic states in trapped ions. Of course, one can imagine dangerously correlated noise models. One such model is that with probability {1-p} nothing happens, and with probability {p}, the user trips over the power cord, and all bits simultaneously fail. But are these more likely in quantum computers? Let’s look at the two ways that Gil claims these are more likely.

Does Entanglement Cause Correlated Errors?

Quantum computing (probably) requires producing large entangled states to be interesting. Gil suggests that this entanglement may be self-limiting, by increasing the rate at which correlated errors occur. This is one of the routes he proposes for generating highly correlated errors in quantum computers.

The key reason I regard this as unlikely is that quantum mechanics is linear, in the same way that stochastic maps act linearly on probability distributions. This means that there is no physical process that can distinguish an arbitrary entangled state from an arbitrary non-entangled state, just as no test can determine whether a probability distribution is correlated, given only a single sample. Specific states can be distinguished, but there is no “entanglement” or “correlation” observable that can be measured. In particular, when “noise” is modeled in the usual manner as a trace-preserving completely-positive linear map, then linearity forbids noise depending on whether the host system is entangled.

More generally, error processes are defined in terms of Kraus operators that do not depend on the state being acted on. The way this works is that the state is a vector, and the Kraus operators are matrices that act on this vector. Each matrix-vector product is a possible outcome, with probability given by its squared length.

For example, suppose our qubit is stored in the electronic state of a trapped ion. Spontaneous emission is a type of error with two Kraus operators: one that sends the excited state to the ground state, and one that merely reduces the amplitude of the excited state. These can be represented by 2×2 matrices. These act as a physical process on any state, entangled or otherwise. For the Kraus operators to change as a function of entanglement, quantum mechanics would have to become nonlinear, which would mean utter catastrophe, comparable to what would result from being able to act nonlinearly on the probability distribution of the universe.
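
Here is a minimal numpy sketch of exactly this point (the decay probability {\gamma} is an arbitrary illustrative value): the two Kraus operators of spontaneous emission are fixed 2×2 matrices, and the very same matrices are applied whether the qubit is alone or is half of an entangled pair.

```python
import numpy as np

gamma = 0.1  # assumed decay probability per time step (illustrative only)

# Two Kraus operators for spontaneous emission (amplitude damping):
# K1 sends the excited state |1> to the ground state |0>,
# K0 merely reduces the amplitude of the excited state.
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)

def apply_channel(rho, kraus):
    """rho -> sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

# Trace preservation: sum_i K_i^dagger K_i = identity.
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

# Act on a single excited qubit ...
rho_excited = np.diag([0.0, 1.0]).astype(complex)
print(apply_channel(rho_excited, [K0, K1]))

# ... and on one half of a maximally entangled pair.  The Kraus operators
# do not change; they are merely tensored with the identity on the qubit
# that the noise does not touch.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_bell = np.outer(bell, bell.conj())
kraus_on_first = [np.kron(K, np.eye(2)) for K in (K0, K1)]
rho_out = apply_channel(rho_bell, kraus_on_first)
print(np.trace(rho_out).real)   # still 1: same fixed channel, entangled input
```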

How Classical Synchrony Emerges Linearly

But don’t we observe synchronization in real-world systems? Gil gives an example of metronomes: if you have a bunch of mechanical metronomes sitting on a table, then the weak coupling they experience via the common tabletop will cause them to eventually synchronize. There are also quantum analogues in which spin qubits interact with a common bath of bosonic modes. But for the metronomes, what we are really observing is the result of pairwise interactions together with dissipation.

In fact, the process is similar to the error-correction procedure for a repetition code. The differential equation has the form

\displaystyle  \ddot x_i = -\omega^2 x_i + \lambda \sum_j (x_j-x_i) - \epsilon \dot x_i.

These dynamics will synchronize the metronomes, and thus will generate correlations. But this process has nothing to do with the question of whether the metronomes were correlated in the first place. No physical process will make the {\lambda} term larger if the state is correlated/entangled.

Moreover, these types of correlations can be handled by FTQC. What it means for the error rate to be, say, {10^{-3}}, is that our control is 1000 times faster/stronger than the error processes. So even if the system would synchronize if left alone, sufficiently fast control can still prevent this by acting before multi-qubit correlations can build up. This argument is developed in detail in a paper by Aharonov, Kitaev and Preskill.

One subtlety concerns the semi-classical approximation. For example, suppose we apply a laser to the trapped ion. If it is the right frequency, the laser will cause X rotations. This can be viewed as applying a controllable Kraus operator. But now the Kraus operator still depends on the state of something; namely, the classical system controlling the laser. This appears nonlinear, but really the true Kraus operators involve two-body interactions between the ion and individual photons in the laser. Each such interaction is very weak, and when we add up the many photons comprising the laser beam, we get what looks like a Kraus operator acting only on the ion, but with parameters that depend on the settings of the laser. There is fundamentally no nonlinearity, and more important, nothing that is even approximately nonlinear in the state of the ion, which is what is relevant to the quantum computer.

Does Computation Cause Correlated Errors?

Another route to correlated errors is by piggy-backing on our computation. For example, suppose that we control our ion-trap quantum computer by blasting it with lasers. The lasers are macroscopic objects, and if there were other ions, or systems that behaved like ions, lurking near the ions we use for qubits, then these would be similarly affected by the lasers. If we were unlucky, these “shadow qubits” might interact with our computational qubits in ways that caused errors, and now these errors would exhibit complicated correlation patterns that would depend on the history of laser pulses we have used. Thus, even though there is no direct way for errors to depend on whether our states are entangled or not, errors could depend on shadow qubits, whose entanglement/correlation could be produced at the same rate as entanglement/correlation in our quantum computer.

This type of correlated noise is possible, and is indeed devastating for some types of error-correction.

But general situations are better behaved. For ion traps, we really have a single ion floating in space, with other ions far away. There are other quantum systems besides the electron, such as the nucleus, or the electrons in the inner shell, or modes of the electromagnetic field. They all look different from the qubits we are using for the computation, and they respond to different frequencies, have different interactions, and so on. The issue is whether these ambient factors can induce correlated errors into the computation qubits.

Shadow Errors Cannot Set the Pace

The errors may be correlated among themselves in various malicious ways. For example, if we move the ion to bring it closer to another ion, then the nucleus (whose spin could be called a shadow qubit) moves right along with the electron that we are using for our computation. But the point is that these effects can be brought within the modeling, and thus dealt with.

In fact, experimental physicists have spent a long time studying decoherence, and a large amount of it is due to what you could call shadow qubits. In many processes, it’s understood how this works, and often there are known methods of reducing decoherence with pulses that decouple shadow and computational qubits. So shadow qubits are something we need to be aware of, but in practice, when people look at implementations, they usually are aware of them.

Could shadow qubits generate the kinds of highly correlated errors that are dangerous for quantum computing? It seems unlikely. In general, they may cause extra two-qubit errors, but for them to track the computation precisely enough to cause multi-qubit errors, their task would be as difficult as that of the computation itself. This may be true of the stock market or international espionage, but atoms and photons are not intelligent or malicious enough to defeat us in this way.

In fact, nobody knows how to even design a hypothetical Hamiltonian for an environment in which single-qubit error rates are below, say, {10^{-6}}, but correlated decoherence dooms fault-tolerant quantum computing. A weaker goal would be to design such a Hamiltonian in which FTQC may be possible, but our proofs fail; see Preskill’s comment for thoughts along these lines. One reason for this may be that it is hard to know what a computer is actually doing.

Finally, to bring in a point of my first response, shadow qubits are already present in classical computers. For one instance, 0s and 1s produce different heating patterns in classical computer chips. This heat, in turn, might cause errors in the circuitry. But for the heat to cause bit flips in just the right way to overcome software error correction would require an incredibly unlikely conspiracy of events—a conspiracy that doesn’t have a plausible mechanism, and that isn’t observed in practice.

Brief Rejoinder by Gil Kalai on Correlated Errors and Entanglement

Correlated errors are not caused by entanglement but by the entire computing process, which also leads to the entanglement. Just like in Aram’s example with lasers, when we isolate all other ingredients and examine how the error operation relates to the entanglement of the qubits, the relation is nonlinear, but there is nothing about it that violates QM linearity.

Let me propose a thought experiment. It is so imaginary that I ask you to close your eyes.

Imagine a quantum mechanical world where in order to prepare or emulate a quantum state you need to create a special machine. Each state requires a different kind of machine. Of course, there are some quantum states you can design but cannot prepare at all, and for others it may take decades to build the machine.

The states you build are not perfect. Compared to the ideal pure state you intended to build they are noisy. The noise depends on the machine. And now since the machine is specific to the state that you want to build, there is some systematic relation between the state and the noise. You would like to understand and find the laws for this relation. There is no reason to believe that the noise will be the same for all prepared states (or “linear” as people refer to such independence). Of course, it is not the state that causes the noise. It is the common properties of all the machines needed to create this specific state that lead to systematic relations between the state and the noise.

And now open your eyes. This is the world we live in.

Possible relations between the created state and the noise are part of our reality. When we build universal quantum computers we will change this reality, but our attempts to build them should be based on this reality. My conjectures attempt to draw such a systematic relation between states and noise.

Updates From Comments

The comments sections of Gil’s post and Aram’s first response have engaged the above points in more technical detail. Here is a digest:

  • Joe Fitzsimons noted that particle interactions are known in great detail, and those found in nature obey a restriction on their Hamiltonians called 2-locality, so as to limit the scope for correlated noise. Kalai agreed but noted that the latter applies only to cases where the states along the evolution themselves obey 2-locality.

  • Commenter matt pointed out that Kalai’s conjecture 1 would be vacuously true if it concerned the problem of encoding an initially unprotected single qubit, as opposed to a full FTQC scheme with data qubits initialized, operated on, and measured always from within the protection of an error-correction code. Gil clarified that his conjecture states that even with fault-tolerant initialization schemes, the rate of logical errors will remain above some universal constant.

  • Back in the first post’s comments, Robert Alicki gave a detailed argument on how the environment could maliciously interfere with a computation. This included remarks on Kraus operators that Aram acknowledged but that do not seem to refute the defense given above. The key point going further than Aram’s sections above on shadow effects is that the environment can add friction to the system’s dynamics. This confers stability on classical systems, at the cost of heat dissipation and irreversibility, but quantum systems cannot afford to pay this piper. Aram, however, replied that quantum error correction works as friction in this beneficial manner while avoiding the problems.

    Further discussion in this long comment thread led John Preskill to adjudicate some of the contentions based on mathematical modeling. This shows that cooling considerations do not single out difficulties for quantum over classical computation, and that some of Alicki’s objections fail to apply to non-Markov noise models. He then referenced his first comment with a 6-page paper giving a less-restrictive Hamiltonian noise model under which a threshold theorem conditionally supporting FTQC can be proven. He allowed it may not be realistic, but it contradicts the objections insofar as they apply to it. The same was observed for a case where a long computation could drive the environmental “bath” into a highly adversarial state.

  • In reply to Preskill’s first comment, Cristopher Moore allowed that “issues of correlated noise are subtle and important,” but called out Kalai’s conjectures as lacking sufficient motivation or detail to see why the errors should conspire against QC’s, and as failing to define how noise is treated as separate from the computation’s physical process when it is really part of that process.

  • Meanwhile from the ringside, Boaz Barak asked for an update on what is currently obstructing progress on building QC’s. This drew detailed replies by Joe and Aram.

  • Gil showed how the “roadmap” has expanded as a result of these discussions, and tried to answer Aram’s open problem about constructing even a single physical system where (some of) his conjectures hold. Although the initial attempt was headed off by Matt, this led to interesting further exchanges about possible concrete physical tests in that thread, and a separate discussion beginning here.

Open Problems

To you, dear reader, what appears to be the core of the dispute? Is there enough specific mathematical modeling of how errors could correlate in harmful ways? Or do the high-level arguments by Aram and others suffice to reason that quantum computations themselves—in at least some reasonable systems supporting them—can out-pace any special error-causing effects, so that the standard fault-tolerance conditions, which assume weak dependence among errors, confer enough protection?

120 Comments
  1. February 15, 2012 3:41 pm

    I am trying to understand Gil’s rejoinder, but I find it a bit difficult to read the description of his thought experiment with my eyes closed 🙂

    • February 15, 2012 5:03 pm

      OK—I guess you’re commenter “matt”. Thanks for the comments!

  2. February 15, 2012 11:23 pm

    Dear Aram,

    I don’t understand this part of your argument:

    “Gil points out (e.g. Lemma 1 of this 2008 arXiv paper) that if errors are pairwise correlated, then the probability of errors can be significantly higher. But Chernoff-type bounds still hold. Indeed mere pairwise, or O(1)-wise, correlation of errors is not enough to derail FTQC.”

    • February 16, 2012 12:29 am

      Yeah, I realize this point is a little unclear, since pairwise correlations are just one property of distributions among many.

      Here’s another way of saying my point. What kills error-correction is not the failure of independence, but the failure of the Chernoff bound. The exception is that if you have predictably correlated errors, those are especially easy to correct. But in general, you need the amplitude of k simultaneous errors to scale like alpha^k for alpha<1.
      What I meant about pairwise not being bad is that it's ok if that scaling doesn't kick in until k is larger than some constant. FTQC can still work then, but with larger overheads.

    • February 16, 2012 5:29 am

      My claim is the following. Suppose you have n qubits. Suppose you know that the probability for every qubit to be faulty is small, say 1/10,000. (We both agree that if the errors are independent then the probability that more than n/5000 bits will fail at once is exponentially small with n.)

      Now suppose that you know that the pairwise correlation between qubit i being faulty and qubit j being faulty is >0.4 for every i and j. So you have information only about pairwise correlation. I think that this suffices to imply that there will be error synchronization. Do you disagree?

      • February 16, 2012 7:17 am

        “Now suppose that you know that the pairwise correlation between qubit i being faulty and qubit j being faulty is >0.4 for every i and j. So you have information only about pairwise correlation. I think that this suffices to imply that there will be error synchronization.”

        Gil, I don’t think this is enough information to be very meaningful. A simple example of a noise which satisfies your description is that with probability 1/10,000 we flip all qubits simultaneously. This gives completely correlated errors. However it is very easy to correct this particular noise.
        As Aram has already pointed out, correlated errors are easy to correct if the noise structure is known.

        This area is not one in which I would consider myself an expert but it seems to me that a much more specific model for the proposed noise is needed in order to have a meaningful discussion. But of course part of the discussion aims to show that there cannot be noise of the conjectured type.

        One question which looks relevant is whether or not you think that using a quantum error-correcting code to transfer completely classical states will be impossible. This is a question which is only about the code, its preparation, and the final decoding, and does not involve any additional quantum algorithms.

      • aram permalink
        February 16, 2012 2:05 pm

        Well, certainly if you make the correlations large enough then you eventually get the “trip over the power cord” model, in which all bits fail simultaneously with a non-negligible probability.

        Maybe the best way to make this concrete is with the Curie-Weiss model, or something else that produces distributions of the form P(x_1, .., x_n) = e^{-f(x)} for f a low-degree polynomial of the bits. These definitely exhibit phase transitions.

        But for sufficiently low constant values of the coefficients of these polynomials, things would still be ok for a fault-tolerant quantum computer.

        Also, as for interactions, I don’t think it’s reasonable to assume that the interaction graph is a complete graph. Realistically, it should be something that can be embedded in two or three dimensions. (Yes, there is a long-range Coulomb interaction, but this can still be modeled by local interactions between electrons and photons.)

      • February 16, 2012 8:48 pm

        Dear Aram, good, it looks like we are in some agreement on this technical issue. (Of course, the readers should be warned that even if we agree we can both be wrong.) Let me say how I see it:

        Suppose that you can represent the distribution of errors by a distribution of 0-1 vectors of length n for the n qubits. 0 means no error and 1 means error. (Indeed, when you know the noise operation you can represent the errors in this way based on expansions via tensor products of Pauli operators.)

        If the probability for every bit to be 1 is 1/1000 (say) and you have independence, then the probability that 1/500 (say) or more bits have errors is exponentially small with n.

        If the probability for every bit to be 1 is again 1/1000 but the pairwise correlations between the events x_i=1 and x_j=1 are high, say above 0.4, then the probability that even 1/50 (say) of the bits will have errors is already substantial. (And does not go to 0 with n.) So we do witness error synchronization.

        (Actually this will go on with more than 2 bits. If you have pairwise independence (or almost independence) then dependencies for triples may lead to error synchronization, and so on.)

        It would be interesting to find a good mathematical description for these types of observations.

        In any case large pairwise positive correlations do imply error synchronization.
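
        One concrete noise model behind this kind of arithmetic (a sketch with illustrative, made-up parameters, not a model of any particular device): with some small probability q a “bad round” occurs and every bit fails independently with probability a; otherwise nothing fails. Choosing q so that the per-bit rate stays at 1/1000 while a is about 0.45 gives pairwise correlations near 0.45 and a probability of a 2% fraction of simultaneous errors that does not go to zero with n, as the Python sketch below computes.

```python
from math import comb

def mixture_noise_stats(n=1000, a=0.45, p_bit=1e-3):
    """Two-state mixture model (illustrative, hypothetical parameters):

    - with probability q a 'bad round' occurs and every one of the n bits
      fails independently with probability a;
    - otherwise no bit fails.

    q is chosen so the per-bit error rate is p_bit = q * a, keeping the
    marginal rate at 1/1000 while forcing a pairwise correlation near a.
    """
    q = p_bit / a
    p_both = q * a * a                          # P(bits i and j both fail)
    corr = (p_both - p_bit**2) / (p_bit * (1 - p_bit))
    # P(at least n/50 bits fail) = q * P(Binomial(n, a) >= n/50); the
    # binomial factor is essentially 1 because a is much larger than 1/50.
    k_min = n // 50
    tail_given_bad = sum(comb(n, k) * a**k * (1 - a)**(n - k)
                         for k in range(k_min, n + 1))
    return corr, q * tail_given_bad

if __name__ == "__main__":
    corr, p_sync = mixture_noise_stats()
    print(f"pairwise correlation   ~ {corr:.3f}")    # about 0.45
    print(f"P(>= 2% of bits fail)  ~ {p_sync:.1e}")  # about 2e-3, not -> 0 with n
```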

        On a related matter you wrote: “Also, as for interactions, I don’t think it’s reasonable to assume that the interaction graph is a complete graph. Realistically, it should be something that can be embedded in two or three dimensions.”

        Well, this is indeed an important point. My conjecture 3 says that when a pair of qubits are entangled (or even have a large “emergent entanglement”) then the probability of errors between them will be substantially positively correlated. So assuming that this conjecture holds, the relevant “graph of interaction” will be complete for most pairs of qubits for interesting states that we use for quantum computation and quantum error correction. (And again, Conjecture 3, like all its sisters, does not depend on the geometry of the computer.)

        You say that you regard Conjecture 3 as unrealistic and this is perfectly fine. If I understand you correctly, the relation between the correlations between errors and the pair of qubits being entangled is precisely the thing that you do not regard as realistic.

      • February 17, 2012 3:14 pm

        Dear Klas, you wrote: “I don’t think this is enough information to be very meaningful. A simple example of a noise which satisfies your description is that with probability 1/10,000 we flip all qubits simultaneously. This gives completely correlated errors. However it is very easy to correct this particular noise.”
        The type of noise where a large fraction of all qubits will be hit simultaneously is sufficient to fail fault tolerance schemes. I don’t think it is easy to correct this type of noise.

      • February 17, 2012 5:31 pm

        Dear Gil, “The type of noise where a large fraction of all qubits will be hit simultaneously is sufficient to fail fault tolerance schemes. I don’t think it is easy to correct this type of noise.”

        This depends strongly on the type of noise considered. If the noise is highly structured it is quite easy to modify a code to take that structure into account. The basic observation is that saying that “a large fraction of all qubits will be hit simultaneously” is not sufficient. In my simple example where all qubits are flipped simultaneously one can simply say that every codeword and its negation are mapped onto the same word by the decoding procedure. That will lead to perfect protection against this particular noise type, no matter how high the error rate is.

        In general, a highly structured noise will mean that the volume of the set of words which an original codeword can be mapped into by the noise is smaller than for an uncorrelated noise, and typically a smaller volume means that it is easier to construct an error correcting code.

        My point is that in order to be able to say whether error correction will work or not it is not sufficient to say that a qubit is hit by the noise, one must also say in which way it is hit and how the type of error on that qubit is correlated with the type of error at the other qubits.

        This in itself does not rule out your conjectures, but I believe that more information than simple correlations is needed.

      • matt permalink
        February 17, 2012 5:48 pm

        Gil and Aram: if you have a model in which there is a large pairwise correlation between certain pairs of bits then we can, as you note, describe this by an interaction graph, where edges on the graph indicate which pair of bits are correlated. What happens then depends a little on what further assumptions you make. Let’s consider the simplest setting in which each bit is 0 or 1 with probability 1/2 (yes, I know that this is a much higher error rate, but it is easier to talk about). A natural guess then is that there is a Boltzmann-like weight for different configurations, and, as Aram notes, in two dimensions there will be a phase transition in which the errors synchronize once the correlations are large enough. But the point I want to make, since Gil likes to think about worst cases (though perhaps in this setting it is a “best case”….): it is possible to have all those local correlations in two dimensions without having global correlation. It is an easy exercise to make up probability distributions which have all those local correlations without global correlation (i.e., in which the correlation between two far-separated bits is small) on a two-dimensional lattice. However, on an expander graph this is not possible for sufficiently high correlations: once the pairwise correlations are big enough, local correlation implies global on an expander graph. (This comment has no bearing on actual quantum computers, just an interesting point of stat mech/comp sci).
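
        A one-dimensional toy version of that last observation (an illustrative sketch with made-up parameters): a stationary symmetric binary Markov chain can give any prescribed correlation between adjacent bits while the correlation between bits at distance d falls off like corr**d, so strong local pairwise correlation coexists with essentially no global correlation.

```python
import numpy as np

def markov_chain_bits(n=100, corr=0.4, n_samples=100_000, seed=0):
    """Stationary symmetric binary Markov chain on a line of n bits.

    Each bit is 0 or 1 with probability 1/2; adjacent bits have
    correlation `corr`; bits at distance d have correlation corr**d.
    All parameters are illustrative only.
    """
    rng = np.random.default_rng(seed)
    stay = (1 + corr) / 2                     # P(next bit equals current bit)
    x = np.empty((n_samples, n), dtype=np.int8)
    x[:, 0] = rng.integers(0, 2, n_samples)
    for i in range(1, n):
        flip = rng.random(n_samples) > stay
        x[:, i] = np.where(flip, 1 - x[:, i - 1], x[:, i - 1])
    return x

if __name__ == "__main__":
    x = markov_chain_bits()
    for d in (1, 2, 5, 20):
        c = np.corrcoef(x[:, 0], x[:, d])[0, 1]
        print(f"corr(x_0, x_{d}) ~ {c:+.3f}   (theory {0.4**d:+.3f})")
```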

  3. February 16, 2012 2:04 am

    Aram argues very eloquently that errors which piggy-back on the computation are unlikely. However it seems like such piggy-backing is essentially impossible in general if it must only affect entangled states and not separable states, due to the existence of blind quantum computing protocols. These allow information about the current state of the machine to be hidden from its operator, and known only to some remote party.

    A trivial example of this is where two parties share N pairs of qubits in the anti-symmetric state. These are only entangled states of 2 qubits, and our ability to produce such states is well documented. The first party (Alice) chooses a random bit R and measures all of their qubits in either the Z basis (if R=0) or the X basis (if R=1). The second party (Bob) then applies [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 i] between nearest neighbours.

    If R=0 then Alice’s measurements project Bob’s qubits onto eigenstates of the gate Bob applies and hence the result is a separable state. If R=1 however, Alice’s measurement projects Bob’s qubits onto X eigenstates, which lead to a highly entangled state (according to Gil’s definition).

    In this case although Bob is performing the entangling operations, it is impossible for them to tell whether they are producing an entangled state or not. Only Alice knows whether the result is entangled or not.

    It might seem like a loophole to suggest that Alice’s device malfunctions when R=1. However, since its operation should be independent of Bob’s device, it is possible to check this by, instead of performing an entangling operation, performing single qubit measurements on Bob’s qubits.

    While it is possible to come up with ways to have Alice and Bob’s devices collaborate which defeat this argument, if you make the rather plausible assumption that the action of the devices involved is independent of whatever is done to the remote qubits this can be ruled out.
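
    A small numpy sketch of this construction may help fix ideas (the qubit ordering and the choice of conditioning on one particular outcome of Alice's measurements are assumptions of the sketch, and the gate is the diag(1, 1, 1, i) gate above): Bob applies the same operation in both branches, yet his two qubits end up in a product state when Alice measured in Z and in an entangled state when she measured in X.

```python
import numpy as np

ZERO = np.array([1, 0], dtype=complex)
ONE = np.array([0, 1], dtype=complex)
PLUS = (ZERO + ONE) / np.sqrt(2)
SINGLET = (np.kron(ZERO, ONE) - np.kron(ONE, ZERO)) / np.sqrt(2)
CS = np.diag([1, 1, 1, 1j])          # the gate diag(1, 1, 1, i) that Bob applies

def bobs_state(alice_basis):
    """Bob's two-qubit state after Alice measures both of her halves.

    Qubit order is (A1, B1, A2, B2): two singlet pairs shared with Bob.
    We condition on the outcome |0> (Z basis) or |+> (X basis) for both
    of Alice's qubits; other outcomes behave analogously.
    """
    psi = np.kron(SINGLET, SINGLET).reshape(2, 2, 2, 2)
    b = {"Z": ZERO, "X": PLUS}[alice_basis]
    bob = np.einsum('a,abcd,c->bd', b.conj(), psi, b.conj()).reshape(4)
    bob = bob / np.linalg.norm(bob)
    return CS @ bob                  # Bob's gate acts on (B1, B2)

def entanglement_entropy(two_qubit_state):
    """Entropy (in bits) of one qubit's reduced state; zero iff product."""
    schmidt = np.linalg.svd(two_qubit_state.reshape(2, 2), compute_uv=False)
    p = schmidt**2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    for basis in ("Z", "X"):
        ent = entanglement_entropy(bobs_state(basis))
        print(f"Alice measures in {basis}: Bob's entanglement ~ {ent:.3f} bits")
```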

    • February 16, 2012 5:49 am

      Joe, I am not sure what you mean by piggy-back. In any case, the conjectures only say this: Every quantum computation comes with a noise (perhaps on top of other standard noises) with the following property: If your computation leads to a state where qubits i and j are entangled then the probability for i being faulty and for j being faulty will be positively correlated.

      I cannot guarantee you that this harmful noise will act independently on pairs of qubits which are not entangled. This depends on the computation process, the device, etc.

      Perhaps one thing we can discuss and perhaps agree about is that if we have a quantum computer without quantum fault tolerance then this property is likely to be satisfied.

      • February 16, 2012 6:08 am

        Hi Gil,

        Sorry, I was using “piggy-back” in the sense Aram used it in the “Does Computation Cause Correlated Errors?” section.

        Your response to Aram seemed to suggest that you believed that the design of your device and operations you apply necessarily determine the entanglement of the state you produce. I’m referring specifically to statements like this: “Imagine a quantum mechanical world where in order to prepare or emulate a quantum state you need to create a special machine. Each state requires a different kind of machine……This is the world we live in.”

        Such a belief would be incorrect, since we know ways to avoid there being enough information present in the (quantum) instructions the device receives to be able to determine virtually anything about the state it has produced.

        However, I could be misunderstanding your meaning. Perhaps you simply meant that if you try to build a universal quantum computer you will always get massive correlated errors independent of what you try to do with it (including classical computation for example).

      • Jon permalink
        February 16, 2012 2:22 pm

        “Perhaps one thing we can discuss and perhaps agree about is that if we have a quantum computer without quantum fault tolerance then this property is likely to be satisfied.”

        If you consider Joe’s example, then it does not satisfy this property even if there is no quantum fault tolerance.

    • February 19, 2012 4:53 pm

      Joe wrote: “A trivial example of this is where two parties share N pairs of qubits in the anti-symmetric state. These are only entangled states of 2 qubits, and our ability to produce such states is well documented. The first party (Alice) chooses a random bit R and measures all of their qubits in either the Z basis (if R=0) or the X basis (if R=1). The second party (Bob) then applies [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 i] between nearest neighbours.

      If R=0 then Alice’s measurements project Bob’s qubits onto eigenstates of the gate Bob applies and hence the result is a separable state. If R=1 however, Alice’s measurement projects Bob’s qubits onto X eigenstates, which lead to a highly entangled state (according to Gil’s definition).”

      Joe, I don’t understand the specific example. The ability to produce a pair of entangled states is well documented but it is also well known (and we always assume that) that such entangled states produced by quantum gates are noisy and the noise is not necessarily independent. (In fact, it is reasonable to assume that gated entangled states will be subject to measurement-noise where errors are positively correlated.) A similar remark may apply to the gates you propose to perform at the second stage (although I am not sure what they are, precisely.)

      The whole question is if we can use quantum error-correction to create pairs of entangled qubits subject to independent errors when we assume that the errors for gated qubits are dependent.

      There is the more general issue of whether “blind computation” can be used as an argument “against” my conjectures. Let’s return to it.

      • February 20, 2012 10:09 pm

        Hi Gil,

        I don’t quite follow your comment. Surely you accept that it is possible to produce maximally entangled pairs such that even though these states may be somewhat noisy, there are no correlated errors across pairs?

        My point was that if you can do this (and, really, we know that you can) then you can construct a non-local computation in such a way that no local system can tell whether the resultant state is entangled or a product state. In my example I only limited the information for the party performing the entangling gates, but it is pretty trivial to extend this to the second party. So you can construct protocols whereby it is impossible for the noise to adversely affect preferentially (or only) entangled states without violating linearity, since the noise is presumably a local process.

      • February 20, 2012 10:38 pm

        Hi Joe, OK, let me try to understand your example step by step. Suppose Alice always chooses R=1 and measures her qubits accordingly, without any random bit. Is this simplified version also a counterexample to my conjecture (and if not, how do we see it)?

      • February 20, 2012 10:41 pm

        No, because then the protocol dictates the final state of the system.

      • February 20, 2012 11:00 pm

        OK, specifically to this case, the strong version of conjecture 3 talks about emergent entanglement (which is what you get by measuring other qubits and looking at the results), so in your scenario the two qubits will always have large emergent entanglement and hence by the conjecture positively correlated errors.

        more generally: when I say “if A then B” it does not mean “if and only if A then B” and it also does not mean “always B”. It seems that you allow only two ways to understand my conjecture: either I say: “the errors are correlated precisely when the qubits are entangled” or I simply say “the errors are always correlated”. No, what I say is: the errors are correlated when the qubits are entangled.

      • February 20, 2012 11:23 pm

        Yes, I am clear on the difference between “if” and “if and only if”. However, there are a number of things which have led me to making my last few comments on this. One such example is the context in which these conjectures are made, which attempts to rule out quantum computation while allowing classical computation. The alternative interpretation seems to rule out more than quantum computation, since your conjecture would seem to suggest that even in the case where the computation results in classical states, they must contain these disastrous errors. This seemed a stronger statement about noise than I thought was your intention to make.

      • February 21, 2012 12:06 am

        Well, you are right, Joe, that a constant worry in formulating the conjectures is that they do not fall into the trap of applying to classical correlations. Of course, if the conjectures as stated fall into this trap then they should be sent back to the drawing board or eliminated altogether. When it comes to conjecture 1 that we discussed last time it looks to me that it does not have any consequence for classical error-correction. Conjecture 3 (even in its strong form regarding emergent entanglement) was also tailored not to apply to classical correlation. This was clearly my intention, but of course, maybe I missed something, and if so I will be happy to know it. So if this particular example, or another one, supports what you say that they “seem to suggest that even in the case where the computation results in classical states, they must contain these disastrous errors” please let me know.

  4. February 16, 2012 8:58 am

    Please let me say that I am very grateful for all of these wonderful posts. In (slow) preparation for a longer post next week, here is a concrete bit/qubit system that I am thinking about, that has given me considerable respect for the plausibility of Gil Kalai’s conjectures.

    As a warm-up exercise, let’s consider a dynamical system of nine classical bits, each subject to thermal noise. Consulting the Wikipedia page “Hamming Codes”, we observe that these nine classical bits support a (7,4) Hamming code, plus one parity bit, yielding a 4-bit SECDED memory, encoded in 8 classical bits (with one bit left over).
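
    For anyone who wants to play with this classical warm-up, here is a minimal Python sketch of a (7,4) Hamming code plus an overall parity bit (one standard choice of generator and parity-check matrices; a full SECDED decoder would also use the parity bit for double-error detection, which is omitted here):

```python
import numpy as np

# One standard systematic choice for the (7,4) Hamming code: G = [I | P],
# H = [P^T | I]; conventions differ across references.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(bits4):
    """4 data bits -> 7-bit Hamming codeword plus one overall parity bit
    (8 stored bits, leaving the ninth bit of the memory unused)."""
    word = np.mod(np.array(bits4) @ G, 2)
    return np.append(word, word.sum() % 2)

def decode(word8):
    """Correct any single bit flip among the first 7 stored bits."""
    word = word8[:7].copy()
    syndrome = tuple(np.mod(H @ word, 2))
    if any(syndrome):
        # The syndrome equals the column of H at the flipped position.
        pos = [tuple(H[:, j]) for j in range(7)].index(syndrome)
        word[pos] ^= 1
    return word[:4]

if __name__ == "__main__":
    data = [1, 0, 1, 1]
    stored = encode(data)
    stored[2] ^= 1                       # a single stored bit gets flipped
    print("recovered:", decode(stored), "original:", data)
```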

    An instructive next step is to read at least the introductory chapter of an engineering textbook about fault-tolerant computing: for example, Jangwoo Kim’s Two-Dimensional Memory System Protection (2008). The lesson-learned is the humility-promoting reality — which Nature already teaches with respect to biological error-protection dynamics — that classical large-scale memory devices, both engineered and evolved, make maximal use of error-correction principles; this solidifies our appreciation that classical error correction is routine, and that multiply-nested levels of classical error correction are common practice.

    Now let’s promote our nine classical bits to nine quantum qubits, and apply (per its Wikipedia page) “Shor’s algorithm” to protect one of those qubits. To simulate noise in a fully quantum manner, let’s introduce nine more qubits (making eighteen in all), and introduce small random dipole couplings pairwise among all eighteen qubits. Now we have two {9}-qubit systems, and each {9}-qubit system serves as a generator of noise to the other. We will call these two {9}-qubit systems {A} and {B}.

    Gil Kalai’s general conjecture (as I read it) then predicts the following: Shor’s algorithm can either protect {A}‘s qubit from {B}‘s noise, or else it can protect {B}‘s qubit from {A}‘s noise, but when Shor’s algorithm simultaneously protects both {A}‘s qubit and {B}‘s qubit from each other’s noise, the error rate is severely (even non-scalably?) increased.

    The physical intuition is that the Shor error-correction process itself, when applied to {A}‘s qubits, alters the dynamics of the noise generated by the {A} qubits, in such a fashion that the now-correlated {A}-qubit noise becomes fatal to the {B}-qubit error correction process. And vice-versa.

    More broadly, within this coupled-qubit model of quantum noise, we can postulate that Shor-type error correction works for individual qubits, but does not scale to large ensembles of qubits, for the concrete reason that error-correction in any one portion of a QC memory, generically acts to propagate adverse forms of correlated noise to the remaining qubits, thus disrupting their ongoing error-correction.

    The primary intent of this post is to convey an appreciation that the Kalai conjectures are more than philosophical speculations: these conjectures can be associated to specific predictions regarding quantum computing that are physically plausible and practicably testable, both immediately in terms of numerical simulation, and possibly even in small systems of physical qubits.

    New-fangled methods for “geometrizing” the preceding dynamical intuitions, with a view toward proving rigorous theorems, via the powerful proof technologies that algebraic geometry provides, are a present interest of mine. The above multi-qubit example was conceived within this algebraic/geometric context, and in the next week or two I hope to post here on GLL and/or MOF about the capabilities of these algebraic/geometric proof technologies.

    In the meantime, we can all be appreciative of and thankful to Gil and Aram (and everyone else posting here on GLL) for this wonderfully thought-provoking QM/QC/QIT debate. Surely there are many more good posts to come. Please sirs, we’d like some more! 🙂

    • February 16, 2012 11:04 am

      To summarize the above toy-example aphoristically:

      A Kalai-type Conjecture: The transformation of classical noise that is induced by classical error-correction processes is generically innocuous, but the transformation of quantum noise that is induced by quantum error-correction processes is generically adverse.

    • February 16, 2012 1:50 pm

      As the first of two final remarks, the intended Wikipedia reference was not to the “Shor algorithm”, but rather to the “Shor Code” (both of which of course are truly seminal innovations).

      The second remark is a suggestion: anyone (maybe a student?) who contemplates testing Kalai-type conjectures in numerical experiments, would be well-advised to consult both John Preskill’s article “Fault Tolerant Quantum Computation” (arXiv:quant-ph/9712048) and (for reasons of numerical efficiency) consider implementing the (shorter) 7-qubit code of Steane, and/or the (shortest) 5-qubit code of eq. 10.104 of Nielsen and Chuang’s textbook Quantum Computation and Quantum Information. In particular, the existence of 5-qubit error-correcting codes would seem to render Kalai-type correlated-noise conjectures reasonably amenable to numerical experiment on an ordinary laptop computer.

      • February 21, 2012 12:57 am

        As a further update, working through the details of the above computational program, with a variety of more-or-less realistic noise models, teaches great respect for the generality and scope of the theorem proved in John Preskill’s earlier GLL comment and note titled “Sufficient condition on noise correlations for scalable quantum computing”.

        Moreover Preskill’s theorem dovetails nicely within the student-friendly multimedia content that he has provided on the UCSB-hosted web page “Pedagogical Lecture: Fault Tolerant Quantum Computation” (which a Google search will find).

        A natural question is: Do we live in a world in which the quantum assumptions of Preskill’s theorem do not apply, even in idealized cases?

        Assuming that we do live in such a world, it may be a world in which either: (1) essential aspects of the dynamics of thermal reservoirs and measurement devices are nonperturbative (because Preskill’s sufficient conditions assume a convergent perturbative dynamical expansion), and/or (2) the state-space geometry of quantum dynamical systems is locally Hilbert, but globally has some other geometry (because Preskill’s expansions assume a global Hilbert geometry for the expansion).

        Alternative (1) leads straight to the well-known complexities that are associated to renormalization, quantum field theory, and cavity QED, whereas Alternative (2) leads straight to the well-known complexities that are associated to spatial localization, geometric quantum mechanics, and general relativity.

        Both alternatives thus lead to tough classes of open quantum questions, that refreshingly are seen from new perspectives. That’s why this is a great topic!

  5. February 17, 2012 3:43 pm

    Dear all, let me just mention a few technical and conceptual issues that were raised here:

    1) Joe raised the possibility of applying “blind” quantum computation, which allows information on the state of the computer to be hidden from its operator. So according to Joe, this possibility is in direct conflict with my conjectures and the possibility to have a relation between the state of the computer and the noise.

    2) Jon asserted that Joe’s example does not even require quantum fault tolerance.

    3) Aram raised the question of whether pairwise correlation alone can lead to error synchronization. (I think we are now in agreement that the answer is yes, but it will be useful to understand the connection quantitatively.)

    4) Aram pointed out that we can assume that the “graph of interaction” is sparse. Perhaps it will be useful to elaborate on that, since I am not sure I understood it.

    5) There is also some debate regarding a possible loophole Joe suggested for Conjectures 3 and 4 based on moving to another basis.

    6) Of course, the main issue raised by Aram is to what extent it is reasonable to expect highly correlated noise.

    7) Yet another question that Aram raised is whether there is tension between my conjectures and the linearity of quantum mechanics.

    8) I raised in my reply the following point: We can think about a quantum computer as a universal machine for creating complicated quantum evolutions and quantum states. The situation today (before quantum computers) is that we need special purpose machines for each quantum state that we want to create; much of the intuition of people regarding the hypothetical behavior of quantum computers, and relevant models, implicitly assumes such a general purpose machine.

  6. February 18, 2012 12:21 pm

    Worthwhile discussion on how D-Wave’s machine actually works. The more technical discussion starts at 29:50. When they say in the discussion that it is literally a physical analogue to a math problem they aren’t kidding. The circuit board really is built to simulate a matrix. Is it a quantum computer? I think that depends on what the technical definition of quantum is. Does it solve IP problems quickly? It looks like it can.

    It looks like what they are doing is building a magnetic field that can be tuned, and then adding some energy and seeing how it distributes across the chip. The chip then approaches some final state that represents an optimal distribution. Because all the elements are coupled, quadratic effects are accounted for.

    It is literally a physical analogue of illustration I.1.1 on page four of A.Zee’s book “Quantum Field Theory In A Nutshell”.

    http://webcasterms1.isi.edu/mediasite/SilverlightPlayer/Default.aspx?peid=a0ada8c3b0f9448f8692a8046534f99c1d

  7. Gil Kalai permalink
    February 18, 2012 6:44 pm

    Hi everyone, Let me add to the list as point number 9) Klas’s interesting comment that if the noise is very structured, errors can be corrected even in cases of error-synchronization. This is related to a point raised by Chris in the first post about the heuristic/unclear nature of the term “noise” to start with. Great points.
    Matt wrote “This comment has no bearing on actual quantum computers, just an interesting point of stat mech/comp sci.” Matt, it goes without saying that from my point of view such comments are most welcome.

    • Gil Kalai permalink
      February 29, 2012 4:52 pm

      Perhaps one more question is this: 10) Is a nature which satisfies my conjectures regarding correlated errors, the relation between signal and noise, noisy codes, etc., really malicious? I realize that such a behavior would be considered malicious to the idea of efficient factoring, but nature has other things to worry about as well.

      • February 29, 2012 6:30 pm

        I’d say that correlated noise would be shocking not because we have a right to fast factoring algorithms, but because it would cast doubt on the scientific process and the idea of reductionism. We believe we can understand the properties of electrons by studying one or two of them, and that the same principles apply when we have 10 or 10^23. In principle, this doesn’t have to hold, but if it didn’t then scientific induction would be a less useful tool.

      • Gil Kalai permalink
        March 1, 2012 8:45 am

        Aram, I don’t see the relation with reductionism; the phenomenon I considered – that two entangled qubits have positive correlation of errors (or specifically of information-leaks) – is most probably true for gated qubits, and for quantum computers with few qubits. This phenomenon just talks about two qubits. (I think we are, more or less, in agreement that high positive pairwise correlation suffices to imply error-synchronization.) So I don’t see how scientific induction is relevant here. It is true that if quantum fault tolerance is possible we can create two entangled qubits with almost independent errors, but if quantum fault tolerance is not possible, or until it is possible, we cannot expect such a pair.

      • March 1, 2012 9:20 am

        I wouldn’t say we can create two entangled qubits with independent errors; the interaction itself could always be faulty (and this is a standard assumption of FTQC).
        But I would say that whatever errors there are on those interactions, we should be able to repeat the interaction many times in parallel, with little-to-no correlation among these parallel entanglement-creation experiments. If this were false, scientific induction would be thrown into doubt.

        Further, I think we should be able to repeat the same faulty interaction on the same entangled qubits that have been moved around and matched up in different ways. And here I believe that generically the new errors should have little-to-no correlation with the past errors, based on the principle that particles generally don’t remember where they’ve been and what has happened to them in the past. This principle definitely doesn’t apply to the stock market, but it’s been (somewhat) experimentally demonstrated for things like photons and buckyballs via things like the two-slit experiment, in which creating an external record of a particle’s path changes observable quantities.

      • Gil Kalai permalink
        March 1, 2012 11:06 am

        Dear Aram, all your beliefs are reasonable and this is what we try to discuss. I just referred to the reductionism claim. But I realize that Point 10 was unclear so let’s forget it.

      • March 1, 2012 12:29 pm

        It seems to me that Aram’s physical assumptions are reasonable, and in particular that all of the great FTQC theorems are rigorously applicable, in an ideally reductionist world in which no two qubits are in thermalizing interaction with a shared vacuum (that is, a shared zero-temperature bath).

        So have Aram’s arguments prevailed? This is less clear (to me).

        We all appreciate that, to a degree that has been challenging to quantum experimentalists, Nature seemingly requires of qubits that (1) they generically couple to vacua (of one sort or another), and that (2) qubit vacua cannot readily be reductionally isolated from one another to the degree required for FTQC.

        In short, the reductionist program of independent qubit noise associated to independent qubit vacua has been surprisingly hard to achieve.

        So is qubit vacuum design solely a practical problem, to be overcome by creative quantum systems engineering, or does it point also to fundamental gaps in our appreciation of quantum vacua? It seems hardly likely that the Harrow/Kalai debate can reach any robust conclusion, until these vacuum-related issues are clarified, by arguments that are more nearly rigorous than any that have yet been presented.

  8. February 19, 2012 5:16 pm

    While there are various important technical and conceptual issues that were raised by Aram’s post and by the following comments, I would like to have a better understanding of Aram’s overall logic:

    First Aram describes my conjecture:

    “Gil suggests that this entanglement may be self-limiting, by increasing the rate at which correlated errors occur.”

    Aram’s reply is: “The key reason I regard this as unlikely is that quantum mechanics is linear,”

    Later Aram described shadow qubits and says

    “Thus, even though there is no direct way for errors to depend on whether our states are entangled or not, errors could depend on shadow qubits, whose entanglement/correlation could be produced at the same rate as entanglement/correlation in our quantum computer.”

    (And then he explains why he thinks that we can deal with shadow qubits.)

    The way I see it is that Aram’s interesting shadow qubit story is precisely an example of my claim of how a systematic relation between the state created by the computation and the noise can occur. Doesn’t this example suffice to put to rest the claim that such a relation is necessarily in conflict with the linearity of QM?

    Also, Aram, are you claiming that the shadow qubit story is the only way that may lead to a systematic relation between the state and the noise?

    • February 19, 2012 8:17 pm

      “Aram, are you claiming that the shadow qubit story is the only way, that may lead to a systematic relation between the state and the noise?”

      yes.
      I think the original conjecture needs to be modified in this sort of way to make it possible. Linear QM implies that there is nothing intrinsic to entanglement that can enhance/change noise, so any such effects must be extrinsic, i.e. must result from other systems, which could be called shadow qubits.

      The problem is that once you make this modification, or make the conjecture more concrete in any way, it becomes less plausible. Because the types of shadow qubits we see in actual implementations are totally different in NMR than in ion traps than in optical experiments. So it’s hard to imagine what a unified law of correlated decoherence could look like.

  9. February 19, 2012 8:10 pm

    Here’s a thought on a route to keep a quantum state in coherence indefinitely. Here you would use quantum teleportation. So in the standard scheme, one has an unknown quantum state (A) and wants to transfer that state to another location. So one creates an entangled pair and one member of the pair interacts with A. A is then measured and destroyed and the information is transmitted over a classical channel to B. B interacts with the second entangled particle and the classically transmitted information and takes on the unknown state of the original A.

    Now B by itself doesn’t want to sit around too long because of decoherence, so what does one do? Well, one can teleport it back to its original location (which we will now call C). As long as one can reasonably protect the entangled particles en route, classical fault-protection protocols can be used to protect the information in the classical channel. The quantum information is “hidden” from the environment during the transfer process.

    One can think of this along the lines of creating an oscillating quantum state that moves back and forth from point to point.

    This might protect the quantum state, but the question is whether it can be used for any meaningful calculation.

    • February 19, 2012 8:25 pm

      This sort of technique is a part of many fault-tolerant schemes, but isn’t enough by itself. If a qubit starts to decohere, then teleporting it won’t slow down that decoherence at all. In fact, it’ll even result in the qubit being affected by any noise that had affected the entangled pairs used for the teleportation.
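
      Schematically, and only as a sketch under a simplifying assumption: if the shared pair is a Werner state with singlet fidelity {F} and the standard teleportation corrections are applied, the teleported qubit comes out as {\mathcal{D}_F(\mathcal{N}_{\mathrm{dec}}(\rho))}, where {\mathcal{N}_{\mathrm{dec}}} is whatever decoherence the qubit has already suffered and {\mathcal{D}_F(\sigma) = \tfrac{4F-1}{3}\,\sigma + \tfrac{4(1-F)}{3}\,\tfrac{I}{2}} is a depolarizing channel. Teleportation composes the noise of the resource pair with the existing noise; it does not undo either.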

      • February 19, 2012 10:35 pm

        Hmm… I am thinking that one has to look at this in terms of probabilities. Conjecture 3 says that errors involving entangled qubits will be correlated. Assuming that’s the case, one has to ask what the source of the errors is. We say it’s the environment. OK, so is it reasonable to think we can say something about the statistical distribution of the measurements performed by the environment? If we can, then can we account for those errors over the course of several runs of the computer program?

        So for instance, using the teleportation scheme outlined above, we know that there is some amount of decoherence that the system will undergo within a period of time, and we know that there is some amount of noise that might affect the entangled pairs. So I run the program and get some result that I know is potentially wrong. Then I run the program again and get another answer, etc. (like running multiple stochastic simulations). Shouldn’t the distribution of results take on some of the characteristics of the noise in the environment? Suppose the noise is Gaussian in nature; shouldn’t my results be Gaussian as well?

        This might not be the ideal setup for some applications, but since the randomness is purer than what one finds in a classical machine, it might be useful in certain applications.

      • February 21, 2012 10:19 am

        Hi Hal, one of the interesting features of quantum noise, and in particular of the noise anticipated by my conjectures, is that it does not “average away” under repetition. (This is indeed different from various cases of classical noise, where highly structured noise will be averaged away by repetition; this is also related to Klas’s question.) So if you look at Conjecture 1, which anticipates a cloud of codewords around the intended codeword: when you average you will just enlarge that cloud. Conjecture 3 talks about positive correlation of information leaks for two entangled qubits; again, this property will remain under averaging.

        There is another aspect of repeating the same experiment many times, namely post-selection, which is actually used in various quantum fault-tolerance schemes. I expect that if you achieve low error rates by massive post-selection then you will typically get noise respecting Conjecture 3. Even if you eliminate the pairwise correlations by even more massive post-selection, I expect that you will still get error synchronization because of higher-order correlations.

      • aram permalink
        February 21, 2012 11:35 am

        Gil, can you explain this a little more?

        “One of the interesting features of quantum noise, and in particular of the noise anticipated by my conjectures, is that it does not “average away” under repetition. (This is indeed different from various cases of classical noise, where highly structured noise will be averaged away by repetition;”

        It sounds like you’re talking about properties of error-correcting codes rather than noise; is that right?

        Also, there are quantum codes that are fairly repetition-like, like the Shor code (which is basically like two 1 to sqrt(n) repetition codes concatenated with each other), and the toric/surface code (which is obtained by tiling a local pattern over a 2-d region).
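
        For concreteness, the 9-qubit Shor code encodes {|0_L\rangle = 2^{-3/2}(|000\rangle + |111\rangle)^{\otimes 3}} and {|1_L\rangle = 2^{-3/2}(|000\rangle - |111\rangle)^{\otimes 3}}: an inner 3-qubit repetition code protects against bit flips, and an outer repetition code across the three blocks protects against phase flips, which is the sense in which it is two 1-to-sqrt(n) repetition codes (here n = 9) concatenated.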

  10. February 19, 2012 10:35 pm

    I am really trying hard to understand why Aram regards the shadow-qubit description as the only way that may lead to correlated errors of the kind I am describing.

    Maybe it, and the issue of linearity raised here, are related to the following point of John Preskill’s from the first post. I do hope to get to John’s full comment and manuscript a bit later, but let me just respond to this part:

    JP: “Gil says that while he is skeptical of quantum computing he is not skeptical of quantum mechanics. Therefore, I presume he takes for granted that a quantum computer (the “system”) and its environment (the “bath”) can be described by some Hamiltonian, and that the joint evolution of the system and bath is determined by solving the time-dependent Schroedinger equation.”

    Yes, of course!!

    (continued) “If Gil’s conjectures are true, then, no matter how we engineer the system, the Hamiltonian and/or the initial quantum state of the joint system must impose noise correlations that overwhelm fault-tolerant quantum protocols as the system scales up.”

    Right. As a matter of fact, I even present a class of time-dependent evolutions (a certain subclass of all time-dependent evolutions) which are meant to describe the kind of noise I am talking about.

    Maybe thinking about evolutions involving the computer’s qubits and the environment is what is behind Aram’s comment that the “shadow qubits” scenario is the only scenario that may support my conjectures. But this is unclear to me. (On my side, while I am committed to the relevant mathematics, including, of course, the linearity of QM, I don’t always subscribe to all the intuitions and mental images that may come with it. Those can be misleading. In particular, the issue “what is the environment?” for controlled quantum evolutions is interesting.)

    • aram permalink
      February 21, 2012 12:02 pm

      Yes, my “shadow qubits” are part of what John calls the bath, or environment.

      You said ” I even present a class of time-dependent evolutions (a certain subclass of all time-dependent evolutions) “.

      Are you talking about the one where qubits fail in a way that has large pairwise correlation? Because what you’ve described isn’t a full description of the noise, but merely some properties of it: namely, its one- and two-qubit marginals. I think that once you work it out, this model either has small enough errors to be qualitatively like uncorrelated noise, or is equivalent to the “trip over the power cord” model. In either case, I don’t see how it would separate the feasibility of classical and quantum computing.

      There is one tricky part of my argument, which emerges even when you have i.i.d. noise, let’s say at rate p. (I’m not sure about exact numbers in what follows, and they depend in any case on the exact model.)

      If p < 1/100 then we can prove that FTQC is possible.
      If p > 2/3 then we can prove that FTQC is impossible.
      If p > 3/4 then we can prove that FT classical computing is impossible.

      This leaves open the (likely) possibility that for some values of p we can do FT classical computing but not FTQC. Even more intriguingly, we might have complexity classes intermediate between BPP and BQP.

      But it’s pretty far-fetched to imagine that noise *always* must be in this range. After all, people have pushed noise far below the threshold in systems where the only thing stopping quantum computing is the difficulty of putting many qubits in the same place, such as liquid NMR, where qubits are nuclei and getting more qubits means bigger molecules, or superconducting circuits, where running more control lines into the circuit introduces more noise.

      • February 24, 2012 7:29 am

        “Are you talking about the one where qubits fail in a way that has large pairwise correlation? Because what you’ve described isn’t a full description of the noise, but merely some properties of it.”

        No, my paper contains a description of a subclass of all time-dependent noisy evolutions, which I refer to as “smoothed Lindblad evolutions” and which are proposed as a model of noisy, non-fault-tolerant QC.

  11. February 20, 2012 11:58 am

    One more interesting point from Chris Moore’s first remark: “Building a quantum computer will certainly require huge advances in our ability to control quantum states, and shield them from unwanted interactions. But we have built exotic states of matter before (e.g. semiconductors) and I see no fundamental reason why we can’t make these advances.”

    The purpose of my conjectures is to describe fundamental differences between what we can do without quantum fault tolerance and what we can do with quantum fault tolerance. The conjectures are not aimed at giving a “proof” that quantum computers are impossible. I think that we are in some sort of agreement that there are fundamental differences between the type of states we can achieve now and the type of states that we will be able to achieve with universal quantum computers (though giving a technical formulation of this difference is a tricky business), and, as we discussed last time, between classical fault tolerance and quantum fault tolerance. So trying to study such fundamental differences is important and interesting.

    Actually, the first version of the post that I sent to Aram did not include the paragraph about my own beliefs. I only mentioned the conjectures and their weak, medium, and strong interpretations. Aram asked me to say which interpretation I believe in and why, which I gladly did. Indeed, I tend to believe that universal quantum computers are impossible, and I find that discussing our beliefs and motivations adds value to the debate. However, seeking fundamental differences between a reality without quantum fault tolerance and a reality with quantum fault tolerance is of independent value.

  12. February 20, 2012 2:07 pm

    {\bf Pairwise positive correlation.}
    It looks like we are in agreement regarding the qualitative aspects of pairwise positive correlation of errors, but the quantitative aspects have not yet been discussed.

    Regarding our sub-discussion on pairwise correlation, Aram wrote: “Well, certainly if you make the correlations large enough then you eventually get the ‘trip over the power cord’ model, in which all bits fail simultaneously with a non-negligible probability.

    Maybe the best way to make this concrete is with the Curie-Weiss model, or something else that produces distributions of the form P(x_1, .., x_n) = e^{-f(x)} for f a low-degree polynomial of the bits. These definitely exhibit phase transitions.

    But for sufficiently low constant values of the coefficients of these polynomials, things would still be ok for a fault-tolerant quantum computer.”

    We are left with the quantitative issue of what amount of pairwise positive correlation will be harmful to FTQC. My thinking about it is as follows: Suppose that for a certain architecture the threshold is 0.001, and suppose that the error rate for each of the qubits is 0.0001, so it is comfortably below the threshold. Suppose that the number n of qubits is large. It looks to me (by rather naive calculations; I did not apply the Curie-Weiss model) that if the pairwise correlation between every pair of qubits is 0.01, then this will already be damaging for FTQC. In fact, it looks like the required “threshold” for the pairwise correlation is similar to the threshold for the error rate itself. With pairwise correlation of 0.01 you do not get precisely the “trip over the power cord” effect, but you do get a substantial probability that at least 1/100 of the qubits will be faulty.

    Here I assumed that the error rate will remain 0.0001 and discussed only the error-synchronization effect. As I already mentioned, the most devastating aspect of correlated errors is that the error rate itself (in terms of qubit errors) scales up.
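
    As a rough numerical sanity check of this back-of-the-envelope reasoning, here is a minimal sketch in Python of one toy model that realizes these numbers (a “global event” model used only for illustration; it is not the Curie-Weiss model and not the model from the papers under discussion): with probability 0.01 every qubit independently fails with probability 0.01, and otherwise no qubit fails. The marginal error rate is then 0.0001 and the pairwise correlation is close to 0.01, yet roughly one percent of the qubits fail together whenever the global event occurs.

        import numpy as np

        rng = np.random.default_rng(0)
        n, trials = 10_000, 200_000   # number of qubits, number of noise realizations

        # Toy "global event" model: with probability alpha every qubit fails
        # independently with probability p1; otherwise no qubit fails.
        alpha, p1 = 0.01, 0.01        # marginal rate = alpha * p1 = 1e-4
        bad = rng.random(trials) < alpha
        frac_failed = np.where(bad, rng.binomial(n, p1, size=trials) / n, 0.0)

        p = alpha * p1                      # marginal error rate per qubit
        cov = alpha * p1**2 - p**2          # Cov(X_i, X_j) for i != j
        corr = cov / (p * (1 - p))          # pairwise correlation, about 0.0099
        print("marginal rate:", p)
        print("pairwise correlation:", round(corr, 4))
        print("P(at least 0.5% of qubits faulty):", (frac_failed >= 0.005).mean())

    Whether such synchronized bursts actually defeat a given fault-tolerance scheme depends on further details of the noise, as the exchange below makes clear; the sketch is only meant to illustrate the numbers.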

    • aram permalink
      February 21, 2012 12:08 pm

      Gil writes:
      “We are left with the quantitative issue of what amount of pairwise positive correlation will be harmful to FTQC. My thinking about it is as follows: Suppose that for a certain architecture the threshold is 0.001, and suppose that the error rate for each of the qubits is 0.0001, so it is comfortably below the threshold. Suppose that the number n of qubits is large. It looks to me (by rather naive calculations; I did not apply the Curie-Weiss model) that if the pairwise correlation between every pair of qubits is 0.01, then this will already be damaging for FTQC. In fact, it looks like the required ‘threshold’ for the pairwise correlation is similar to the threshold for the error rate itself. With pairwise correlation of 0.01 you do not get precisely the ‘trip over the power cord’ effect, but you do get a substantial probability that at least 1/100 of the qubits will be faulty.”

      Such noise is fine for FTQC. Well, 1/100 is right at the edge, but if you said “In every time step a random set of n/200 qubits are randomized” then codes like the surface code (see http://arxiv.org/abs/0803.0272 ) would have no problem. (And this is worse than what you are saying, which is more like “In every 100 time steps, n/100 qubits are randomized.”)

      More generally, if your pairwise correlation model meant that a random set of n/200 qubits were randomized in each time step (or one in every 100 time steps), then this could be treated the same as i.i.d. noise at the rate of 1/200. Yes, perhaps this is a higher rate than the single-qubit error rate, but that just says that the single-qubit error rate is not the relevant parameter.

      • February 21, 2012 2:39 pm

        Aram,

        I am just referring to a small fragment of the discussion regarding the effect of pairwise positive correlation. (This was point 3 on my list, and it referred to your claim that 2-qubit correlations and even O(1)-qubit correlations are not enough to cause FTQC to fail.)

        My intuition is that if you do not assume independence then you have to impose on the pairwise correlation a threshold similar to the threshold for the error rate itself. So even if the rate is 1/10000, a pairwise correlation of 0.01 will be roughly as damaging as a 1/100 error rate, and a pairwise correlation of 0.1 will be roughly as damaging as a 1/10 error rate. *In a case* where the required threshold is 1/1000, even a pairwise correlation of 0.01 between every pair of qubits will be damaging.

      • February 21, 2012 4:04 pm

        Gil,
        In the example I gave there is complete correlation between the errors, not just 0.01: either no qubits fail or all of them fail, and the errors are still easy to correct. In that example an error rate of 0.5 for the individual qubits is perfectly fine together with complete error correlations.

        This is why I claimed that more information than just the size of the correlations and the error rates must be given in order to find something truly problematic.

  13. February 21, 2012 11:11 am

    Again on the general theme of renormalization effects, the (excellent) recent article by Hui Khoon Ng and John Preskill “Fault-tolerant quantum computation versus Gaussian noise” (arXiv:0810.4953) suggests (in effect) a concrete path toward proving a theorem of Kalai-Alicki type, in the following passages:

    Each qubit has a time-independent coupling to its own independent oscillator bath (though admittedly these are dubious assumptions when multi-qubit gates are executed) […] It might be interesting to see if further technical assumptions (which one would hope to justify a posteriori) about the system-bath state {|\boldsymbol{\psi}_{\text{SB}}(t)\rangle} during the course of the computation would lead to a less ultraviolet-sensitive threshold condition, but we have not yet succeeded in finding useful results with this character.

    Here we might hope to construct a Kalai-Alicki theorem along the lines of

    Conjecture (Shared Ohmic baths obstruct fault-tolerant quantum computations) For multiple qubits simultaneously coupled to a single zero-temperature Ohmic bath of (possibly non)-independent oscillators, fault-tolerant quantum computation methods generically fail in consequence of long-range qubit couplings that are induced by the bath.

    Key to this line of inquiry would be the choice of a technical means for regulating the ultraviolet divergences associated to Ohmic couplings. In this regard a simple regulatory method that has been effective in our own practical calculations (of spin-entangled nanoscale mechanical oscillators whose dynamics is “dressed” by Casimir interactions) has been to expand “bare” quantum operators as a power series in “dressed” quantum operators (rather than the reverse) and then pass to the Ohmic limit. Conceivably these reverse-renormalization techniques might usefully be extended to the cavity QED regime in which vacuum fluctuations are Ohmic to an excellent approximation.

    Here the broad mathematical intuition is that, generically in quantum computing, obstructions and solutions alike are associated to non-perturbative dynamics, and this suggests the concrete physical intuition that Ohmic (nonperturbative) quantum dynamics — as present in thermal baths, sensors, and photon sources — may be associated to a class of quantum errors that is particularly challenging to correct.

    • aram permalink
      February 21, 2012 11:42 am

      John, is there a reference that describes where non-perturbative dynamics apply in a quantum-computing-related setting?

      I should also have mentioned that my post was in some ways too much in a defensive crouch. Often correlated errors are *easier* to deal with. Techniques like spin echo, composite pulses and decoherence-free subspaces all exploit knowledge about correlations in noise to correct errors much better than we could hope to do with small QECCs. So it’s a giant leap to go from “errors exhibit long-range correlation” to “these make FTQC impossible.” Do you have a model of long-range-correlated errors that looks like it would prevent FTQC?

    • February 22, 2012 10:05 am

      Just to be clear, our result applies in a more general setting than the case where each qubit has a time-independent coupling to its own independent oscillator bath, though we considered that special case as an illustration of the more general result. It is okay for the qubits to share a common bath. What we require is that each qubit couples to a bath variable obeying Gaussian statistics, so that the higher-point correlation functions of the bath are all determined by the two-point correlation function.

      Our condition for scalability is that the modulus of this two-point function, when we sum one of the points over all system qubits and integrate that point over all times during the execution of the quantum circuit, is sufficiently small. If the bath two-point correlation function decays too slowly in space or in time this sum/integral may not converge, so in this setting one sees explicitly how our argument breaks down if the noise is too strongly correlated.
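
      In rough schematic form (a paraphrase in ad hoc notation, not the exact expression of arXiv:0810.4953): writing {\Delta_{aa'}(t,t')} for the bath two-point correlation function connecting qubits {a} and {a'} at times {t} and {t'}, the requirement is roughly that {\max_{a,t} \sum_{a'} \int_0^{T} dt'\, |\Delta_{aa'}(t,t')|} be sufficiently small, where {T} is the duration of the circuit. If {\Delta} decays too slowly in space or time this quantity diverges, and the argument breaks down.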

      The argument is rigorous in the context of a Gaussian noise model, but as John Sidles remarks, ultraviolet divergences can be an issue; for Ohmic noise in particular the scalability criterion depends on the ultraviolet cutoff. I don’t see a direct connection, though, between that ultraviolet problem and the long-range correlations referenced in John’s conjecture formulated above.

      • February 22, 2012 10:56 am

        John, please let me express my appreciation and thanks (which many GLL readers share) for the wonderful constellation of articles that you and your colleagues have produced regarding quantum computing.

        For me, one illumination in particular came when reading “Fault-tolerant quantum computation with long-range correlated noise” (quant-ph/0510231) in appreciating that the noise model, in accommodating pairwise mutual qubit interactions with a bath, included as a special case the general pairwise mutual interaction of qubits with a trivial (identity) bath interaction (so that the bath is a passive spectator).

        Then the analysis that you posted here on GLL, titled “Sufficient condition on noise correlations for scalable quantum computing,” extended this model to multiple interacting qubits, such that the resulting threshold theorem formally encompasses the multiple/mutual qubit interactions that concern Robert Alicki and Gil Kalai, insofar as those multiple/mutual qubit interactions are sufficiently weak, localized, and perturbative.

        It’s not easy to step outside the broad scope of these beautiful and powerful theorems, and simultaneously retain any realistic expectation of obtaining concrete results. The point of my post above was that the Ohmic noise model of Ford, Lewis and O’Connell possibly offers one such avenue, via a strategy of investigating within their tractable Langevin noise model the nature of the quantum entanglement that is associated to the simultaneous interaction of two (or more) qubits with a shared Ohmic bath.

        Would this approach yield any useful insights? Don’t ask me! Yet each of the above articles is worth reading in its own right, and perhaps some GLL reader will perceive (more clearly than me) the possibilities for uniting their physical insights and proof technologies.

  14. February 21, 2012 12:41 pm

    Aram, there is considerable literature on the “independent oscillator” bath model that Ng and Preskill use, and our QSE group is particularly fond of an old-but-good review by Ford, Lewis, and O’Connell, “Quantum Langevin equation” (Phys Rev A, 1988), which analyzes issues associated with ultraviolet divergences in depth. Our own reverse-renormalization application of this formalism is in “The Classical and Quantum Theory of Thermal Magnetic Noise: with Applications in Spintronics and Quantum Microscopy” (Proc IEEE, 2003; see Appendix III.C, “Oscillator renormalization”). The resulting theory works well (for example) in statistically characterizing the quantum dynamics of the continuous experimental observation of one qubit by an Ohmic-damped mechanical oscillator.

    As for the (tougher) coupling and observing of multiple qubits, that is a topic that our UW seminar Natural Elements Of Separatory Transport (NEST) will cover starting April 17. Although the NEST Seminar will emphasize polarization transport dynamics rather than computational dynamics, the seminar’s point-of-view is that the former may usefully be regarded as the large-N high-temperature limit of the latter, such that the main math-and-physics intuitions and techniques are the same. All are welcome, needless to say! 🙂

  15. February 21, 2012 6:03 pm

    Joe’s point regarding blind computation.

    Let me discuss in some detail Joe’s important point regarding blind computation. Joe’s remark started with a quotation from my response to Aram followed by a refutation:

    ” (Quoting me) ‘Imagine a quantum mechanical world where in order to prepare or emulate a quantum state you need to create a special machine. Each state requires a different kind of machine……This is the world we live in.’

    (Joe’s refutation) Such a belief would be incorrect, since we know ways to avoid there being enough information present in the (quantum) instructions the device receives to be able to determine virtually anything about the state it has produced.”

    Joe’s reference to blind computation has bearing on other aspects of my argument. An important aspect of all my conjectures is the assertion that there can be (and there is) a systematic relation between the state we create and the noise. Would such “blind computation” contradict my claim? As a matter of fact, isn’t it the case that such blind computation leads to quantum computing without entanglement? (We also had a specific discussion of whether a certain version of this blind-computation argument leads to a loophole in the formulation of my Conjecture 3.)

    First, let me repeat my remark that in today’s world or, if you wish, in the pre-quantum-computers era, we need carefully designed special-purpose machines to create different types of quantum evolutions and quantum states. It is not clear if the correct way to think about universal quantum computers is as analogs of our digital computers, or, perhaps, as universal machines for creating quantum states, analogous to a very sophisticated model of a James Bond car that can serve many different functions simultaneously.

    When Joe says “we know ways to avoid…” he is correct, but these ways assume that we have a quantum computer at our disposal to start with.

    Let me make two comments. My first comment is that it does not seem reasonable that the theoretical possibility of making “blind computation” is really relevant to the reality of emulating quantum states that I was referring to.

    Here is an example: In the previous thread Aram challenged me to present a physical situation where my conjectures hold. I proposed the prediction given by Conjecture 1 for Bose-Einstein states created in the laboratory. Matt asserted that while my prediction is correct, the phenomenon, described already by Anderson, has nothing to do with my Conjecture 1 and everything to do with the fact that Bose-Einstein states, considered as a code, form a very bad code. Very well. We agreed that Conjecture 1 can be tested when certain abelian and nonabelian anyons are created, something anticipated in the next couple of years. If I understood Matt correctly, for these states he does not expect the prediction given by Conjecture 1 to be correct.

    Of course, even if Conjecture 1’s prediction holds for one such experiment it may still fail for another, so it may take a while to be convinced. In any case, the fact that the experimental procedures that Matt described do not apply quantum fault tolerance makes Conjecture 1 especially appealing in my eyes.

    Now, is it reasonable to believe that if the group of researchers preparing these anyons splits into two groups, where one group performs the experiment according to secret instructions from the other group, then this can somehow have an effect on the type of noise these anyons have? Or perhaps that the mere possibility of the scientists splitting into two such groups gives enough evidence that my Conjecture 1 will not be satisfied? This seems to me unlikely.

    The second comment is that blind computation talks about a different, more general, model of computation. While in my work the basic noiseless model is one of pure evolutions (of course, with noise and FT it is not pure anymore), for the blind-computation model the basic noiseless computation is one of mixed states. Understanding what is going on and translating matters to this more general model requires some work.

    To a large extent this work has been done. The main issue, which supplied the motivation for the research I will now mention, is that you can have a quantum computation with a quantum computer controlled by a classical computer where at all times the state of the quantum computer is a maximally mixed state. This seems to be a quantum computation without entanglement. This was carefully studied by Dorit Aharonov (with no connection to QC skepticism), and while the paper has not yet been written, there is a talk by Dorit that was given at the Perimeter Institute and can be found on the Internet. When you consider the quantum computer and the classical control device as one large quantum computer (which brings us back to the pure-state model for the intended evolution) you recover the lost entanglement. In particular, it will be manifested, I believe, as emergent entanglement of the qubits in the quantum device.

    (Klas, I did not forget you.)

    • February 22, 2012 5:34 am

      Gil, when we translate the noise models that appear in threshold theorems (both classical and quantum) into geometric language, we see a substantial difference between the canonical assumptions of classical noise and the canonical assumption of quantum noise.

      We specify that our state-spaces (both classical and quantum) are equipped with both a symplectic structure (so that Hamiltonian dynamics is entropy-conserving) and a metric structure (so that thermal noise is entropy-increasing). Then, when we integrate stochastic trajectories (both classical and quantum), it is natural to consider the invariance properties of the rank-(0,2) tensor \mathcal{N} that characterizes the second moments of the noise increment.

      Then the following difference becomes apparent:

      (A) Classical dynamical simulations commonly specify \mathcal{N} \propto \text{Id}, that is, classical simulations commonly have no preferred noise basis.

      (B) Quantum dynamical simulations commonly specify that \mathcal{N} has polynomial rank, that is, thermal noise is restricted to a set of preferred basis vectors having only polynomially many dimensions.

      It seems (to me) that the non-classical ubiquity of preferred basis vectors in standard QM is intimately bound up with the viability (or not) of the Kalai conjectures. Provided that (1) hypothesis (B) is accepted as physically correct, (2) the noise is made suitably weak by skilled engineering, (3) perturbative expansions work, and (4) the state-space metric is assumed flat, then it seems to me that the standard theorems of FTQC are rigorously correct.

      Conversely, we discern four curmudgeonly grounds for QC skepticism:

      (1) Maybe quantum noise ain’t sparse? and/or

      (2) Maybe quantum noise ain’t weak? and/or

      (3) Maybe quantum noise ain’t perturbative? and/or

      (4) Maybe quantum state-spaces ain’t flat?

      We become mired in a quantum Groundhog Day when dialogs endlessly repeat: Yes, quantum noise is sparse, but is it sparse enough? Yes, quantum noise can be made weak, but can it be made weak enough? Yes, quantum noise effects can be nonperturbative, but can they be regarded as perturbative for practical purposes? Yes, non-Hilbert state-space geometries are useful for practical computations, but is the state-space of Nature strictly Hilbert?

      As Phil Connors (played by Bill Murray) showed us in the wonderful comedy Groundhog Day, the key to escaping Groundhog Day is to embrace the hard questions. That is, in the 30++ years that we have been pursuing quantum computing, have we been asking these four questions in the most enjoyable way? Have we been testing them in the most ingenious way? Can we conceive enjoyable new ways to embrace these four curmudgeonly questions?

      • February 22, 2012 7:06 am

        As a followup, please let me commend Elizabeth Allman’s delightful Salmon Prize as one example of a non-Groundhog non-Bœotian approach to a broad class of curmudgeonly 21st century questions (in both QM and biology).

      • February 22, 2012 6:39 pm

        Dear John,

        I don’t understand (A) and (B) well enough, so I can only guess what (1)-(4) stand for. The issue of whether the noise level can be made weak enough is intimately related to the issue of correlations. (One way to think about the questions I raise would be “how do quantum evolutions look above the threshold for quantum fault tolerance”.)

        Regarding geometry: my very basic approach is geometry-free. The models/requirements/axioms for decoherence that I am looking at are geometry-free. (I agree that the issue of FTQC may have bearing on geometry, and this was one of the items on my long list that I planned to comment on at some point.)

        About your suggestion that quantum state-space isn’t flat: the truth is that I don’t understand the non-flat evolutions you refer to. (And a sadder truth is that referring me to papers probably won’t help, although a face-to-face or very, very self-contained description might.) On the conceptual level, it is not clear to me whether what you are saying is: “non-flat evolutions are a useful way to describe, approximately and efficiently, quantum evolutions which really take place on some high-dimensional Hilbert space”. Or perhaps you regard it as a genuine alteration of the QM formulation.
        (You may regard the distinction itself as unimportant, but that too would be interesting to know.)

        I am also not sufficiently familiar with perturbative methods used to study noisy quantum systems. This is a place where my conjectures may indeed be in some conflict with some standard physical computations. Of course, I will be happy to know if this is the case.

        For example, look at Kitaev’s toric code that encodes 2 qubits, or the Kitaev-Preskill 4D code that encodes (to the best of my memory) 6 qubits. My Conjecture 1 says that what we should expect is a mixture of the intended pure state (represented by a codeword) with a cloud of unintended codewords (in addition to standard noise given by excited qubits). If this is indeed in contrast with what perturbation theory says, I will be very interested to know it. (I suppose that both Robert and John P. have studied these specific states.) A similar question can be asked about the anyons that Matt and I talked about.

      • February 22, 2012 7:00 pm

        Dear Gil — all of your remarks are very good, and in fact, my own remarks are based largely upon your posts (as I understand them).

        Regarding (A) and (B), the intuition is direct and simple: in classical simulations it makes physical sense (and is very common) to increment the position and/or momentum of a particle, or the direction of a classical spin, by a small amount in some randomly chosen direction. But when we model quantum noise, we *never* increment the state by a small randomly chosen state. Heck, we can’t even write down such a random state in any tractably short representation. For this pragmatic reason, all numerical quantum simulations are “low-noise” in a dimensional sense. Hopefully this low-dimension property of quantum noise expresses a deep truth about nature, since otherwise all error-correction schemes fail.

        Regarding state-space geometry, your post expresses my own views very nicely. After all, wooden-ship sailors understood spherical geometry long before Gauss and Riemann formalized their methods. Similarly, simulationists today understand reasonably well how to do Hamiltonian dynamics on curved state-spaces (both classical and quantum), even though those methods are not (yet) fully formalized for the quantum case.

        As for nonperturbative and/or ultraviolet-divergent and/or infrared-divergent noise models, they are located at the fuzzy boundary of my comprehension, and moreover toric/topological codes lie well beyond that boundary. So all I can do is share your interest in them … and hope that other GLL readers will post about them! 🙂

    • aram harrow permalink
      February 22, 2012 6:50 pm

      Gil, that Dorit talk looks interesting; thanks for mentioning it!

      I’m confused about how BECs could be a counterexample, though. As I see it, they aren’t quite the right format.

      For an example of a system satisfying conjecture 1, I assume you want a system that appears to have all the prerequisites for FTQC—individually addressable qubits, fast control, low single-qubit error rates, etc.—but nevertheless fails to permit effective error correction because of some sort of noise process that is dangerously correlated either with itself, or more likely, the computation.

      By contrast, plenty of systems just fail to even come close to meeting the requirements for FTQC. Not only do BECs qualify, but so do ordinary superconductors, and objects like computers, cups of coffee, etc.

      But maybe I’m putting words into your mouth.
      Can you say what a positive demonstration of conjecture 1 would look like? Is such a thing ever possible, given that it’s formulated as a statement about what is *impossible*?

      • February 22, 2012 9:42 pm

        Hi Aram,

        As you may remember, Conjecture 1 talks about a code which uses n qubits to encode a single qubit. The pure codeword states form a 2-dimensional Hilbert space inside the huge 2^n-dimensional Hilbert space corresponding to the individual qubits. My conjecture is about what mixed states will necessarily be created when you create such a code. It asserts that we will necessarily obtain a mixture of a “cloud” of codewords (on top of ordinary noise where each qubit can be excited independently).
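
        In rough shorthand (this is only a schematic, not the precise formulation in the papers): writing {|\psi_0\rangle} for the intended codeword, the conjecture says that the state actually created looks something like {\rho \approx \sum_i p_i\, |\psi_i\rangle\langle\psi_i| + (\text{ordinary qubit noise})}, where the {|\psi_i\rangle} are codewords forming a “cloud” around {|\psi_0\rangle} and the weights {p_i} are concentrated near {i = 0}.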

        This conjecture and small extensions of it apply to rather general situations. BEC states can be regarded as a code, although it is not the state of a single qubit that is encoded: the BEC state can be regarded as encoding the state of one atom (say) in a huge number of atoms. The type of noisy BEC state my conjecture (adapted to this case) predicts is a mixture of different BEC states.

        The cases I mentioned of anyons of various kinds, toric codes, etc. are actually defined as codes.

        “For an example of a system satisfying conjecture 1, I assume you want a system that appears to have all the prerequisites for FTQC—individually addressable qubits, fast control, low single-qubit error rates, etc.—but nevertheless fails to permit effective error correction because of some sort of noise process that is dangerously correlated either with itself, or more likely, the computation.”

        No, not necessarily; the conjecture is supposed to be rather general.

        There are many possible examples where Conjecture 1 can be tested. In my first post I asked about noisy cluster states and noisy anyons. We can try to think about other examples. The conjecture proposes how noisy states near pure encoded states look, so it has a fairly positive nature.

      • aram harrow permalink
        February 22, 2012 9:49 pm

        But I guess your conjecture has a big \forall quantifier: for all quantum codes, and for all attempts at implementing them. And implicitly this is followed (or maybe preceded) by an \exists quantifier: “there exists a nonnegligible source of noise that exists in Nature, and is not corrected by the code.”

        I can give examples of quantum codes and attempts at implementing them for which this conjecture holds, but these are just examples of bad codes, e.g. codes with low distance, or non-fault-tolerant preparation procedures. This is not at all surprising. E.g., BECs are a bad code, since their distance is 1, and the “preparation procedure” of cooling the system is known to put only a constant fraction of atoms into the ground state (albeit with overwhelming probability).

        But there’s an unlimited number of bad codes and bad preparation procedures. A positive demonstration of your conjecture must somehow look different.

      • February 22, 2012 9:47 pm

        In particular, I would regard a noisy BEC state where the noise consists just of independent excitations of the individual atoms as a counterexample.

    • February 23, 2012 1:09 pm

      Gil, sorry for taking so long to respond. I must admit I don’t understand your counter-argument. My point was that you can use blind computation to prepare one of two states (one entangled, one not) in such a way that the information about whether the state is entangled or not is not local to the entangling system. Blind QC does not allow quantum computing without entanglement, but the reduced density matrix of the system present in the QC is mixed and separable, whereas when the state of the other party is taken into account, entanglement is recovered in one of these cases.

      You seem to be talking as if this is some theoretical oddity which can never be realised in practice, but blind quantum computing *has* now been demonstrated experimentally in 4-qubit systems, with the information leakage bounded via tomography. See for example http://www.sciencemag.org/content/335/6066/303.abstract and the related perspectives.

      • February 23, 2012 11:26 pm

        Hi Joe, there is no need to be sorry and no hurry whatsoever. I think we discussed the blind computation issue quite a bit (and it is also mentioned in my papers). I tried to express my thoughts about it as well as I could, and perhaps it is time for us to keep thinking about it off-line (and hear what others have to say) and discuss other things. Thanks for your always-interesting remarks and participation, which I hope will continue.

  16. February 22, 2012 10:32 pm

    “I can give examples of quantum codes and attempts at implementing them for which this conjecture holds, but these are just examples of bad codes, e.g. codes with low distance, or non-fault-tolerant preparation procedures.”

    Aram, does this mean that we are in agreement in expecting Conjecture 1 to hold when codes are created with non-fault-tolerant preparation procedures?

    • aram harrow permalink
      February 22, 2012 10:39 pm

      “Aram, does this mean that we are in agreement in expecting Conjecture 1 to hold when codes are created with non-fault-tolerant preparation procedures?”

      Yeah, I generally agree, since every gate we ever do will have some nonzero constant error rate.

      Conversely, I think the conjecture is false because fault-tolerant preparation is, in principle, possible. (Note that there’s no such thing as fault-tolerant encoding, or more precisely, when converting qubits from code A to code B, the process can offer no error-protection greater than what is offered by A or B separately. Thus encoding qubits that are initially unencoded is an inherently non-fault-tolerant procedure. Instead, FTQC prepares encoded |0> states and then does encoded gates on them.)

      • February 22, 2012 10:55 pm

        “‘Aram, does this mean that we are in agreement in expecting Conjecture 1 to hold when codes are created with non-fault-tolerant preparation procedures?’

        Yeah, I generally agree, since every gate we ever do will have some nonzero constant error rate.”

        In this case, Aram, would you tend to agree that this makes both topological quantum computing and measurement-based quantum computing non-viable approaches to quantum computing, as they both seem to depend on creating, with “ordinary experimental procedures”, states which are supposed to violate Conjecture 1?

      • aram harrow permalink
        February 22, 2012 11:06 pm

        “In this case, Aram, would you tend to agree that this makes both topological quantum computing and measurement-based quantum computing non-viable approaches to quantum computing, as they both seem to depend on creating, with “ordinary experimental procedures”, states which are supposed to violate Conjecture 1?”

        I think I disagree here, although I’m not totally sure what “ordinary experimental procedures” are. FTQC merely relies on the right sequence of ordinary experimental procedures, so the two are not mutually exclusive.

        Anyway, neither topological QC nor MBQC (measurement-based QC) is intrinsically fault-tolerant (well, there might exist versions of topological QC that are intrinsically FT, but none are yet known in 3 dimensions). But they are both compatible with fault tolerance. For example, the surface code can be actively error-corrected, and this gives a memory threshold of about 1%. (Here’s a recent paper arguing that this has all the necessary ingredients for FTQC: http://arxiv.org/abs/1111.4022 )

        MBQC is nonviable if you do it without any error-correction. But it too can be combined with quantum error correction in a way that allows threshold theorems to be proven. Here’s one reference:
        http://arxiv.org/abs/quant-ph/0405134

  17. February 23, 2012 8:59 pm

    Gil Kalai asks: On the conceptual level, it is not clear to me whether what you are saying is: “Non-flat [quantum dynamical] evolutions are a useful way to describe, approximately and efficiently, quantum evolutions which really take place on some high-dimensional Hilbert space.” Or perhaps you regard it as a genuine alteration of the QM formulation. (You may regard the distinction itself as unimportant, but that too would be interesting to know.)

    Gil, you have asked a very interesting question, and it seems to me that it is perfectly consistent to hold both opinions.

    The practical utility of non-flat (that is, Kählerian) quantum state-spaces is evident already: there are tens of thousands of articles that compute quantum dynamics on these state-spaces. Regarding the physical reality of non-flat quantum state-spaces, my survey of the literature — which is posted on the ComplexityTheory StackExchange as an answer to Joshua Herman’s question [what is a] “Physical realization of nonlinear operators for quantum computers” — suggests to me that (somewhat contrary to a widespread folk belief among experimental physicists) there is at present no very strong experimental evidence either way regarding the flatness of Hilbert space.

    Thus, my answer will be supported solely by frail strands of philosophy and hope … which amounts to asking: Which alternative is more aesthetically pleasing, flat versus nonflat quantum state-spaces?

    One such hope-driven philosophical answer is this: We appreciate that Nature has made space curved and time relativistic, perhaps so as to economize in creating a universe that is finite in spacetime. It is therefore plausible that Nature has made quantum state-space non-flat, and possibly dynamic in its geometry, so as to similarly economize in creating a universe that is polynomial in dimensionality. Moreover, in both cases we may presume that Nature realizes her economy via wonderful mathematics.

    With regard to classical physics, the slow emergence of the wonderful mathematics of general relativity eventually catalyzed dozens of subtle, sensitive, and beautiful experiments (perihelion precession, stellar parallax, Gravity Probe B, the cosmic microwave background, gravitational lensing, etc.), not to mention amazing technologies (GPS!). Similarly in the quantum case, the wonderful mathematics of curved quantum state-spaces is slowly coming into focus, and in coming decades we have every reason to be confident that this mathematics similarly will catalyze dozens of subtle, sensitive, and beautiful experiments, not to mention amazing technologies (spin microscopes!).

    By deliberate intent, this answer embodies the fondest hopes of the world’s mathematicians, scientists, engineers, and even medical researchers … and to me, this seems like a good idea!  🙂

    • February 28, 2012 8:59 pm

      “Gil, you have asked a very interesting question, and it seems to me that it is perfectly consistent to hold both opinions.”

      Thanks for the answer, John,

      If I understand you correctly (but do correct me otherwise), the non-flat quantum dynamical evolutions you mention are accommodated by “ordinary” quantum mechanics, so they cannot in any way falsify QM or show that the QM formulation is not sufficient to describe our physical world. Is it a fair description of your view (which includes as perfectly consistent the two perspectives I asked about earlier) that you regard the relation between “ordinary” (flat Hilbert-space) QM and the non-flat versions as like the relation between machine language and, say, FORTRAN?

      (Unfortunately I am not familiar with your suggested theories on the technical level.)

      • February 29, 2012 8:45 am

        Gil, what I have in mind is less like FORTRAN versus machine code, and more like the difference between arbitrary-precision arithmetic (sometimes called “bignum” arithmetic) and IEEE floating-point arithmetic.

        We all know that floating-point arithmetic can be severely criticized on formal mathematical grounds. Heck, floating-point addition is not even associative … formally, “float” operations don’t even form an algebra!
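
        For instance, in Python (whose floats are IEEE 754 doubles), a few lines exhibit the failure of associativity:

          a, b, c = 0.1, 0.2, 0.3
          print((a + b) + c)                  # 0.6000000000000001
          print(a + (b + c))                  # 0.6
          print((a + b) + c == a + (b + c))   # False: addition is not associative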

        And yet, for most practical purposes floats work just as well as bignums (for example, floating-point operations are approximately associative), and of course floating-point calculations generically are exponentially faster and more economical of storage.

        So it is natural to ask: Is there an analog of floating-point arithmetic for quantum simulation, that for most practical purposes yields exponential speedups? The answer is “yes.”

        As researchers we are all of us uniquely fortunate that in the late decades of the 20th century the tools of algebraic geometry and geometric dynamics evolved to a point that we can map quantum “bignum” calculations onto quantum “float” calculations, by procedures that can be naturalized, formalized, dualized, localized, and linearized … which are all of them “izes” that provide solid foundations for mathematical narratives! 🙂

        Therefore, (in my view) the following three reasons for studying “quantum floats” are equally valid, and are wholly compatible, such that we need not privilege any one of them over the other two:

        (1) For systems engineers the practical advantages of “quantum floats” are sufficiently great that a working knowledge of them has become essential to the social and technical process of designing and producing any product that presses against the quantum limits to speed, size, sensitivity, and power efficiency;

        (2) For mathematicians “quantum floats” are fun and beautiful and are linked to numerous mathematical questions that are open, broad-ranging, and considered deep;

        (3) For physicists there is the great and still-unanswered experimental question of whether Nature conducts her calculations via the exact but immensely slow and memory-intensive “quantum bignum” algorithms of Hilbert space, or whether she economizes by computing with infinitesimally imprecise yet exponentially more efficient “quantum floats”!

        It is evident that exploring the mathematical nature and the practical applications of “quantum floats,” for any or all of the preceding three reasons, requires that we embrace much the same engineering, math, and science challenges as constructing quantum computers, in service of objectives and narratives that are broader and (in the end) more relevant to the challenges of our 21st century.

        This is why I am very grateful to you (Gil) and Aram for your debate, and to Dick and Ken for hosting it!

        And that is why (it seems to me) the Harrow-Kalai debate here on GLL is only superficially about the feasibility of quantum computers, because the real debate here on GLL inextricably concerns a broad class of 21st century engineering enterprises that are comparably urgent, and mathematical questions that are comparably wonderful, and scientific challenges that are comparably exciting, to those of any preceding century. 🙂

  18. February 23, 2012 11:11 pm

    Let me remark on Conjecture 1 which is at the heart of the matter, and was the center of my recent exchange with Aram. As a reminder, Conjecture 1 asserts that a noisy encoded qubit is a mixture of a “cloud” of codewords around the intended encoded state (in addition to some standard noise). This conjecture has wide applicability and it can be extended to cases where we talk about more complicated “qudits” (like BEC states).

    Aram wrote: “I can give examples of quantum codes and attempts at implementing them for which this conjecture holds, but these are just examples of bad codes, e.g. codes with low distance, or non-fault-tolerant preparation procedures. This is not at all surprising.”

    I agree that this phenomenon is not surprising, and the conjecture is that this phenomenon always holds. A weaker conjecture is that this phenomenon holds for quantum devices which do not apply quantum fault tolerance based on a qubits-and-gates architecture. The weaker conjecture is already in contrast with some researchers’ hopes for anyonic systems that are expected to be built in the next few years.

    Conjecture 1 is relevant to topological quantum computing and measurement-based quantum computing as ideas for shortcutting (or replacing) the standard qubits-and-gates model. (Of course, these forms of quantum computing are important also in the framework of qubits-and-gates computers.) Such “shortcut” ideas are based on preparing certain complicated states, not by an ordinary quantum computer, but by special-purpose experimental processes which by themselves do not involve fault-tolerance protocols. The concern is that the resulting noisy states will satisfy Conjecture 1 and will not enable universal quantum computing.

  19. February 24, 2012 8:02 am

    Gil, also relevant to conjecture 1 is Dick’s statement:

    The basic interactions in physics involve two or three particles, for instance when a photon bounces off an electron. … To get interactions with more particles, you need to go to higher levels of perturbation theory, but as you do so, amplitudes go down exponentially.

    Here emphasis has been added to the assertion that in higher orders of perturbation theory “amplitudes go down exponentially.” Generically speaking, Dick’s assertion is simply not true, and it would be fairer to say: “In higher orders of quantum perturbation theory amplitudes generically increase exponentially.”

    It is striking that noise models for which FTQC is feasible typically are associated to microscopic physics for which perturbative expansions diverge; this includes (for example) ohmic baths, unit-efficiency single-photon sources, and unit-efficiency photon detectors … one wonders whether a more rigorous noise & renormalization analysis would tend to affirm Conjecture I?

    More broadly, one of the virtues of Kalai Conjecture 1 (it seems to me) is that it brings a fresh perspective to a broad class of long-standing open problems in quantum physics. These problems center upon the non-perturbative quantum dynamics that for decades has obstructed our understanding of the vacuum considered as a zero-temperature bath.

    E.g., perhaps Conjecture 1 motivates us to conceive the divergences associated to the physical vacuum as Nature’s fundamental mechanism for obstructing the physical universe from violating the (extended) Church-Turing thesis. And from a pragmatic point-of-view, all of the technological roadmaps to practical QC that ever have been proposed, in pushing against quantum limits to spatial localization, thermal isolation, and signal detection, require a firmer grasp of nonperturbative quantum dynamics than we presently possess.

    • February 24, 2012 8:50 am

      John, it is Aram’s statement, not Dick’s.

      • February 24, 2012 10:35 am

        You are right about the attribution (and I apologize too for the unmatched quoting). To the extent that there is any accessible textbook on these renormalization/divergence issues, Tony Zee’s well-reviewed Quantum Field Theory in a Nutshell is perhaps that book. And in Zee’s book we read:

        I believe that any self-respecting physicist should learn about the history of physics, and the history of quantum field theory is among the most fascinating …[skip 461 pages!] … In all previous revolutions in physics, a formerly cherished concept has to be jettisoned. If we are poised before another conceptual shift, something else might have to go.

        Quantum computing research in general, and the Kalai conjectures in particular, are valuable partly for their Zee-style hints regarding “what has to go.” Thank you for posting them, Gil! 🙂

      • February 24, 2012 3:53 pm

        “…is among the most fascinating …[skip 461 pages!] … In all previous” John, one reason for skepticism about “open science” of various kinds, and about scientific blog discussions etc., is that we cannot skip these 461 pages if we wish to study the matter seriously. 🙂

      • February 24, 2012 4:40 pm

        Gil, it was never my intention to imply that one should skip the middle pages of Zee’s Quantum Field Theory in a Nutshell … the intent was rather to link Zee’s introductory themes to his concluding themes.

        In fact, among many hundreds of field theory texts, Zee’s is acknowledged to stand out for the well-crafted unity of its math-and-physics narrative, such that Zee’s central chapters are those that one is *least* well-advised to skip! 🙂

        A specific example of particular relevance to quantum computing is Zee’s integrative presentation of the non-perturbative dynamical origins of Anderson localization, which provides a starting point for reading the topical QIT literature like Wootton and Pachos “Bringing order through disorder: Localization of errors in topological quantum memories” (arXiv:1101.5900, 2011) … not to mention 2,824 more arxiv preprints, spanning many disciplines, that similarly discuss Anderson localization.

        Does this foretell a blurring of the boundaries between QM/QC/QIT and disciplines like condensed matter physics and elementary particle physics? The Magic Eight Ball seems to answer “signs point to yes“!   🙂

      • February 25, 2012 9:12 am

        John, no critique intended, and your reference to Zee’s book is surely appreciated; I was just reminded how difficult it is to read 461 pages (or even 46 pages) in a text-book, even an excellent one, especially in an unfamiliar area.

  21. February 24, 2012 12:57 pm

    Klas wrote: “Gil, I don’t think this is enough information to be very meaningful. A simple example of a noise which satisfies your description is that with probability 1/10,000 we flip all qubits simultaneously. This gives completely correlated errors. However it is very easy to correct this particular noise.
    As Aram has already pointed out, correlated errors are easy to correct if the noise structure is known.”

    Klas, perhaps the short answer is this: we talk about errors which amount to information leaks or measurements. In this context it is a reasonable assumption (especially in the quantum case) that if a large fraction of your memory is erased, e.g. replaced by random data, you will not be able to correct it. So for example, a noise for which, with some small probability, all qubits (or bits) of your computer will be replaced by random qubits (bits) is assumed not to be correctable. You are right that (both in the classical and quantum case) this assumption is not always correct (especially if you put in the noise yourself and keep track of it), but it is a reasonable assumption.

    • March 1, 2012 7:10 am

      Well if you replace all qubits by random ones there will certainly be a problem, but it looks to me like even this must be tuned in a nontrivial way in order to stop qc from being efficient.

      One of the effects of having highly correlated errors is that if the error rate for the individual qubits is fixed and we increase the correlation, then whenever we have one error we are in fact likely to have many of them, as you have pointed out. However, since the individual error rate is fixed, this also means that most of the time we must have a lot fewer errors, perhaps even none, since otherwise the expected number of errors would contradict the individual error rates, by linearity of expectation.

      So if we have no errors most of the time it looks like we could simply repeat the computation and compensate for the errors by taking the majority answer.
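
      (Quantitatively, by a standard Chernoff/Hoeffding estimate, and under the extra assumption that separate runs fail independently: if each run returns the correct answer with probability {q > 1/2}, then the majority vote over {m} repetitions is wrong with probability at most {e^{-2m(q-1/2)^2}}, so a modest number of repetitions already drives the failure probability down rapidly.)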

  22. February 25, 2012 7:01 pm

    One thing I am curious about is the predictions (based on perturbation methods, I believe) regarding Kitaev’s 4D model. (I believe that both John Preskill and Robert Alicki, who participated in our discussions, studied this model.) If I understand correctly, the behavior described by perturbative computations suggests that you will reach a pure codeword up to a small level of standard (independent) noise. Is this indeed the case, and what is involved in this computation? My Conjecture 1 (which is supposed to hold in every dimension) asserts that you can only reach a mixture of pure states near a single intended pure state.

    • February 25, 2012 7:07 pm

      The predictions for the 4D toric code are qualitatively like what you’d see in the 2D classical Ising model with no applied field. That is, consider bits in a plane subject to dynamics where at each time step a bit flips with probability 0.99*eps to equal the value of the majority of its neighbors, and with probability 0.01*eps to equal the opposite value.

      The resulting state would by no means be pure, but a bit would still be protected for an amount of time exponential in the size of the system. This is because the errors created in this process would w.h.p. always be correctable.
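
      For concreteness, here is a minimal numerical sketch of this classical majority-vote caricature; the lattice sizes, the value of eps, the periodic boundaries, and the tie-breaking rule are illustrative choices and not part of the comment above.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(grid, eps):
    """One noisy update: each +/-1 bit moves toward the majority of its 4
    neighbors with probability 0.99*eps, and away from it with probability 0.01*eps."""
    L = grid.shape[0]
    nbr_sum = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0)
               + np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    majority = np.where(nbr_sum >= 0, 1, -1)   # ties broken toward +1
    r = rng.random((L, L))
    new = grid.copy()
    relax = r < 0.99 * eps
    flip = (r >= 0.99 * eps) & (r < eps)
    new[relax] = majority[relax]      # relax toward the local majority
    new[flip] = -majority[flip]       # rare "thermal" flip against the majority
    return new

def logical_bit_survival(L=32, eps=0.1, steps=2000):
    """Fraction of time steps during which the global majority still encodes the initial +1."""
    grid = np.ones((L, L), dtype=int)   # encode a logical +1
    ok = 0
    for _ in range(steps):
        grid = step(grid, eps)
        ok += int(grid.sum() > 0)
    return ok / steps

if __name__ == "__main__":
    for L in (8, 16, 32):
        print(L, logical_bit_survival(L=L))
```

      The point of the toy model is that the time before the global majority (the encoded bit) flips grows very rapidly with the lattice size, even though the instantaneous state is never close to pure.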

    • Gil Kalai permalink
      February 25, 2012 7:47 pm

      Thanks Aram, the question is whether the mixed state will be of the form:

      (*) A pure codeword (or something extremely close to it as the size of the system grows) + independent excitations

      or as Conjecture 1 predicts:

      (**) A substantial mixture of codewords (no matter how large the system is) + independent excitations.

      And also, I was asking about the physics calculations that lead to (*).

    • February 25, 2012 8:04 pm

      I think the references for the thermal stability of the 4-d toric code are
      http://arxiv.org/abs/quant-ph/0110143
      http://arxiv.org/abs/0811.0033

    • February 26, 2012 10:16 am

      Aram, thank you for these references! A recent preprint that extends these themes is Bravyi and Haah “Analytic and numerical demonstration of quantum self-correction in the 3D Cubic Code” (arXiv:1112.3252, 2011), which includes numerical calculations that largely affirm the theoretical predictions of the articles you suggested.

      Supposing that we seek to reconcile the (wonderful!) results of these articles with the (similarly wonderful!) Kalai Conjectures, how can we proceed concretely?

      One avenue is to show that the idealized noise models of the FTQC literature depart substantially and essentially from real-world quantum noise. And here it seems to me that the FTQC literature does not reflect all that much progress during the decade 2001–2011 (that is, arXiv:quant-ph/0110143–arXiv:1112.3252). Bravyi and Haah’s article succinctly states the consensus precedent as of 2011: “[Ohmic noise] was adopted as a model of the thermal dynamics in most of the previous works with a rigorous analysis of quantum self-correction,” and with no further comment Bravyi and Haah adopt the same noise model. Implicit in this precedent is the folk-assumption that a rigorous microscopic theory of quantum noise would yield an effectively Markovian model.

      To strengthen the justification for this precedent, the QM/QC/QIT community would benefit from (no doubt difficult, lengthy, and even tedious) analyses that grapple concretely with the tough mathematical issues attendant to FTQC folk-assumptions, with special regard to the divergences of field theory (particularly in regard to near-Ohmic noise in condensed matter physics) and also in regard to the divergences of cavity QED (particularly near-unit-efficiency sensors and photon sources).

      To the extent that there already exist such analyses, then hopefully GLL readers will post links to them!

      A Kalaist point of view is that in-depth noise-and-detector analyses may yield surprises having fundamental significance for QM/QC/QIT … and of course this possibility is a sufficient justification for attempting them. More broadly, from a systems engineering point of view, the more that we understand about quantum noise and sensing, and the better-integrated our knowledge of these phenomena, and the more widely disseminated that knowledge, the faster and more confidently we can launch the great quantum enterprises of the 21st century.

      More references, please!   🙂

  23. February 27, 2012 1:14 am

    As the geniuses proceed in this conversation, a dog under the table would appreciate any diversion in terms of a general description of ohmic noise and the toric code. I know that sounds ignorant, but dissipation describes itself to me in my gut while this other stuff is pretty heady… i.e., beyond me.

    Who is Amir Caldeira? Why do path integrals enter the discussion here? How is the Hilbert Space reduced?

    yours truly, etc. etc.

  24. February 28, 2012 12:20 pm

    Three remarks: 1) Aram wrote: “There are many reasons why quantum computers may never be built,” so let me emphasize that Conjectures 1-4 are offered to represent the nature of decoherence for quantum computers/emulators/evolutions that do not enact quantum fault tolerance, whatever the cause for that may be.

    2) A brief explanation of why error synchronization goes hand in hand with a scaling up of the error rate in terms of qubit errors. The relevant way to measure the error rate of a quantum computer for the purpose of quantum error correction is in terms of qubit errors per computer cycle. On the other hand, the error rate in terms of trace distance per tiny unit of time is more natural, because this is a measure which remains fixed under unitary evolutions. The trace distance (if I am not mistaken) does not distinguish between changing a single qubit to a maximally mixed qubit with probability epsilon while leaving the other qubits unchanged, and changing all the qubits to a maximally mixed state with probability epsilon. In terms of qubit errors the latter case represents an n times higher error rate. (A small numerical check of this is sketched after these remarks.) When you do not correct errors and let them propagate under complicated unitary evolutions, a single-qubit error will quickly transform into an n-qubit error, so this will lead to qubit errors scaling up linearly. If your computer leaves a few qubit errors “under the fault-tolerance radar,” these errors, once propagated, will be devastating.

    3) We talked about the classical fault tolerance demonstrated by digital computers, but we did not touch much on the analogy with noise and denoising in classical systems. Aram mentioned stock market behavior, so let me give one remark based on this example. We can consider two types of errors or noise when describing the values of stocks:

    1) The errors in transmitting over noisy channels today’s stock market values,

    2) The errors in predicting tomorrow’s stock market values.

    It is reasonable to conjecture that errors of the second kind will satisfy (even in principle) properties similar to Conjectures 3 and 4: namely, that if the values of two stocks are correlated then the errors in predicting their values will be positively correlated, and that there is a substantial probability of making errors in the prediction of a large number of stocks simultaneously. Of course, this will not be the case for errors of the first kind. Basing the distinction between errors of these two kinds on formal grounds will not be easy.
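
    The numerical check promised above: the sketch below (my own illustration; the values of n and epsilon and the choice of the state |0…0> are arbitrary) compares the trace distance from the intended state when a single qubit is replaced by a maximally mixed qubit with probability epsilon, versus when all n qubits are replaced with probability epsilon. The two distances differ by less than a factor of 2 regardless of n, whereas counted in qubit errors the second case is n times worse.

```python
import numpy as np

def trace_distance(rho, sigma):
    """Trace distance 0.5 * ||rho - sigma||_1 for Hermitian matrices."""
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.abs(eigs).sum()

def kron_all(mats):
    """Tensor (Kronecker) product of a list of single-qubit matrices."""
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

n, eps = 4, 0.01                                   # illustrative values
proj0 = np.array([[1.0, 0.0], [0.0, 0.0]])         # |0><0|
mixed = np.eye(2) / 2                              # maximally mixed qubit

rho = kron_all([proj0] * n)                        # intended pure state |0...0><0...0|
sigma_one = kron_all([mixed] + [proj0] * (n - 1))  # first qubit erased
sigma_all = np.eye(2 ** n) / (2 ** n)              # all n qubits erased

rho_a = (1 - eps) * rho + eps * sigma_one          # one-qubit error with prob. eps
rho_b = (1 - eps) * rho + eps * sigma_all          # n-qubit error with prob. eps

print(trace_distance(rho, rho_a))   # = eps/2            = 0.0050
print(trace_distance(rho, rho_b))   # = eps*(1 - 2**-n) ~= 0.0094
```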

  25. February 28, 2012 6:10 pm

    Dear all, I have a secretarial request: please, please continue the discussion about thermodynamics here (and not where it started). Also please start a new thread so that the comments will not be painfully narrow (in the graphic sense).

  26. February 28, 2012 7:05 pm

    Dear Gil. I am one of the old-post offenders, and will continue my thermodynamical remarks here. I am drafting a GLL post (hopefully for next week) that will specify a concrete class of Hamiltonian/Lindbladian dynamical models that:

    (1) are thermodynamically valid on Hilbert space, and
    (2) remain valid upon pullback onto non-Hilbert spaces, and
    (3) are mathematically fun and useful in practical engineering, and
    (4) come with no warranty that Nature uses them, and even
    (5) no warranty that we can feasibly test whether Nature uses them.

    To extend Dave Bacon’s perspicacious remarks in the above (older) GLL thread, the engineering analysis of nonequilibrium dynamics commonly is associated to the relaxation of “small-to-moderate” disequilibria, whereas FTQC roadmaps are in some sense ambitious to generate (by gate logic or adiabatic evolution) and sustain indefinitely (by various error-correction strategies) states of “non-classically large” quantum disequilibrium.

    How does one naturally distinguish “small-to-moderate” disequilibria from “non-classically large” disequilibria? Don’t ask me! Fortunately, Charles Bennett has a very interesting post on Quantum Pontiff, titled “What increases when a self-organizing system organizes itself? Logical depth to the rescue”, that might suggest informatically natural measures of the quantum disequilibrium that is associated to FTQC.

  27. Luca permalink
    February 29, 2012 7:14 am

    Hi Gil! I have no formal background in quantum computers, quantum mechanics and so on, but I would like to ask: if your conjectures are correct, or if it will be difficult to scale up QC (up to the point that it will not be possible at all) for other reasons, does this render false what some researchers are saying about our universe, that is, that the universe itself could be a QC?!?
    Thx!

    • Gil Kalai permalink
      March 1, 2012 8:51 am

      Hi Luca, I think that something similar will come up in Aram’s next post.

    • March 8, 2012 12:35 am

      Luca, I am not familiar with those works claiming that the universe itself is a QC, or with where the researchers take this claim. Generally speaking, I don’t think these works refer to computational complexity issues, or to quantum fault tolerance, so I would guess that my conjectures are not relevant to these claims. Specific works where the discussion is based on states that require quantum fault tolerance/quantum error correction are indeed in tension with my conjectures.

  28. March 1, 2012 6:59 am

    Aram, the recognition that “correlated [quantum] noise would cast doubt on … the idea of reductionism” surely is a Great Truth. And for its opposing Great Truth we have Tony Zee’s remark in his Quantum Field Theory in a Nutshell:

    In all previous revolutions in physics, a formerly cherished concept has to be jettisoned. If we are poised before another conceptual shift, something else might have to go.

    Although people put forth various no-go arguments to the effect that the geometry of quantum state-spaces necessarily is Hilbert, these arguments are reminiscent of Kant’s cherished yet utterly mistaken faith that the geometry of Newtonian state-space necessarily is Euclidean. In this regard we have two more celebrated aphorisms to guide our thinking:

    The virtue of a logical proof is not that it compels belief but that it suggests doubts. The proof tells us where to concentrate our doubts.

          Henry George Forder
          Foundations of Euclidean Geometry

    ———————

    My conviction that we cannot thoroughly demonstrate geometry a priori is, if possible, more strongly confirmed than ever. It will take a long time for me to bring myself to the point of working out and making public my very extensive investigations on this subject, and possibly this will not be done during my life, inasmuch as I stand in dread of the clamors of the Boeotians, which would be certain to arise if I should ever give full expression to my views.

          Carl Friedrich Gauss
          Letter to Friedrich Bessel

    With specific regard to thermodynamical issues in quantum computing, we are all familiar with von Neumann’s expression for the entropy of a quantum system. And when we localize and geometrize the von Neumann entropy, with a view toward its natural pullback onto post-Hilbert state-spaces, we encounter logarithmic divergences that strikingly resemble the logarithmic divergences that are generically associated with renormalization in quantum field theory.
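
    One simplified way to see such a divergence (a finite-dimensional caricature, offered only as an illustration and not necessarily the pullback construction alluded to above) is already visible in the spectral form of the entropy: writing {S(\rho) = -\mathrm{Tr}(\rho \ln \rho) = -\sum_i \lambda_i \ln \lambda_i}, the gradient {\partial S/\partial \lambda_i = -(1 + \ln \lambda_i)} diverges logarithmically as {\lambda_i \to 0^{+}}, that is, precisely at the rank-deficient boundary of state space where low-rank (pulled-back) states reside.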

    Field theory teaches us to rest easy: logarithmic divergences mean that we should regard field theory not as a fundamental theory, but rather as an effective field theory. Therefore, by similar reasoning applied to the thermodynamical aspects of quantum computing, perhaps we should regard Hilbert state-spaces not as fundamental state-spaces, but rather as effective state-spaces.

    When we travel this geometrically naturalized post-Hilbert path, it is natural to ask ourselves Tony Zee’s happy question: “What formerly cherished concept has to be jettisoned?” Gil Kalai’s conjectures implicitly suggest to us that spatial localization and thermal isolation are the concepts that perhaps we may learn to fruitfully regard not as absolute, global, and static (in the algebraic sense of Euclid-Kant-Cauchy), but as approximate, local, and dynamic (in the geometric sense of Gauss-Riemann-Einstein).

    A great virtue of quantum computing research is that it inspires us to a thorough investigation (both mathematical and experimental) of the Kantian ideals of localization, isolation, and thermalization. And we are not surprised to find that — as with prior Kantian ideals in physics and mathematics and indeed pretty much all else — Nature’s physical realizations are more subtle than was originally conceived.

    To state this point starkly, perhaps Nature does not support the ideals of localization, isolation, and thermalization, any more than she supports the ideals of Euclidean geometry. This (for me) is one main implication of the Kalai Conjectures.

    Obviously it is neither feasible, nor necessary, nor even desirable, that everyone think alike in these regards. For every algebraist who stands entranced by the crystalline perfection of the Euclidean plane, there is a geometer who, upon “jettisoning that cherished perfection” (in Tony Zee’s phrase) finds that

    It was as if I had fled the harsh arid steppes to find myself suddenly transported to a kind of ‘promised land’ of superabundant richness, multiplying out to infinity wherever I placed my hand on it, either to search or to gather …

          Alexander Grothendieck
          Recoltes et Semailles

    It is a hopeful sign that at the beginning of our shared 21st century quantum journey, we are finding interesting new mathematics. Now we stand urgently in need of new classes of quantum experiments, whose meaning is illuminated by this new mathematics, and new classes of applications, that will help create new enterprises, that will address the urgent needs of our 21st century.

    All of which is good. And for these inspirations, please accept this appreciation and thanks, Aram and Gil and GLL! 🙂

  29. Gil Kalai permalink
    March 1, 2012 12:17 pm

    Some answers to Boaz Barak:

    Dear Boaz,

    1) Boaz: “I know very little about quantum noise models and error correction, so am using my (probably flawed) intuition from classical computation that in principle one should be able to reduce the noise to an arbitrarily low level which is some function of the budget you have to spend. I guess this is in sharp contrast to your Conjecture 1, and I’m very curious to understand this.”

    Several intuitions regarding classical computers should be carefully examined when they are “transferred” to the quantum case. Indeed, Conjecture 1 is in sharp contrast with the intuition that a large “budget” allows one to reduce the error (for a single encoded qubit) to an arbitrarily low level. (See below for four such intuitions.)

    2) Boaz: “What somewhat still confuses me is the following. You could have simply conjectured that for some fundamental reason, the noise rate will always stay above, say, 45% or whatever the number is for the threshold theorem to fail. But you seem to assume that it would be physically possible to reduce the expected number of errors to an arbitrarily low amount, but somehow correlation will stay high.”

    This is a very good question. The purpose of the two-qubit conjecture is to identify the simplest possible distinction (that I could think about) between quantum computers that allow (or enact) fault tolerance and quantum computers that don’t allow (or enact) fault tolerance. The conjecture is not about the entire evolution, or about the behavior in one computer cycle, but about a single “snapshot:” you compare for two qubits the intended pure joint state and the actual noisy state.

    I do not assume at all that it will be possible to reduce the expected number of errors to an arbitrarily low amount. Already the standard noise models assume, to the contrary, that the error rate will be above some constant, so two qubits seem necessary to present a “clean” distinction between standard noise models and mine.

    As I said already, e.g. in this comment, the conjecture that for very complicated quantum states (highly entangled) the errors will be substantially correlated is closely related to the rate of error scaling up for complicated quantum states.

    3) Boaz: “So I guess my question is whether in quantum computing also, if I didn’t want a machine that runs forever, but only one that can handle T computational cycles, shouldn’t I be able to build one by spending f(T) dollars, for some function f()? This is of course a rephrasing of the question I asked Gil before, to which he promised an answer,”

    This is also an excellent question, and again I don’t think the intuition from classical computers is correct. This is related to some planned later discussions with Aram but a short reply is that I speculate that for general quantum circuits f(T) will be infinite already for some bounded (even rather small) value of T.

    4) Boaz: “I can’t say I fully understand this ‘noisy qubit assumption’ ”

    I think I referred to the question that came up in our conversation of whether Conjecture 1 alone suffices to reduce quantum computers to BPP. The question is still not formally described, and it is a rather wild speculation to expect a yes answer.

    There were other questions that you asked, Boaz, some related to computational complexity that we may return to.

    Finally, let me “try on you” some proposed distinctions between digital computers and quantum computers. The first three we already mentioned.

    1) Can an essentially noiseless encoded qubit be constructed?

    My conjecture: no, FTQC: yes.

    2) Is there a systematic relation between the state and the noise?

    My conjecture: yes, FTQC: no.

    3) Are there general-purpose quantum emulators, or perhaps every type of quantum state/evolution requires a special machine?

    My conjecture: the latter, FTQC: the former.

    A fourth question that I plan to discuss later is:

    4) Can you hear the shape of a quantum computer?

    For digital computers we know (and this looks completely obvious) that we can implement any computer program on part of the computer memory of arbitrary geometry. If universal quantum computers can be built this will hold also for quantum computers. I conjecture that in the quantum case this is no longer true.

    • Boaz Barak permalink
      March 2, 2012 10:58 am

      Thank you Gil for your thoughtful responses, I think I understand things better now. One minor comment is regarding your second answer: is it really the case that standard noise models *assume* that the error can’t be brought below a certain absolute constant, or are they just happy that they have the threshold theorem, so that if instead of an absolute constant the threshold were 1/(log n) or maybe even 1/sqrt(n) they wouldn’t say it was physically impossible to realize.

      • March 2, 2012 1:39 pm

        There are some examples in nature where some forms of noise scale like 1/n. For example, the Mössbauer effect:
        http://en.wikipedia.org/wiki/M%C3%B6ssbauer_effect

        The idea is that a photon is emitted from one atom in a lattice of n atoms, and the recoil is divided among the entire lattice. This reduces one type of noise (the shift of frequency of the photon caused by the recoil) so it scales like 1/n (or maybe 1/sqrt(n)), and usually n is like 10^20, so this is generally considered negligible.
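
        (A rough back-of-envelope version of that scaling, using textbook recoil kinematics and ignoring the quantum-mechanical recoil-free fraction: a free nucleus of mass {M} emitting a photon of energy {E_\gamma} recoils with energy {E_R = E_\gamma^2/(2 M c^2)}; if instead the whole lattice of {n} atoms recoils rigidly, the recoil energy becomes {E_\gamma^2/(2 n M c^2) = E_R/n}, so the recoil-induced fractional frequency shift {\Delta E/E_\gamma = E_\gamma/(2 n M c^2)} falls off like {1/n}.)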

    • March 2, 2012 7:05 pm

      Gil writes:

      2) Is there a systematic relation between the state and the noise?
      My conjecture: yes, FTQC: no.

      Actually, in the theory of FTQC with independent noise, one expects there to be a relation between the state and the noise. This statement is a little vague, so let me put it differently: at intermediate times, the difference between the actual computation and the ideal computation will consist of an error pattern that is *not* i.i.d., but has a complicated relationship with the computation.

      However, the proofs still work. There are deviations from i.i.d. but everything is rigorously controlled.

      One other thing. John Sidles suggests that our inability to tame noise in the lab may be a sign of ugly correlated decoherence. But in many cases the difficulty comes from other things, like addressability (e.g. in optical lattices, or NMR in many cases). This makes it hard to imagine a unified theory of “quantum computers won’t work.”

      • March 2, 2012 7:08 pm

        To clarify: what I said refers to the theory, when we assume a noise model that is proven correctable.

        In general, my belief is that the noise models of nature are generally correctable (once the single-qubit rates are low enough) even in cases where no one has proved it. But this is my own guess, and what I said above about the theory is a more objective fact. 🙂

      • March 2, 2012 8:00 pm

        Aram, it seems (to me) that it is similarly difficult to imagine “a unified theory of ‘how quantum computers won’t work’” as it is to imagine “a feasible design of ‘how quantum computers will work’”.

        In the latter regard we have this week’s preprint by Fujii, Yamamoto, Koashi, and Imoto, titled “A distributed architecture for scalable quantum computation with realistically noisy devices.” (arXiv:1202.6588v1).

        In using the word “scalable” in their title, these authors aren’t joking: their reference FTQC factors 1024-bit integers via a processor of {2 \times 10^{21}} gates. Thus far in the debate, there hasn’t been much discussion of concrete FTQC design issues, or of how many state-space dimensions an FTQC will really require, so perhaps this article will inspire some.

        More broadly, it seems (to me) that a useful lesson of Bill Murray’s Groundhog Day is that it’s a mistake to stay “stuck” on a binary quest to find either:

        • “a unified theory of ‘how quantum computers won’t work’”, versus

        • “a feasible design of ‘how quantum computers will work’”.

        In essence, the QM/QIT/QC community’s best hope for a revitalizing escape from Groundhog Day may be to conceive a third path forward.

      • Gil Kalai permalink
        March 4, 2012 10:30 am

        Aram: “Actually, in the theory of FTQC with independent noise, one expects there to be a relation between the state and the noise.”

        Dear Aram, this is correct, but the relation between the state and the noise under FTQC is rather mild, and it essentially expresses just the last several rounds of computation. So it is possible under FTQC to create on some subset of all qubits every feasible state (a state a noiseless computer can reach) up to essentially i.i.d. errors.

  30. Gil Kalai permalink
    March 3, 2012 2:01 pm

    Aram: “..in many cases the difficulty comes from other things, like addressability (e.g. in optical lattices, or NMR in many cases).”

    Aram, can you elaborate a little on the addressability problem?

  31. Gil Kalai permalink
    March 3, 2012 7:00 pm

    Dave Bacon made a very interesting comment on thermodynamics, so let me repeat some of what Dave said, and make a few comments of my own.

    After a short description of equilibrium thermodynamics Dave’s first assertion was:

    “This means that one needs to study thermodynamics of out of equilibrium systems.”

    I completely agree with this statement and I don’t think it is in dispute. (I should note, though, that there was an interesting approach by Robert Alicki to apply insights/theorems from the study of equilibrium thermodynamics to the study of meta-stable states.)

    “In that case there are metastable as well as perfectly stable classical memories (2D Ising an example of the first and Gacs CA as an example of the second.) For storage of quantum information, it certainly seems like the 4D toric code is an example of the first, and spatially local QECC is an example of the second.”

    (Small questions: Dave, what does “perfectly stable” mean in this context? And what do “Gacs CA” and “local QECC” refer to?)

    “The thermodynamics and out of equilibrium dynamics of these systems seems well-studied, for specific noise models.”

    Ok, this is a place where my conjectures give a different picture for these well-studied systems. And indeed they are based on rejecting the specific noise models that were used in these studies. Actually I am quite interested in finding more cases of well-studied models where my conjectures propose a different answer. I would also like to gather (if not to understand) the different methods that are used, and I am especially curious about the perturbative methods that were mentioned earlier. This is also related to Joe Fitzsimons’s comment and to point 16 in the long list of issues. Joe said

    “..The second is the implicit assumption [by Gil] that we don’t know what the noise should look like, and hence are free to conjecture whatever we please about it so long as it allows for classical computation. This, however, is not the case. The fact of the matter is that we know in excruciating detail the physics of how low energy particles interact with one another, and can use this to say a lot about the type of noise that a given quantum computer is subject to.”

    It goes without saying that I will be very interested to see places where my conjectures are in confrontation with what can be said about the type of noise that a given quantum computer is subject to based on our detailed knowledge of how low energy particles interact with one another.

    Dave wrote

    “If you’re going to take a crack at breaking quantum fault tolerance, I think pinning your hope on a more refined thermodynamics is a bit of a longshot and really it would have to be tied to the specific physics of the system. So it seems to me that your more traditional argument, Gil, is more likely to succeed than some magical development in thermodynamics”

    Well, my “traditional argument” is what I try to examine and advance, but it seems to me that my specific conjectures might be related to thermodynamics. One possible connection that I was puzzled about, and that was mentioned in remarks (like this one) by John Sidles, is to Onsager’s regression hypothesis. Conjecture 1 says something in the direction that “the noise mimics the signal (and forgets the computational basis).” Onsager’s hypothesis (which is about out-of-equilibrium dynamics) also says something in the direction that the laws for the noise are the same as the laws for the signal. (Unfortunately, I don’t really understand what Onsager’s hypothesis says.)

  32. March 4, 2012 7:52 pm

    I would like to suggest an in-depth and essentially optimistic reading of Dave Bacon’s assertion:

    “We know in excruciating detail the physics of how low energy particles interact with one another, and can use this to say a lot about the type of noise that a given quantum computer is subject to.”

    First to inject a note of humility, please let me recommend notes 18.1 and 18.2 of Howard Carmichael’s 2007 text Statistical Methods in Quantum Optics 2: Nonclassical Fields (page 411ff, see google books), followed by a reading of Hayrynen, Oksanen, and Tulkki’s 2008 preprint Cavity photon counting: ab-initio derivation of the quantum jump superoperators and comparison of the existing models (arXiv:0808.1660v1), followed by a reading of Braungardt, Rodriguez, Glauber, and Lewenstein’s 2011 preprint Particle counting statistics of time and space dependent fields (arXiv:1110.4250v1).

    Yes, this is the same Howard Carmichael who conceived the term “quantum unraveling”, and the same Roy Glauber who perceived the relevance of quantum coherent states (and won a Nobel for it).

    Dave Bacon has articulated a Great Truth when it comes to the quasi-continua of quantum states that are associated to photon/phonon sources and photon/phonon sinks, and Carmichael and Glauber and their colleagues have articulated its complementary Great Truth:

    “The time development of electromagnetic fields in closed cavities under continuous detection of photons continues to be a subject of confusing controversy.”

    The GLL debate associated to QM/QIT/QC/FTQC can hardly be resolved until these fundamental problems are better understood. Conversely, we may hope the GLL debate will bring welcome new mathematical tools, theoretical insights, and experimental tests to these long-standing problems in quantum optics.

    In this regard, an excellent proving-ground — that has not yet been mentioned in the Harrow/Kalai debate — is the celebrated (justly!) class of optical scattering experiments that has been proposed by Scott Aaronson and Arkhipov. There is a reasonably strong consensus that the following are individually feasible:

        • near-unit-efficiency photon sources, and

        • near-perfect emission synchrony, and

        • near-unitary optical scatterers, and

        • near-unit-efficiency photon detectors, and

        • nearly independent detector noise.

    It is natural to inquire: What are the quantum mechanical obstructions — both practical and fundamental — to achieving these objectives not merely individually, but simultaneously?

    More broadly, it hardly seems likely that FTQC can ever be achieved, until we can confidently answer the many hard questions in this class, via a Bacon-esque “excruciatingly detailed” quantum theory (hopefully just one of them) that has been thoroughly validated by careful experiments.

    The notorious resistance of physics questions in this class to easy answers, and the immense difficulty of the associated mathematics, and the high caliber of the researchers who are tackling these questions, and the many precedents of Nature preparing astounding surprises for us, all suggest that (1) these are exceptionally worthy problems to work on, yet (2) we had best not be too confident that solving them will be easy or quick, and (3) the answers we find may not be the ones we expected, and may even be utterly astounding to us.

    Summary: A thorough understanding of the quasi-continuum quantum physics of low-energy open-system dynamics is essential to FTQC, and yet there is a broad class of questions associated to it that are both fundamental and unanswered.

    As for Onsager-type relations in FTQC, that will have to be a separate post! 🙂

    • March 5, 2012 8:32 am

      As an historical aside, the above quantum optics textbook and articles are … let’s face it … not so easy to read. It is fair to ask, where’s the mathematical naturality? Where’s the simplicity? Where’s the universality? Where’s the beauty?

      History offers us the following guidance. In the early 19th century, non-Euclidean geometry was a subject that working-class sailors studied (via Bowditch’s The American Practical Navigator: An Epitome of Navigation, for example). It required considerable creative genius on the part of Gauss and Riemann to abstract from Bowditch’s practical calculations the natural, simple, universal, beautiful elements of geometry. And determination was required too; as Gauss wrote to Bessel: “I stand in dread of the clamors of the Boeotians.”

      In the twentieth century this pattern repeated itself. As Wolfgang Pauli wrote to Rudolf Peierls (in 1931): “Der Restwiderstand ist ein Dreckeffekt, und im Dreck soll man nicht wühlen” (“The residual resistance is a dirt effect, and one should not wallow in dirt”), and “On semiconductors one should not work; that’s a pigsty (Schweinerei). Who knows if semiconductors exist at all?” Yet needless to say, the practical 20th century investigations into the multitude of Dreckeffekts of solid-state physics have uncovered a paradise of natural, simple, universal, beautiful phenomena that were waiting for mathematicians and physicists to discern.

      Now in the 21st century, perhaps the great enterprise of building quantum computers can help us discern naturality, simplicity, universality, and beauty, even in the seemingly mundane Dreckeffekts that are associated to the practical physics and engineering of these quantum devices. Almost by definition, the new mathematics and physics of the 21st century will include “burning arrows” that we do not presently anticipate. And so we can reasonably foresee that, as Tony Zee puts it, “formerly cherished concepts will have to be jettisoned” … just as the 19th and 20th centuries successfully jettisoned them.

  33. March 15, 2012 3:27 pm

    No malice intended

    If you think about it, a noise which satisfies Conjecture 1 is not malicious. On the contrary, it has properties that are nice for running the world and even for our ability to explore nature. For example, when we study physical phenomena at one scale we can often go a long way even without understanding the smaller scales, and this is related to the fact that the noise has the same large-grain structure as the signal. We have to thank noise accumulation and Conjecture 1 for that.

    When we simulate a multiscale phenomenon on a universal quantum computer, the noise will still be based on the computational basis. In nature (fortunately!) things are better, and the noise inherits the same multiscale structure as the “signal.”

