
Quantum Repetition

September 16, 2012


Aram Harrow and Gil Kalai debate “Conjecture 1”

William Wootters and Wojciech Zurek were office-mates in 1979 as graduate students at U.T. Austin in John Wheeler’s group. A paper by Nick Herbert of the Fundamental Fysiks Group crossed their desks with an idea toward harnessing quantum mechanics for faster-than-light communication. Fascinated though skeptical, they realized the scheme made an implicit assumption. It assumed the ability to make a perfect copy of an arbitrary quantum state {a|0\rangle + b|1\rangle}, i.e., a qubit—though that term itself would not be coined until 1995. Independently of Dennis Dieks in 1982, they proved that only special—not general—qubit states can be cloned. The name “No-Cloning Theorem” was supplied by Wheeler.

Today we continue the debate between Gil Kalai and Aram Harrow on scalable quantum computing, focusing on how ideas about repetition impact Gil’s “Conjecture 1” from the series’ initial post.

The debate was expected to conclude last spring, but ideas and material to sustain it have developed in preprints with refutations, new queries, copious comments sometimes preceded by private exchanges, and further discussions on the participants’ own fora. We note especially that John Preskill has posted a full paper, “Sufficient condition on noise correlations for scalable quantum computing,” extending work we previously referenced. Today’s post was to be the conclusion, but as it passed 20 LaTeX pages we declared this part of it a fourth “round” instead. We apologize that some of it is necessarily “entangled” with the yet-to-appear conclusion, which will follow shortly.

In his first response post, Aram described a classical error-correction test, and then began an approach against Gil’s Conjecture 1 using repetition codes. In this post they discuss these two matters further. We first present three different ways of repeating things, the last a thought experiment by Gil that set the initial terms of Aram’s analysis. Then comes Aram’s detailed modeling and critique of Conjecture 1, followed by a response by Gil. Finally we present Gil’s summation on the issue of why classical computers are possible.

Rep-A, Rep-B, and Rep-C

Given an object {q}, the simplest fault-tolerance idea is to make {n} independent copies: {q,q,\dots,q}. That way if fewer than {n/2} of them are corrupted, the majority vote of the others will preserve the original. Let’s call this building-block idea Rep-A.
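To make Rep-A concrete, here is a minimal sketch in Python; the encoding, the corruption step, and the majority-vote decoder are illustrative choices of ours, not anything prescribed above.

```python
# Rep-A sketch: n verbatim copies of a bit, decoded by majority vote.
import random

def rep_a_encode(bit, n):
    return [bit] * n

def rep_a_decode(copies):
    # majority vote over the copies
    return int(sum(copies) > len(copies) / 2)

def corrupt(copies, k):
    # flip exactly k of the copies, chosen at random
    out = list(copies)
    for i in random.sample(range(len(out)), k):
        out[i] ^= 1
    return out

n = 7
for k in range(n + 1):
    print(k, "errors ->", rep_a_decode(corrupt(rep_a_encode(1, n), k)))
    # the original bit 1 is recovered whenever k < n/2, i.e. k <= 3 here
```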

When the object {q} represents a probability distribution (classical or quantum) then things become more involved. If {q} is a classical biased coin {p\mathbf{0}+(1-p)\mathbf{1}} or a qubit {a|0\rangle + b|1\rangle} what does it mean to repeat {q}? What is the precise object we are repeating? Here, the no-cloning theorem (which also has a classical counterpart) is of some relevance.

One can do Rep-B: Suppose {q} is a qubit with unknown value {a|0\rangle + b|1\rangle}. We can use entanglement to create

\displaystyle |q^{(n)}\rangle = a|0^n\rangle + b|1^n\rangle.

Nature allows this as a general quantum operation. This idea extends to more complicated classical and quantum error correcting codes.

In Gil’s thought experiment {q} is a classical biased coin {p\mathbf{0}+(1-p)\mathbf{1}}. The Rep-C idea is to represent this by {(p\mathbf{0}+(1-p)\mathbf{1})^{\otimes n}}. The “decoding” of a state of {n} bits gives to `1′ its expected weight over the {n} bits. Aram will subsequently discuss the quantum version.
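As a small illustration of Rep-C for the classical coin (our reading of the decoding rule; the sampling and the empirical-weight estimate are assumptions of this sketch):

```python
# Rep-C sketch for a classical biased coin p*'0' + (1-p)*'1':
# encode by n independent samples, decode by the empirical weight of '1'.
import random

def rep_c_encode(p, n):
    # each sample is 0 with probability p and 1 with probability 1-p
    return [0 if random.random() < p else 1 for _ in range(n)]

def rep_c_decode(bits):
    # empirical weight of '1' among the n bits, i.e. an estimate of 1-p
    return sum(bits) / len(bits)

p, n = 0.7, 100_000
print("true weight of '1':", 1 - p)
print("decoded weight    :", rep_c_decode(rep_c_encode(p, n)))
```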

We will see how these ideas play out in arguments over Gil’s Conjecture 1, which was originally stated as follows:

Conjecture 1 (No quantum error-correction): In every implementation of quantum error-correcting codes with one encoded qubit, the probability of not getting the intended qubit is at least some {\delta > 0}, independently of the number of qubits used for encoding.

Gil: a classical analog of a quantum computer

Aram’s analysis of Conjecture 1 builds on this thought experiment offered by Gil. The example is meant to show that even if we lived in a noisy classical world where the classical analog of Conjecture 1 held at the microscopic scale, noiseless classical memory and computation would nevertheless be possible.

The classical analog of Conjecture 1 (let’s call it Conjecture 1′) asserts that when we use Rep-C to encode a classical random bit {p\mathbf{0}+(1-p)\mathbf{1}} we get, with a substantial probability bounded away from 0, states that decode into codewords of the form {q\mathbf{0}+(1-q)\mathbf{1}} where {q} is distributed in a neighborhood of {p}. (Conjecture 1′ is not true in general, but what we will see in a minute is that it does no harm to the repetition code.)

The point is that Rep-C, even under a noise that satisfies Conjecture 1′, allows noiseless classical information and classical computation. For that you represent `0′ by {p\mathbf{0} +(1-p)\mathbf{1}} when {p} is greater than {1/2} and `1′ by {p\mathbf{0}+(1-p)\mathbf{1}} when {p} is smaller than {1/2}.
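A toy simulation of Gil’s scheme may help; the specific biases, the size of the “cloud” around {p}, and the threshold decoder are assumptions of this sketch rather than part of the argument above.

```python
# Sketch of the thought experiment: '0' is a coin with bias above 1/2, '1' a coin
# with bias below 1/2, and (as in Conjecture 1') the bias itself fluctuates in a
# neighborhood of the intended value.  Thresholding the empirical frequency still
# decodes the logical bit essentially without error once n is large.
import random

def encode_logical(bit, n, p0=0.8, p1=0.2, jitter=0.05):
    p = (p0 if bit == 0 else p1) + random.uniform(-jitter, jitter)
    return [0 if random.random() < p else 1 for _ in range(n)]

def decode_logical(bits):
    # '0' if zeros are in the majority, else '1'
    return 0 if bits.count(0) > len(bits) / 2 else 1

n, trials = 1000, 5000
errors = sum(decode_logical(encode_logical(b, n)) != b
             for _ in range(trials) for b in (0, 1))
print("decoding errors:", errors, "out of", 2 * trials)   # essentially always 0
```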

Aram: Conjecture 1 and the repetition code

A central part of Gil’s skepticism is his Conjecture 1, which essentially asserts that quantum error-correcting codes cannot work as advertised. Here is my restatement of that conjecture.

There exists a universal constant {\delta>0} with the following property. Let {{\cal E}} be an encoding map (specifically, a quantum operation) that maps one qubit to {n} qubits. Let {|\psi\rangle} be a pure single-qubit state (the “intended state”) and {{\cal P}} a procedure that, in the absence of noise, would prepare the state {{\cal E}(|\psi\rangle\langle \psi|)}. Let {\tilde{\cal P}} be the effect of {{\cal P}} in the presence of physically realistic noise. Then there exists a distribution {\mu_{{\cal P},\psi}} satisfying

\displaystyle  \mathbb{E}_{|\tilde{\psi}\rangle \sim \mu_{{\cal P},\psi}} \|\, |\psi\rangle - |\tilde\psi\rangle\| \geq \delta,

and a noise process {{\cal N}} such that {\tilde{\cal P}} instead prepares a state close to

\displaystyle  {\cal N}\left( \mathbb{E}_{|\tilde{\psi}\rangle \sim \mu_{{\cal P},\psi}} [{\cal E}(|\tilde\psi\rangle \langle\tilde\psi|)] \right).

There are a number of questions about this conjecture that one might raise:

  • Why the noise process {{\cal N}}? I added this to make the conjecture more realistic. As we will see in the example, it may be that {{\cal E}} outputs only states that are extremely unlikely to be physically realized (even without any extra skeptical assumptions), so without {{\cal N}}, the conjecture would not have a chance.

  • What is {{\cal N}}? If {{\cal N}} is completely general (and can depend arbitrarily on {|\psi\rangle} and {{\cal P}}) then the statement of the conjecture becomes vacuously true. What I have in mind is that {{\cal N}} will be independent de-polarizing noise; that is, each qubit is replaced with the maximally mixed state {I/2} with probability 1/100 and left alone with probability 99/100. This roughly corresponds to the sort of noise model that optimists (like me) posit, and that standard proofs of the FT Threshold Theorem assume.

  • Why is {{\cal E}} a quantum operation? Such codes are the only ones used in proofs of the Threshold Theorem, and it is hard to imagine FTQC without them (although see below for some discussions of relaxed versions of this). Still this restriction should only make the conjecture easier to prove/harder to refute.

  • What right do I have to put words into Gil’s mouth? This is a fair point. 🙂 Gil might prefer to formulate the details of his conjecture differently. Of course, if he does, I’ll respond to that, and round and round we’ll go.


One consequence of {\cal E} being a quantum operation is that it commutes with taking the expectation over {|\tilde\psi\rangle}. If we then define the map {{\cal N}'} (which itself could plausibly be a quantum operation, such as de-polarizing noise with strength {\delta}) by

\displaystyle  {\cal N}'(|\psi\rangle\langle\psi|):= \mathbb{E}_{|\tilde{\psi}\rangle \sim \mu_{{\cal P},\psi}} |\tilde\psi\rangle \langle\tilde\psi|,

then the state asserted in Conjecture 1 can be rewritten more simply as

\displaystyle {\cal N}({\cal E}({\cal N}'(|\psi\rangle\langle\psi|))).

By contrast, the conventional/optimistic view is that states of the form {{\cal N}({\cal E}(|\psi\rangle\langle\psi|))} can be prepared, without {{\cal N}'}. Thus Conjecture 1 is tantamount to asserting that noise will inevitably strike before a state is safely encoded. Of course this would be true if we followed the naive strategy of preparing {|\psi\rangle} outside of an error-correcting code, and then performing the encoding map; since we cannot perform this encoding arbitrarily quickly, there will always be a constant amount of noise that occurs before the state is protected. (Note that FT computing, whether classical or quantum, does not use this strategy, which is a point that is overlooked by Robert Alicki’s paper quant-ph/0411008.) But Conjecture 1 asserts that the same effect will plague any protocol {\cal P}.

Two repetition codes

Let us attempt to apply Conjecture 1 to the simplest code: the {1\rightarrow n} repetition code. We have to be careful because in fact there are at least two ways to define the repetition code as a quantum code. The one that I believe makes the most sense is to add {n-1} qubits in the {|0\rangle} state, and perform controlled-NOT gates from the input qubit onto each of the new qubits. This will map

\displaystyle a|0\rangle + b|1\rangle \mapsto a |0^n\rangle + b|1^n\rangle,

where {|0^n\rangle := |0\rangle^{\otimes n}} and {|1^n\rangle := |1\rangle^{\otimes n}}. Extending this to a quantum operation, we obtain

\displaystyle  \begin{array}{rcl}  &&{\cal E}_{\text{Rep-B}}( a_{00} |0\rangle\langle 0| + a_{01} |0\rangle\langle 1| + a_{10} |1\rangle\langle 0| + a_{11} |1\rangle\langle 1|) \\  &&\\ &=& a_{00} |0^n\rangle\langle 0^n| + a_{01} |0^n\rangle\langle 1^n| + a_{10} |1^n\rangle\langle 0^n| + a_{11} |1^n\rangle\langle 1^n|. \end{array}

This “Rep-B” idea is only one interpretation of the repetition code. It can correct up to {\frac{n-1}{2}} bit-flip errors, but is vulnerable to even a single phase error. (A standard theorem of quantum error correction states that it suffices to consider only these two types of errors.)
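The bit-flip/phase asymmetry is easy to see numerically. In the following sketch the encoded state {a|0^n\rangle + b|1^n\rangle} is written down directly (the CNOT circuit above produces the same state), and the single-qubit {Z} error is our illustrative choice.

```python
# Rep-B and phase errors: a single Z on any qubit maps the encoded |+> exactly
# onto the encoded |->, so the error is invisible to bit-flip checks.
import numpy as np

def rep_b_encode(a, b, n):
    # a|0> + b|1>  ->  a|0^n> + b|1^n>
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = a              # amplitude of |00...0>
    psi[-1] = b             # amplitude of |11...1>
    return psi

def apply_z(psi, qubit, n):
    # phase flip on one qubit: negate amplitudes of basis states with that bit = 1
    out = psi.copy()
    for idx in range(2 ** n):
        if (idx >> (n - 1 - qubit)) & 1:
            out[idx] *= -1
    return out

n = 5
encoded_plus = rep_b_encode(1 / np.sqrt(2), 1 / np.sqrt(2), n)
encoded_minus = rep_b_encode(1 / np.sqrt(2), -1 / np.sqrt(2), n)
damaged = apply_z(encoded_plus, qubit=2, n=n)     # one phase error, anywhere

print("overlap with intended codeword     :", abs(np.vdot(encoded_plus, damaged)))   # 0.0
print("overlap with phase-flipped codeword:", abs(np.vdot(encoded_minus, damaged)))  # 1.0
```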



The other idea, cloning, is simply to copy the input state {n} times, viz.:

\displaystyle {\cal E}_{\text{Rep-C}}(\rho) = \rho^{\otimes n}.

This cloning map is not used in any FT scheme, classical or quantum, probably because it does not lend itself well to error-correction. For any error-correcting code, we would like a procedure to project a state (i.e. density matrix) onto the linear subspace of valid code states. Operationally this corresponds to measuring whether a (detectable) error has occurred. For Rep-B, this corresponds to checking whether our state is in {\text{span}\{|0^n\rangle, |1^n\rangle\}}.

For Rep-C, this operation is trivialized: since {{\cal E}_{\text{Rep-C}}(I/2)} is the maximally mixed state on {n} qubits, any {n}-qubit state is compatible with it. Thus, we can never look at an {n}-qubit state and say that it is not a valid encoding of some state under Rep-C. This weakness stems in part from the fact that Rep-C is a nonlinear map, while decoding or error-detecting is necessarily linear. As a result, I believe that (contrary to Gil’s arguments) Rep-C does not make sense even for classical FT, unless it is redefined until it essentially becomes Rep-B (e.g. the domain is restricted to be the states {|0\rangle\langle 0|, |1\rangle\langle 1|} and it is extended linearly to all states).
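Here is a minimal sketch of the error-detection projection for Rep-B (the projector and the test states are ours); the point is that the check is a linear operation on density matrices, which is exactly what Rep-C lacks.

```python
# Error detection for Rep-B: project onto span{|0^n>, |1^n>} and read off the
# probability that the syndrome measurement reports "no detectable error".
import numpy as np

n = 4
d = 2 ** n
zeros = np.zeros(d); zeros[0] = 1.0           # |0^n>
ones = np.zeros(d); ones[-1] = 1.0            # |1^n>
P_code = np.outer(zeros, zeros) + np.outer(ones, ones)   # projector onto the code space

def passes_check(rho):
    return np.trace(P_code @ rho).real

ghz = (zeros + ones) / np.sqrt(2)
rho_codeword = np.outer(ghz, ghz)             # a valid encoded state
flipped = np.zeros(d); flipped[1] = 1.0       # |0...01>: one bit-flip on |0^n>
rho_flipped = np.outer(flipped, flipped)

print("valid codeword passes   :", passes_check(rho_codeword))   # 1.0
print("bit-flipped state passes:", passes_check(rho_flipped))    # 0.0
```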

Classical error correction

It seems worth reviewing here how classical FT computing is workable using Rep-B.

  1. Each logical bit is represented by {n} physical bits which are ideally in the state {{\cal E}(|0\rangle\langle 0|)} or {{\cal E}(|1\rangle\langle 1|)}, but in practice will merely be close (in typical Hamming distance) to these states.

  2. Fresh logical bits are introduced to the computation by initializing blocks of {n} physical bits to the state {{\cal E}(|0\rangle\langle 0|)}.

  3. If the computation is randomized, then instead we can initialize a block of {n} physical bits to the state {{\cal E}(I/2)}.

  4. Gates are performed transversally, to minimize the propagation of errors. That is, to store the AND of logical bits {x} and {y} in a logical bit {z}, we assume that these bits are stored in physical bits {x_1,\ldots,x_n,y_1,\ldots,y_n,z_1,\ldots,z_n}, and we then set {z_i = x_i \cdot y_i} for each {i}. In this way, errors will be magnified by at most a constant multiple. (A toy sketch of steps 4 and 5 appears after this list.)

  5. In-between each gate, we will do error correction to attempt to restore each bit to a valid code state. Physically, this is achieved by a bistable flip-flop (in RAM) or a ferromagnetic material (in a hard drive). The former is “active” in that it requires a continuous supply of power, while the latter is “passive” in that it will correct errors simply by being in contact with a room-temperature thermal bath (and samples of magnetite from the sea floor have demonstrated that this sort of memory can last for at least millions of years).


(For Rep-C, steps 1, 2 and 4 are the same, but steps 3 and 5 fail.) One not-so-obvious point is that {\cal E} is never performed. Rather, we develop schemes to directly prepare states such as {{\cal E}(|0\rangle\langle 0|)}.
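Here is a toy end-to-end version of steps 4 and 5, under a simplified noise assumption of ours (each physical bit flips independently with a small probability after every step); it is a cartoon of classical fault tolerance, not a model of real hardware.

```python
# Toy classical FT with the repetition code: transversal AND plus majority-vote
# error correction keeps the logical bits intact despite per-step physical noise.
import random

EPS, N = 0.05, 25    # per-bit flip probability and repetition-code length (assumed)

def noisy(block):
    return [b ^ (random.random() < EPS) for b in block]

def encode(bit):
    return [bit] * N

def correct(block):
    # majority vote snaps the block back to the nearest codeword
    maj = int(sum(block) > N / 2)
    return [maj] * N

def transversal_and(xs, ys):
    return [x & y for x, y in zip(xs, ys)]

x, y = encode(1), encode(1)
for _ in range(100):                              # 100 rounds of noise + correction
    x, y = correct(noisy(x)), correct(noisy(y))
z = correct(transversal_and(noisy(x), noisy(y)))
print("logical AND decoded as:", int(sum(z) > N / 2))   # 1, except with tiny probability
```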

FT quantum computing works in essentially the same way as the above strategy, except that the repetition code is replaced by a code that can correct a large number of both bit-flip and phase errors.

Applying Conjecture 1 to the repetition code

What happens if we apply Conjecture 1 to Rep-B? First, observe that after encoding, the states become extremely sensitive to phase errors, since a single phase error on any qubit will affect the phase of the encoded qubit. Thus, outputs of {{\cal E}_{\text{Rep-B}}} can effectively be assumed to be classical mixtures of {|0^n\rangle\langle 0^n|} and {|1^n\rangle\langle 1^n|}. Indeed, applying {\cal N} to the output will also ensure this.

But what if we have “pre-encoding” noise {{\cal N}'} that is, say, a de-polarizing channel of strength {\delta}? Then attempting to prepare {{\cal E}(|0\rangle\langle 0|)} will inevitably result in a state of the form

\displaystyle (1-\delta)|0^n\rangle\langle 0^n| + \delta |1^n\rangle\langle 1^n|.

This means that if we attempt to initialize a bit in a RAM or a hard drive to a ‘0’ state then there will be a probability {\delta} that it instead yields ‘1’. The situation is even more absurd when one considers other versions of a repetition code. Consider an abacus with beads consisting of {10^{23}} atoms, each of which can be either slid to the left or the right. Conjecture 1 would imply that if we attempt to slide a bead on the abacus to the left using our best possible technology, then we will inevitably accidentally place it on the right with probability {\delta}.

This should be enough to dispose of Conjecture 1. After all, it is stated as applying to all codes. If an exception is built in for Rep-B (which is the key code behind classical computing), then it starts to look rather camel-shaped.

Indeed, the conjecture cannot even be repaired by saying that “for all {\cal E}, there exists a {|\psi\rangle} for which {{\cal N}({\cal E}(|\psi\rangle\langle \psi|))} is hard to prepare,” because for the repetition code preparing the encoded state is simply a matter of flipping a coin with a very precise bias and then preparing many qubits in a way controlled by this coin flip. Again, this is something that could already have been achieved with an abacus.

Trying Rep-C

What about applying Conjecture 1 to Rep-C? This is the situation that Gil discusses when he says that Conjecture 1 is compatible with classical computing. Indeed for Rep-C, noise before encoding behaves pretty much the same way as noise after encoding. So, Conjecture 1 is compatible with {{\cal E}_{\text{Rep-C}}}. Can we say that if Conjecture 1 doesn’t hold for every code, it at least holds for some code? Perhaps, but I don’t view Rep-C as particularly representative of error-correcting codes, because if we use any code other than the repetition code, then only the “-B” framework makes sense. Moreover, given that Rep-C cannot distinguish between pre-encoding noise and i.i.d. post-encoding noise, it seems hard to use it as evidence in favor of Conjecture 1.

I believe that Conjecture 1 is likely to fail for most interesting maps {\cal E}. In general, if a system has a two-dimensional topologically protected subspace, then mixing within this subspace is exponentially suppressed. Like Conjecture C, I believe it is too broad. But there may well be more specific classes of states that noise effectively prevents us from creating.

Gil: Conjecture 1 and Aram’s classical error-correction test

A review of Conjecture 1

Conjecture 1 asserts that encoded quantum qubits are noisy. What do we mean by a noiseless encoded qubit? If {f(\psi)} is an encoding of {\psi} with {n} qubits, we require that {\psi} can be determined from {{\cal N}\left (f(\psi )\right )} up to an error that we cannot distinguish from zero, with probability that we cannot distinguish from 1. Quantum error correction achieves this (for every {\psi}) when we let the number of qubits used in the encoding grow. (For this formulation you may need to rely also on the Solovay–Kitaev theorem.) Conjecture 1 asserts that this is not possible.

The case of the repetition code

In the context of quantum fault tolerance and the threshold theorem the only repetition code which is relevant is Rep-B. I always regarded Conjecture 1 for the repetition code as obviously true. Indeed, it is correct that when {\psi=|0\rangle} or {\psi=|1\rangle}, {{\cal N}(f(\psi))} determines {\psi} uniquely, but for any other value of {\psi} there is a whole (one-dimensional) “cloud” of states that we cannot distinguish from {\psi}. So the repetition code supports having a stable ground state and a stable excited state but does not support a noiseless superposition of these states. This example indeed shows (and I thank Aram for clarifying it) that when I say “the probability” in Conjecture 1 one has either to consider the worst-case {\psi} or to take {\psi} at random as well in computing the probability. Aram claims that Conjecture 1 is violated by Rep-B for every {\psi}. Let’s look at that.

Aram’s formalization of Conjecture 1

A large part of my work, with many backtracks and difficulties, was to translate conjectures from English to mathematics. This is not a simple matter at all. (This is an opportunity to mention Greg Kuperberg, who found various loopholes in some of my early attempts.) So it is nice to see Aram trying to formalize Conjecture 1, and overall, Aram’s mathematical formulation of Conjecture 1 is reasonable except that there is one term which is left in English:

“… {\tilde{\cal P}} instead prepares a state close to {{\cal N}\left( \mathbb{E}_{|\tilde{\psi}\rangle \sim \mu_{{\cal P},\psi}} [{\cal E}(|\tilde\psi\rangle \langle\tilde\psi|)] \right)}.”

What does “prepare instead” mean? The way I would formalize this English part, which agrees with what Conjecture 1 is all about, is that with probability larger than {\delta} the encoding of {\psi} after the noise and the encoding of {\mu_{P,\psi}} after the noise cannot be distinguished. This is not Aram’s interpretation. Aram’s interpretation of Conjecture 1 is simply incorrect. If I understand Aram correctly, he would regard even the trivial code which encodes every {\psi} by {|0^n\rangle} as a counterexample to Conjecture 1. Again, this is incorrect.

Let me respond to Aram’s challenge and try myself to formulate Conjecture 1 (it can either stand alone or complement the missing part in Aram’s formulation).

Conjecture 1 (no quantum error-correction): Denote by {d} the trace-distance. There exists {\delta > 0} with the following property. Consider a realistic device which encodes a single qubit with {n} qubits. Suppose that {F(\psi)} denotes the encoded state of the single-qubit state {\psi}. Then there are two states {\psi} and {\phi} so that {d(\psi,\phi) >\delta} and

\displaystyle d(F(\psi),F(\phi))<(1-\delta)d(\psi,\phi) .
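The following is a model calculation, not a test of the conjecture (which concerns realistic devices): it assumes i.i.d. de-polarizing noise of strength {p} on each qubit of the Rep-B encoding, and shows numerically that the trace distance between the encodings of {|+\rangle} and {|-\rangle} shrinks like {(1-p)^n} while {d(|+\rangle,|-\rangle)=1}. This is the “cloud” of indistinguishable superpositions discussed above.

```python
# Trace distance between noisy Rep-B encodings of |+> and |->, versus (1-p)^n.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize_qubit(rho, qubit, n, p):
    # single-qubit depolarizing channel applied to one tensor factor
    out = np.zeros_like(rho)
    for op, w in ((I2, 1 - 3 * p / 4), (X, p / 4), (Y, p / 4), (Z, p / 4)):
        K = np.eye(1, dtype=complex)
        for q in range(n):
            K = np.kron(K, op if q == qubit else I2)
        out += w * (K @ rho @ K.conj().T)
    return out

def trace_distance(rho, sigma):
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

def encoded_pm(sign, n):
    # Rep-B encoding of |+> or |->:  (|0^n> +/- |1^n>) / sqrt(2)
    v = np.zeros(2 ** n, dtype=complex)
    v[0], v[-1] = 1, sign
    v /= np.sqrt(2)
    return np.outer(v, v.conj())

p = 0.1
for n in (1, 3, 5):
    rho_p, rho_m = encoded_pm(+1, n), encoded_pm(-1, n)
    for q in range(n):
        rho_p = depolarize_qubit(rho_p, q, n, p)
        rho_m = depolarize_qubit(rho_m, q, n, p)
    print(f"n={n}:  d = {trace_distance(rho_p, rho_m):.4f}   (1-p)^n = {(1 - p) ** n:.4f}")
```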

Conjecture 2

The strong principle of noise (Conjecture 2), which is more general than Conjecture 1, asserts that you cannot find a noiseless quantum evolution as a subsystem of a realistic noisy quantum evolution. Both of these conjectures require some formal description but are interesting and lead to fruitful insights even without such a description. Let me mention Aram’s second thought experiment, which was a very original attempt at a counterexample to Conjecture 2. Rather than trying to hide a noiseless evolution inside a large noisy one, Aram suggested enlarging a noisy evolution into a larger noiseless one. Looking carefully at this suggestion, the conclusion (see this comment by Aram concluding a long discussion) was that it cannot work.

Conjectures 1 and 2 do not describe noise models but can be seen as requirements on a yet-hypothetical realistic noise model. In the next post, I will describe a Hamiltonian noise model which is meant to express formally the idea of “no quantum FT” and to support Conjectures 1–4. We will also mention a suggestion regarding the rate of noise which is relevant to both Conjectures 1 and 2. When we talk about “what is a qubit” we assume that we can manipulate the state of the qubit in a fairly general way, and we need to relate the rate of noise to the qubit’s evolution.

Aram’s classical error-correction test

Why do we have (essentially) noiseless classical memory? Why are classical computers possible? This is indeed the heart of the matter. Here is my point of view.

We start with the strong principle of noise, which says that you cannot find a noiseless quantum evolution as a subsystem of a realistic noisy quantum evolution. A special case of this general principle is Conjecture 1, which asserts that qubits must be noisy: for any encoded qubit we have a substantial mixture of the intended codeword with a “cloud” of unintended codewords.

How does Conjecture 1 allow for noiseless bits? We now need the following repetition principle:

Let {D_1} and {D_2} be two distinct probability distributions. Represent ‘0’ by a sequence of {n} independent samples from {D_1} and ‘1’ by a sequence of {n} independent samples from {D_2}. If {n} is large enough then ‘0’ and ‘1’ are noiseless bits. (This remains true even in the presence of additional noise of various kinds, and even if the independence assumption on the samples is relaxed in various ways.)
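A standard concentration bound makes the repetition principle quantitative. As an illustrative sketch under added assumptions (say {D_1} and {D_2} are distributions on {[0,1]} with means {\mu_1 < \mu_2}, and decoding thresholds the sample mean at {(\mu_1+\mu_2)/2}), Hoeffding’s inequality gives

\displaystyle  \Pr[\text{decoding error}] \leq \exp\left(-\tfrac{1}{2}\, n\, (\mu_2-\mu_1)^2\right),

so the error probability decays exponentially in {n}, no matter how small the gap {\mu_2-\mu_1} is.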

It follows that a device able to create large samples from two distinct probability distributions enables noiseless bits, and when we (or nature) control such a device then noiseless classical computation is possible.

Aram’s analogy with 2-SAT and 3-SAT

Aram started his three-post reply last winter with the following analogy: “If you want to prove that 3-SAT requires exponential time, then you need an argument that somehow doesn’t apply to 2-SAT or XOR-SAT. If you want to prove that the permanent requires super-polynomial circuits, you need an argument that doesn’t apply to the determinant. And if you want to disprove fault-tolerant quantum computing, you need an argument that doesn’t also refute fault-tolerant classical computing.”

This is correct. Proving that 3-SAT needs exponential time requires an argument that does not apply to 2-SAT and XOR-SAT. Today, after many years, we still do not have such an argument. But this does not mean that we should believe that 3-SAT behaves like 2-SAT. Indeed, while a proof is still far off, we have strong reasons to believe that 3-SAT is hard; a world where 3-SAT is easy presents a very different reality than ours. Similarly, proving that fault-tolerant quantum computing is impossible will require more experimental efforts and theoretical progress. But the fundamental differences, such as those described in the previous debate post, between a world with fault-tolerant quantum computing and our present reality give fair reasons to believe, and certainly to explore the possibility, that quantum fault tolerance is not possible.

The distinction between classical and quantum error-correction is a major issue whether or not you are skeptical about the feasibility of universal quantum computers. In my opinion, the repetition principle in a reality that obeys Conjectures 1 and 2 draws an appealing picture of the distinction between classical error-correction and quantum error-correction. Aram proposed a different explanation, which relies both on the repetition code and on the difficulty of finding quantum forms of passive memory in three dimensions, and some commentators offered other explanations. I find these explanations less satisfactory.

How to model the memory of digital computers?

Is Rep-B or Rep-C more appropriate?

Here is an interesting question that came up as a by-product of my discussion above with Aram on the repetition code. Using Aram’s terminology the question is, “Is the microscopic structure of digital memory better described by Rep-B or by something like Rep-C?” When we think about the memory of a digital computer as a vast array of electrons with up spins and down spins, where `0′ is represented by (say) a 0.01 fraction of up spins and `1′ is represented by a 0.99 fraction of up spins, the Rep-B description suggests a very tight fluctuation for the number of up spins. The Rep-C description is more relaxed and allows the fraction to fluctuate in an entire interval like [0.009,0.012]. Of course, the digital memory implements a repetition of a specific state {\psi} and not of a general state, so the question is whether we should consider the state of the digital computer representing ‘0’ as {|0^n\rangle} subject to small independent noise, or as {|\psi\rangle^{\otimes n}} where {\psi} is some fixed state near {|0\rangle} which is itself subject to noise.
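A quick simulation contrasts the two pictures; the specific numbers (a million spins, a 0.01 up-spin fraction for `0′, and the interval [0.009,0.012]) are taken from the discussion above or chosen purely for illustration.

```python
# Fluctuations of the up-spin count: "Rep-B-like" memory fixes the bias and the
# count fluctuates like sqrt(n); "Rep-C-like" memory first draws the bias from an
# interval, and the count then fluctuates proportionally to n.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1_000_000, 2_000

rep_b_counts = rng.binomial(n, 0.01, size=trials)          # fixed bias 0.01
q = rng.uniform(0.009, 0.012, size=trials)                  # fluctuating bias
rep_c_counts = rng.binomial(n, q)

print("Rep-B-like std of up-spin count:", rep_b_counts.std())   # ~ sqrt(n*0.01) ~ 100
print("Rep-C-like std of up-spin count:", rep_c_counts.std())   # ~ n * std(q)   ~ 870
```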

There is a more general issue here. I expect that in various stochastic/Hamiltonian models for noise we witness Gaussian concentration of the number of errors. Of course, such a concentration is not required for quantum error-correction. I tend to regard this feature as unrealistic, and therefore such models, in my opinion, leave out a crucial aspect of noise which may or may not allow fault tolerance. See this comment. It will be interesting to understand whether Preskill’s recent assumptions on correlations imply concentration for the number of errors.

Open Problems

Do the assumptions on correlated noise in the models in this paper by Preskill in 2006 with Dorit Aharonov and Alexei Kitaev, and by Preskill now, imply concentration of the number of errors?

Is the microscopic structure of digital memory better described by Rep-B or by Rep-C? Do the fluctuations in the number of up spins in digital computers memory behave like the square root of the number of spins? Does the no-cloning theorem have any bearing on these questions?

[amended formula for revised Conjecture 1]

89 Comments
  1. September 16, 2012 8:00 am

    I first read of the Fundamental Fysiks Group in the book How the Hippies Saved Physics (review) given me as a gift. The opening lines are:

    “To most residents of Vienna, April 21, 2004, probably seemed like any other spring day in the Austrian capital. Students mulled over books in cafes; tourists delighted in the Hapsburg-era gardens, museums, and opera houses; and businesspeople scurried through their appointments. Amid the bustle, however, something magical happened. The city’s mayor and the director of one of the city’s largest banks collaborated on a breathtaking experiment. Working with physicists from the University of Vienna and a spin-off company, the mayor and banker performed the first electronic bank transfer using quantum cryptography.”

  2. John Sidles permalink
    September 16, 2012 12:11 pm

    In tribute to both Aram Harrow and Gil Kalai, here is a Great Truth perspective upon their debate, presented solely via quotations and philology, and founded upon the defining principle that the opposite of a Great Truth is another Great Truth.

    As a statement of Aram Harrow’s Great Truth, we embrace this passage from Nielsen and Chuang’s Quantum Computation and Quantum Information:

    Quantum Computers: Physical Realization  “Much of the allure of quantum computing and communication machines is certainly their potential economic ramifications, as novel information technologies. But as we have seen in this chapter, quantum computation and quantum information also motivate new questions about physical systems, and provide different ways to understand their properties. These new ideas embody a need to move away from traditional many-body, statistical, and thermodynamic studies of physical systems, all the way from atoms to condensed matter systems. They represent a new opportunity to focus instead on dynamical properties of physical systems at the single quantum level. Hopefully, by giving a flavor of the richness of this approach, this chapter will motivate you to continue ‘thinking algorithmically’ about Physics.”

    Similarly, our assessment of Gil Kalai’s Great Truth descends from

    “If one finds a difficulty in a calculation which is otherwise quite convincing, one should not push the difficulty away; one should rather try to make it the centre of the whole thing.”   — Werner Heisenberg (1971)

    Now we adapt the medical maxim “Physicians are defined by the tools at their disposal” to the dual maxim “Physicists are defined by the tools at their disposal”, and we consider both the mathematical toolset of Nielsen and Chuang’s Quantum Computation and its (absent) Heisenberg dual toolset.

    Examples are readily discerned: in Nielsen and Chuang we find Lie algebras but not Lie derivatives; operator commutators but not Poisson brackets; unitary rotations but not symplectic flows; decoherence but not localization; communication but not transport; fluctuation but not dissipation; ground state but not vacuum; entropy in its informatic sense but not in its level-set sense; and (remarkably) renormalization is not found in Nielsen and Chuang at all.

    These mathematical lacunae help us to appreciate Paul Dirac’s self-criticism of his own 1930 textbook The Principles of Quantum Mechanics:

    It’s not a bad book, but the first chapter is missing.

    Even today, 82 years later, we quantum systems engineers are still struggling — quite concretely and publicly, for example on the Physics StackExchange — to conceive Dirac’s missing first chapter.

    Perhaps with regard to quantum mechanics, we are all of us in the same position in respect to Nielsen and Chuang’s textbook, that Vladimir Arnold was in regard to classical mechanics as expounded by Landau and Lifshitz. As (Fields Medalist) Sergei Novikov tells it:

    Arnold told me that, after reading this book [Landau and Lifshitz’ Mechanics] he finally understood what mechanics was, and, after that, he understood how bad the book was.

    Conclusion  It is natural for our rational sympathies to reside with Aram, and our aesthetic sympathies to reside with Gil, on the grounds that if quantum computing proves to be infeasible, we are likely to appreciate this infeasibility via mathematical tools that are not deployed in the Nielsen and Chuang toolset … and everyone likes to learn new mathematical tools.

    Summary  Aram’s arguments help us to appreciate the Great Truth that the 20th century has blessed us with many Great Textbooks about quantum dynamics that provide us with good reason for confidence that quantum computers are feasible … whereas Gil’s arguments help us to appreciate the dual Great Truth that no Great Textbooks have yet been written about quantum dynamics, which of course is good reason for skepticism that quantum computers are feasible!   🙂

  3. John Sidles permalink
    September 16, 2012 12:44 pm

    To the above might be added, that the Dutch Genootschap Voor Meetkunde En Kwantumtheorie (Fellowship of Geometry and Quantum Theory, GQT) concretely presents (as it seems to me) a creative fusion of Aram’s thesis and Gil’s thesis, that has been outstandingly productive of mathematical, scientific, and even engineering innovation.

  4. September 16, 2012 3:18 pm

    I made a not-very-clear comment about repetition codes and exotic qubits some time ago. Later I wrote a short introductory text about that, and currently it is here. Maybe it still has some relevance in the present context?

  5. John Sidles permalink
    September 17, 2012 7:25 am

    Now let’s try to give a concrete answer to Aram’s question:

    Aram asks  “What is [the noise process] \cal N? If \cal N is completely general (and can depend arbitrarily on |\psi\rangle and \cal P) then the statement of the conjecture becomes vacuously true. What I have in mind is that \cal N will be independent de-polarizing noise.”

    It’s not so easy to steer between the Scylla of Gil’s conjecture being vacuously true and the Charybdis of it being vacuously false.

    One reasonable and well-known answer is “Noise processes \cal N associated to vacuum interactions are problematic for quantum computation.” This class of Scylla-versus-Charybdis answers is illustrated by Scott Aaronson’s Physics StackExchange question Reversing gravitational decoherence and the various answers to it, including especially John Preskill’s highly rated answer.

    Before wading deeply into technicalities — which in any case are better covered in the various answers to Scott’s question on Physics StackExchange — it is well to reflect that if Nature *did* want to raise a mathematically subtle obstruction to quantum computation, then she could hardly do better than to incorporate into Hilbert space a continuum of states possessing not mere global Newtonian invariance, but rather locally relativistic invariance, together with bosonic gauge interactions that coherently amplify emission/absorption into that vacuum.

    Isidor Rabi famously asked of the muon [a heavy version of the electron] “Who ordered that?” Nowadays in the 21st century, perhaps we ought to be asking of the vacuum state of general relativity, the question that Scott Aaronson (in effect) was asking: “Who ordered that??”

    Perhaps the answer may be: “Nature created the gauge-invariant relativistic vacuum as an entropically necessary obstruction to (among other things) scalable quantum computation.”

    Now, a very natural aesthetic reaction is that vacuum interactions pollute the 20th century’s beautiful edifice of quantum information theory with (in Pauli’s 1931 phrase) Dreckeffects [dirt physics] … “und in Dreck sol man nicht wöllen” as Pauli famously advised young scientists.

    With the benefit of eighty-one years of hindsight, we nowadays appreciate that Pauli’s scorn for quantum Dreckeffects expressed a Great Truth, whose opposite also is a Great Truth. That is why 21st century quantum systems engineers “wöllen in quantum Dreck” with great enthusiasm and considerable practical success, and this happy experience accounts for engineering sympathies with Gil’s thesis!

    • John Sidles permalink
      September 17, 2012 12:39 pm

      As a followup-up, the above Pauli quotation apparently derives from a passage in a 1931 letter from Wolfgang Pauli to Rudolf Peierls that more completely and accurately reads “Der Restwiderstand ist ein Dreckeffekt und im Dreck soll man nicht wühlen” (of which my uncertain translation is “Residual resistance is a dirt effect, and one should not dig in dirt”).

      To the extent that a full appreciation of Gil Kalai’s conjecture requires that we “dig in dirt,” a natural question is (to borrow Voltaire’s happy phrase) “What quantum garden shall we cultivate?” Here it seems to me that a quantum garden of manageably modest acreage, that is well-bounded and that encompasses the key elements that motivate the Kalai conjecture, is the scalable production/observation of photon number states (as motivated by Aaronson and Arkhipov’s arXiv:1011.3245, for example).

      So perhaps in the next few days I will post some vacuum-physics observations relating to the field-theoretic dynamics of this relatively small, and much-studied, yet still-mysterious quantum garden.

      This is deep yet fertile humus, eh? Thank you, Aram and Gil, for inspiring us to dig in it! 🙂

    • September 17, 2012 2:07 pm

      I think the relativistic vacuum is a barrier to our calculations being simple, but I don’t see how it follows that it’s a barrier to scalable quantum computing. If it were, wouldn’t we have seen some kind of anomalous decoherence, or even energy shifts, somewhere along the way? You can’t have a multi-qubit error term that only shows up to decohere quantum computers, and doesn’t have any physical effects when you do other physics experiments.

    • John Sidles permalink
      September 17, 2012 3:01 pm

      Aram asks  “Wouldn’t we have seen some kind of anomalous decoherence, or even energy shifts”

      Aram, in the context even of single-qubit cavity QED experiments, anomalous decoherence and energy shifts are ubiquitously observed.

      Reasonable grounds for Kalai-style skepticism arise when we reflect that scalable quantum computation requires the simultaneous optimization of multiple figures-of-merit among multiple interacting qubits — frequency stability, photon-emission timing, photon-detection efficiency, and high-Q cavity dynamics, for example — that historically have been optimized only independently.

      Experience teaches that we can individually seek to approach perfect emission timing, and perfect detection efficiency, within QED cavities of arbitrarily low energy losses, for arbitrarily large numbers of photon-sources and photon-sinks … yet it is less obviously evident (to me) that the innumerable field-theoretic Drekeffekts, that are known to be associated to made-of-atoms dielectrics, metals, and semiconductors, permit us to optimize all of these figures-of-merit simultaneously … even in principle.

      After all, it took many decades to appreciate that we can measure position to arbitrary accuracy, and momentum to arbitrary accuracy, and yet (for deep reasons) we now appreciate that we cannot measure both to arbitrary accuracy. It is far from self-evident (to me) that Nature’s field-theoretic dynamics has no further surprising relations in store; hence my personal sympathy for Kalai-style conjectures.

      • September 19, 2012 12:46 pm

        The “anomalous” magnetic dipole moment is only an anomaly with respect to the classical theory; it fits the standard predictions of QM perfectly, with no extra terms required in the Lagrangian. I was referring to the possibility of new terms in the Lagrangian that would decohere quantum computers, which I find not so likely.

        And I agree that simultaneously optimizing many experimental components will be a tremendous engineering challenge, but it is hard to imagine any in-principle difficulty. Take optics: yes it is hard to optimize photon-emission timing as well as photon-detection efficiency. But one of these refers to the photon source, and the other to the detector. It’s hard to imagine how two components, sitting on opposite sides of the lab, could be tunable individually but not jointly.

      • John Sidles permalink
        September 19, 2012 1:44 pm

        Aram  “It’s hard to imagine how two components, sitting on opposite sides of the lab, could be tunable individually but not jointly.”

        In the context of cavity QED, this intuition fails generically for high-finesse (that is, low-decoherence) cavities.

        AdvLIGO provides a spectacular example: the coherent high-finesse light-beam linking two mirrors, two kilometers apart, exerts an (unstable!) Newtonian spring on the mirrors whose mechanical stiffness exceeds that of an equivalent two-kilometer bar of solid diamond.

        More prosaically, a near-unit-efficiency photon-source diode, and a near-unit-efficiency photon-detector diode, at opposite ends of a high-finesse single-mode optical fiber, most definitely exhibit strong cross-coupled quantum dynamical effects, that greatly complicate the quest to simultaneously optimize source efficiency and detector efficiency.

        Conclusion  It is far easier to specify mutually decoupled, near-perfect sources, mirrors, cavities, and detectors as abstract mathematical operators, than it is to design physical devices, constructed from real-world periodic-table atoms, that couple to a shared field-theoretic vacuum state.

        Historical example  In communication theory, it is mathematically convenient to specify ideal bandpass filters (perfectly flat, infinite roll-off, zero phase distortion), and early electrical engineers exhibited great ingenuity in seeking to approach this limit. It was not until the mid-1920s that the Kramers–Kronig relations established that the natural mathematical abstraction of ideality was infeasible physically.

  6. John Sidles permalink
    September 18, 2012 8:34 am

    To complete the above cycle of physics quotations relating to the crucial role of Pauli’s “Dreckeffekts” [dirt effects] in quantum computation, let us reflect upon two of Lars Onsager’s maxims:

    “There’s a time to soar like an eagle and a time to burrow like a worm. It takes a pretty sharp cookie to know when to shed the feathers and begin munching the humus!”

    “There are a lot of folks, some quite talented, who arm themselves with methods and then go hunting for vulnerable problems; but to accept a problem on its own terms and then forge your weapons — now that’s real class!”

    Here Onsager — who was an avid gardener — in effect is advising us that “cultivating the garden” of quantum computation requires that we “wühlen im quantenfeldtheorisch Dreckeffekten”, and moreover to efficiently “munch this humus” we must “forge new weapons” (mathematically speaking).

    In regard to Onsager-style mathematical weapons-forging, the Netherlands Genootschap Voor Meetkunde En Kwantumtheorie (Fellowship of Geometry and Quantum Theory, GQT) presents (IMHO) a natural extension of the toolset of Nielsen and Chuang’s Quantum Computation and Quantum Information as follows:

    Traditional QM/QIT/QSE toolset  Lie algebras, operator commutators, unitary rotations, decoherence, communication, fluctuations, ground states, informatic entropy, vectors, spaces, derivatives, and projections.

    GQT’s extended QM/QIT/QSE toolset  Lie derivatives, Poisson brackets, symplectic flows, localization, transport, dissipation, vacuums, level-set entropy, tangents, manifolds, differential forms, pullbacks, isometries and symplectomorphisms, and renormalizations.

    We notice in particular, that the GQT mathematical toolset does not supplant, but rather naturally extends, the now-traditional Nielsen/Chuang toolset for quantum dynamical analysis.

    An appreciation of the extended GQT mathematical toolset ignites within us a similar (very healthy!) dissatisfaction to that which Vladimir Arnold felt for Landau and Lifshitz’ Mechanics. Per Sergei Novikov’s account:

    Arnold told me that, after reading this book [Landau and Lifshitz’ Mechanics] he finally understood what mechanics was, and, after that, he understood how bad the book was.

    Similarly, as we assimilate GQT’s extended mathematical physics toolsets, we become dissatisfied with the initial chapter of Landau and Lifshitz’ Statistical Physics, which summarizes “The Fundamental Principles of Statistical Physics.” We are dissatisfied because this initial chapter awkwardly treats classical and quantum statistical mechanics separately, whereas the Nielsen/Chuang/GQT mathematical toolset naturally describes classical and quantum dynamics as limiting cases of a unified class of dynamical flows. This naturalizing perspective is exceedingly helpful in optimizing engineering algorithms for efficiently simulating quantum dynamics, and of course it is crucial to the development of string theory … or M-theory … or whatever theory emerges in the 21st century as the successor to 20th century quantum mechanics.

    This unifying, clarifying, mathematically naturalizing 21st century perspective motivates an appreciation of Landau and Lifshitz that is similar to Dirac’s appreciation of his own textbook:

    It’s not a bad book, but the first chapter is missing.

    Summary  The Harrow/Kalai debate helps us to appreciate three reasons why progress toward practical quantum computing has been dismayingly slow compared to the quantum computing roadmaps of a decade ago: (1) we have encountered dauntingly many quantenfeldtheorisch Dreckeffekten, (2) progress toward Nielsen/Chuang/GQT mathematical unification has been slow and there are dismayingly few good textbooks, and perhaps (3) our natural human enthusiasm for “eagle-soaring” quantum breakthroughs has been associated to an underappreciation of the virtues of Onsager-style “humus-munching.”

    This enlarged quantum perspective is (IMHO) very encouraging, as it presents to us an unbounded 21st century venue for unprecedented quantum experiments, stronger theories, deeper mathematics, novel technologies, and vast enterprises. In particular, we are authorized and encouraged to write, not appendices to 20th century physics textbooks, but new first chapters. For which, please accept this appreciation and thanks, Gil and Aram and Gödel’s Lost Letter! 🙂

    • Jim Blair permalink
      September 21, 2012 9:01 am

      Great post, John! 🙂

  7. Serge permalink
    September 18, 2012 8:38 am

    Wake me up when your quantum computer is ready! 🙂

  8. September 18, 2012 9:02 pm

    Let me make one further remark regarding Aram’s “classical error correction test”. When it comes to actual modeling of noise I leave this issue aside, and most of my conjectures/models are harmless for classical computing to start with. Conjectures 3 and 4 have rigorous formulations (see the paper “When noise accumulates”) which make sure that they don’t have any bearing on classical computation. In the next and last post I will briefly describe my model of smoothed Lindblad evolutions (from the same paper), and, again, it is based on noise which does not harm classical memory and computing from the start. I hope that at a later stage we will be able to show that even when the noise models are enlarged and put classical memory and computation in danger, classical information and computation prevail.

    This has some similarity to the situation for the threshold theorem. To the best of my memory, two of the first three papers (by Knill, Laflamme, and Zurek, and by Kitaev) take classical computation for granted. (Of course, we can take classical computing for granted, but letting classical computing emerge from the noisy model of computation speaks for the integrity of the noise model itself.) The proof by Aharonov and Ben-Or had the advantage of getting both classical and quantum computing “from scratch.”

  9. Robert Alicki permalink
    September 19, 2012 7:14 am

    There is an important missing or hidden assumption in Gil’s Conjecture 1: both the initial and final states of the encoded qubit must be stable. While the initial state can be a stable thermal equilibrium one, often very close to a pure ground state, the stability of the final state is the essence of FTC. The final state of an encoded qubit with a lifetime of, say, 10^{-20} sec is pretty useless for any purpose.

    Only quite recently I realized that the simplest way to understand the main problem with QFTC is to notice the fundamental conflict between stability and reversibility of information processing (see for details “Quantum memory as a perpetuum mobile? Stability vs. reversibility of information processing,” Robert Alicki, arXiv:0901.0811, last version 2012, to appear in Open Sys. and Inf. Dyn.). (BTW, I think I deserve the 100,000 USD prize of Scott Aaronson.)

    The main argument is the following. We have learned from numerous examples that the mechanism of state protection against the most fundamental and inevitable thermal noise is similar for classical and quantum information carriers and is described by the presence of free energy barriers separating different states. In the classical case this mechanism can be illustrated by the simplest model of a mean-field Ising ferromagnet; in the quantum domain it is much more involved (the 4D Kitaev model is probably the simplest realization).

    A classical Ising bit can be visualized as a fictitious particle moving in a symmetric double-well potential under the influence of friction and a random Langevin force satisfying the fluctuation-dissipation relation. This friction force leads to energy dissipation during a gate which changes a bit state. This amount of dissipated energy (provided by an external work source) is at least of the order kT N (N = number of spins encoding a bit), much larger and more realistic than the Landauer bound kT ln 2!

    Classical, strongly dissipative irreversible gates are, of course, exactly those used in our computers. Reversible classical computers based e.g. on “billiard-ball models” would not work effectively due to lack of stability and the devastating influence of chaos and noise. Adding stabilizing friction would destroy their operation principle. On the other hand, for a protected encoded qubit, decoherence accompanying energy dissipation destroys the unitarity of quantum gates and hence makes scalable QFTC impossible.

    • John Sidles permalink
      September 19, 2012 8:54 am

      Robert Alicki asserts  “The mechanism of state protection against the most fundamental and inevitable thermal noise is similar for classical and quantum information carriers and is described by the presence of free energy barriers separating different states.”

      Robert, please let me say that your post raises many good points. And yet for reasons that arise naturally in the coding of large-scale quantum simulations, no-go arguments based upon free energy have a (potentially) exploitable loophole, that is associated to the intimate interplay between the notions of free energy, entropy, and spatial localization.

      In a nutshell, for strictly computational reasons that are detailed on Physics StackExchange, it can happen — and does happen — that multiple qubits can be encoded in quantum trajectories for which no spatially localized entropy density function is well-defined. In consequence there is no well-defined free energy density function (which is the thermodynamic dual of entropy density), and neither is any other spatially localized thermodynamic potential function (e.g., a chemical potential) well-defined.

      For these spatially non-local qubits (and perhaps only for spatially non-local qubits?) it seemingly makes sense to talk of temporally stable qubits. This thermodynamical insight is of course well-known to the quantum computing community, and is well-described (for example) in John Preskill’s on-line notes Topological quantum computing for beginners.

      Needless to say, it is tricky to analyze noise processes for non-local qubits, and neither do we have much experimental experience with them, and so it may be that we don’t (as yet) appreciate the noise mechanisms associated to spatially nonlocal qubits all that well.

      Conversely, the noise processes that thermalize classical bits serve also to spatially localize them and to temporally stabilize them; and so in regard to classical bits there is (AFAICT) not much controversy regarding our quantum dynamical understanding of their stability.

    • September 19, 2012 12:55 pm

      Robert, I find your perpetual-motion arguments fascinating and would like to fully understand them sometime.

      As for what you write here, I agree that dissipation is necessary for the continued functioning of any computer, quantum or classical, and that the amount of dissipation should be proportional to (# of (qu)bits) * time * temperature. However, prescriptions for doing this are part of the standard FTQC approach, which calls for measuring qubits, and constantly bringing in fresh qubits in the |0> state. And yes, the physical gates are not unitary, but thanks to QECCs, the encoded dynamics *are* approximately unitary.

      So it seems to me that dissipation is fully compatible with quantum computing, just as it is with classical computing. This was a surprising new insight in the mid 1990’s when QECCs and the FTQC threshold theorem were discovered, but it seems to me they have now addressed this concern.

      • Robert Alicki permalink
        September 19, 2012 1:28 pm

        “I agree that dissipation is necessary for the continued functioning of any computer, quantum or classical, and that the amount of dissipation should be proportional to (# of (qu)bits) * time * temperature. ”

        Yes Aram, but you believe that a quantum gate restricted to encoded qubit (i.e. the information carrier degree of freedom) can be made arbitrarily close to unitary while for classical computers gates on bits are always very far from the Hamiltonian ones (you do not have continuous in time reversible dynamics on discrete classical spaces).
        The illusion of QECC and FTQC is based on:
        a) too weak (finite norm assumption) or too classical (exponential decay of correlations) models of noise,
        b) the idea of “fresh qubit supply” which was never demonstrated using first-principle models and, I think, is incompatible with Hamiltonian models of open quantum systems.

      • September 19, 2012 1:34 pm

        For both FT classical and quantum computers, the encoded gates are not continuous. I guess this is clear for classical computers; you flip a bit from 0 to 1, and there’s nothing continuous about it. But it’s also true for FTQC; you can do a certain set of encoded gates, like Cliffords and pi/8, and NOT a continuous set of gates.

        As for (a) and (b):
        (a) If noise correlations are so high, then why isn’t this a problem for classical computers? Also, the infinite norm correlations are a technicality, since we never actually have an infinite number of photons in any mode.

        (b) What about optical pumping? T1? In any system where we observe Rabi oscillations, we have to somehow initialize some polarization. The only difference in FTQC is that this happens in the middle of the computation. I don’t understand why it should be incompatible with Hamiltonian models of open quantum systems, and if it is incompatible, then that seems to be a sign that if anything it is the models that are wrong.

      • Robert Alicki permalink
        September 19, 2012 4:18 pm

        For both FT classical and quantum computers, the encoded gates are not continuous. I guess this is clear for classical computers; you flip a bit from 0 to 1, and there’s nothing continuous about it. But it’s also true for FTQC; you can do a certain set of encoded gates, like Cliffords and pi/8, and NOT a continuous set of gates.

        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

        The idea of discrete gates is an idealization; both quantum and classical dynamics are continuous in time. This idealization is particularly misleading when thermodynamical arguments are used.
        Thermodynamical quantities are PATH DEPENDENT. Bit flip is described in a physical theory by a continuous stochastic evolution during which entropy is produced, heat and work are exchanged between the information bearing degrees of freedom and environment.
        This is completely hidden in a discrete gate model.

        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        As for (a) and (b):
        (a) If noise correlations are so high, then why isn’t this a problem for classical computers?
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

        As I wrote, strong dissipation does not harm classical computation based on irreversible gates. On the contrary, it is necessary to maintain stability and it speeds up computations.

        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

        Also, the infinite norm correlations are a technicality, since we never actually have an infinite number of photons in any mode.

        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

        You do not need even a single photon to have infinite-amplitude quantum field fluctuations which cause e.g. spontaneous emission in vacuum. Even if they are not strictly infinite they are cutoff-dependent, i.e. determined by a very high energy scale where quantum electrodynamics breaks down and other interactions must be taken into account. You cannot treat a very large parameter as a small constant in any reasonable physical model.

        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        (b) What about optical pumping? T1? In any system where we observe Rabi oscillations, we have to somehow initialize some polarization. The only difference in FTQC is that this happens in the middle of the computation.

        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

        I do not understand the relation between Rabi oscillations and fresh qubit supply.

        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

        I don’t understand why it should be incompatible with Hamiltonian models of open quantum systems, and if it is incompatible, then that seems to be a sign that if anything it is the models that are wrong.

        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

        Try to construct a Hamiltonian model of a fresh almost-pure qubit supply. Firstly, you have to preserve the qubits in a thermal environment using a protecting Hamiltonian which produces a large energy gap (>> kT) between the ground state and the excited ones. Secondly, to integrate such an almost-pure qubit with the computer you have to switch off the protecting Hamiltonian. Whatever you do, whether the process is slow or fast, the qubit becomes dirty.
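
        For reference, the standard Gibbs estimate makes the first step quantitative: the equilibrium excited-state population of a two-level system with gap {\Delta E} at temperature {T} is

        \displaystyle p_{\mathrm{exc}} \;=\; \frac{e^{-\Delta E/kT}}{1+e^{-\Delta E/kT}} \;\approx\; e^{-\Delta E/kT} \qquad (\Delta E \gg kT),

        so the protected qubit is almost pure only for as long as the large gap is maintained.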

  10. John Sidles permalink
    September 19, 2012 5:04 pm

    Robert Alicki challenges: “Try to construct a Hamiltonian model of fresh almost pure qubit supply.”

    Continuous interferometric observation-with-feedback yields a single-qubit thermal density matrix that approaches an equilibrium temperature of ±0 (that is, perfect |0\rangle and |1\rangle states) … in the utopian limit of unit photon detection efficiency and optimal feedback gain, that is.

    Indeed, regarding a nanoscale cantilever as (effectively) a large-j qudit, this is the process by which we experimentalists dynamically synthesize ground states, and it is routine to achieve qudit noise temperatures that are small fractions of ambient temperatures.

    Conclusion The on-demand dynamical synthesis of initialized ancillary qubits/qudits is not (AFAICT) an insurmountable obstacle to large-scale FTQC … provided that the “FT” part works as-designed! 🙂

    • September 19, 2012 5:26 pm

      In fact, even classical computers achieve this, albeit at high per-bit noise rates.

    • Robert Alicki permalink
      September 20, 2012 12:41 am

      Yes, in a utopian world!

  11. September 19, 2012 6:47 pm

    Here is another interesting issue which is related to the discussion while not having a clear bearing on the debate.

    Classical evolutions that do not support computation (or, in other words, how to define classical non-FT evolutions?)

    While both Aram and I are in agreement that classical information and computation are possible, a question that I find fascinating is how to formally describe non-fault-tolerant classical evolutions. A related question is: given a class of classical evolutions (perhaps also with certain restrictions on the initial conditions), when can we say that this class supports universal classical computing? Some versions of my conjectures/models may be relevant here, and for this purpose we want to fail the classical error-correcting test! Why can this be important? I would think (OK, speculate is a better word) that when a class of processes allows universal classical computation and supports fault-tolerance, then this can potentially be used to demonstrate various types of self-defeating behavior and Maxwell demons, and perhaps be relevant to issues of well-posedness and regularity.

    • September 19, 2012 7:32 pm

      I can imagine error models that do not support FTCC, but it’s hard to imagine a formal definition of “classical computation without fault tolerance.” As I mentioned in an earlier post, we can’t define precisely “classical computation without the FFT” (fast Fourier transform), and the same should go for any technique that we want to exclude.

      On the other hand, analog computing is a good intuitive picture of non-FT classical computing. I just doubt whether this can be made rigorous.

      • September 20, 2012 5:24 am

        Aram, what is the problem with the formal definition? Let’s consider some Turing-complete model represented as a discrete deterministic system with transitions between some set of states. After that, extend the system by adding “erroneous transitions” with some probabilities. Etc., etc.
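
        A minimal sketch of the kind of model Alexander describes (the choice of rule 110, an elementary cellular automaton known to be Turing-universal, and the independent per-cell error probability are illustrative assumptions only):

            import random

            RULE = 110  # elementary cellular automaton rule 110 is Turing-universal

            def step(cells, p_err=0.0):
                """One synchronous update of a cyclic rule-110 automaton, followed by
                independent "erroneous transitions" (bit flips) with probability p_err per cell."""
                n = len(cells)
                new = []
                for i in range(n):
                    neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
                    bit = (RULE >> neighborhood) & 1      # the deterministic transition
                    if random.random() < p_err:           # the added erroneous transition
                        bit ^= 1
                    new.append(bit)
                return new

            # usage: run the same initial condition with and without noise
            noisy = clean = [random.randint(0, 1) for _ in range(80)]
            for _ in range(100):
                noisy, clean = step(noisy, p_err=0.01), step(clean)

        The question then becomes which noise levels, and which readout conventions, leave such a system unable to protect even a single bit.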

      • September 21, 2012 10:50 am

        When it comes to non-FT quantum evolutions, a reason I am optimistic that a formal description is possible is that I have concrete proposals for such a description (which we have discussed here over the last months). The classical case indeed looks harder, but it also looks possible. There are many cases where we can put various restricted classes of evolutions or computations on formal grounds. We can model “computation without backtracking” via monotone circuits, we can model “quantum computation without cooling” via irreversible noisy circuits, we can model computation without exotic bit operations via arithmetic circuits, and there are many more examples. As for computing without the FFT (and here I am not sure, Aram, if you want to exclude specifically the FFT or any form of DFT), this does not look like a good analogy for our discussion, but it is a cute question otherwise.

      • John Sidles permalink
        September 21, 2012 11:25 am

        To agree with Aram (as I have understood his posts), it is considerably less easy than one might at first imagine to rigorously distinguish (classical) analog memory/computation processes from (classical) digital memory/computation processes.

        For example, we have modern flash memories that contrive to store multiple error-corrected binary bits in multiple charge levels on individual capacitors. These flash memories belong to the same technological family as charge-coupled optical sensing arrays that store quasi-analog images in a quasi-continuum of charge levels on similar capacitors. Where in this family of devices is the rigorous dividing line between digital flash bits and analog CCD images? Don’t ask me!  🙂

        In computation too, this same blurring of analog-versus-digital exists, per the family of computers described in US Patent 2,815,488 Non-Linear Capacitance or Inductance Switching, Amplifying and Memory Organs (1957), whose inventor was … John von Neumann himself! And yes, these quasi-analog, quasi-digital computation devices were for a time commercialized as parametron computers.

        Among the seminal articles that discuss parametronic computation are two by Rolf Landauer (“Dissipation and noise immunity in computation and communication”, Nature 1988, and the follow-up article “Dissipation and noise immunity in computation, measurement, and communication”, Journal of Statistical Physics, 1989), which in turn were generalized to the quantum domain by Seth Lloyd’s “Any nonlinear gate, with linear gates, suffices for computation” (Physics Letters A, 1992).

        Summary  In distinguishing “analog” from “digital”, perhaps it is well to keep in mind the legal maxim:

        “Justice is the tolerable accommodation of the conflicting interests of society, and I don’t believe there is any royal road to attain such accommodation concretely.”

        — Judge Learned Hand

        Both in the classroom and in the courtroom it is convenient to distinguish between analog and digital sensing, switching, memory, and computational technologies … and yet all parties should keep in mind that the mathematical and physical foundations for this distinction are highly imperfect.

      • September 21, 2012 11:51 am

        I also agree here. I never understood the distinction between “analog” and “digital” even on the intuitive level.

      • Jim Blair permalink
        September 22, 2012 6:33 am

        Gil,

        I am not sure how useful this idea is in terms of the current discussion, but it might be useful on an intuitive level.

        Find an old-fashioned slide rule and use it to solve a couple of problems.

      • John Sidles permalink
        September 22, 2012 3:52 pm

        LOL … and yet, a slide-rule in which the physical motions are mediated by gears becomes (in essence) a digital device! 🙂

        This illustrates how distinctions that are algebraically natural and mathematically rigorous are sometimes neither physically natural nor practically unambiguous.

      • Jim Blair permalink
        September 27, 2012 11:23 pm

        John,
        John,

        In general I agree with you. The distinction between analog and digital computing devices is not as simple as a mechanical/electrical dichotomy.

        I am not sure you are 100% correct in viewing all the fire control devices as being digital.

        If the gears are used as counters for example, then the device is essentially digital.

        If the gears transmit mechanical motion to a geometrically designed cam, then I think an argument can be made that the device is essentially analog.

        Whatever the case, the amount of ingenuity that went into designing those devices was remarkable.

        When you consider that the big guns on those old battleships could fire a shell that weighed as much as a small car and hit a target miles away, those devices are even more amazing.

      • Jim Blair permalink
        September 27, 2012 11:26 pm

        Sorry about the extra John. Not sure how it got there.

  12. Robert Alicki permalink
    September 20, 2012 12:58 am

    “On the other hand, analog computing is a good intuitive picture of non-FT classical computing. I just doubt whether this can be made rigorous.”

    Why? Using the notions of “chaos theory” (Lyapunov exponents, K-S entropy) it should be possible, and it would be very helpful for understanding the analogous problems in FTQC.

    BTW, for those who do not like the thermodynamical arguments, there is another signature of failure of the idea of efficient QC.
    It is known that some hard problems are equivalent to finding the lowest eigenvalue of a certain Sturm-Liouville operator (i.e. the ground-state energy of a certain quantum system).
    Assume we want to measure the first N digits of this energy (an analog quantum computation equivalent to determination of the ground state). It means that the accuracy of the measurement is 2^{-N}, and hence due to the Heisenberg relation we need a time of order 2^{N} for such a measurement. Again bad scaling!
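
    In symbols, the claimed scaling is that an energy resolution of {\Delta E \sim 2^{-N}} forces, via the time-energy uncertainty relation, a measurement time

    \displaystyle \Delta t \;\gtrsim\; \frac{\hbar}{\Delta E} \;\sim\; \hbar\, 2^{N}.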

    • September 21, 2012 12:59 am

      I think a lot of these arguments apply to various attempts at analog computing, classical or quantum. And I agree that this setting is unlikely to scale well.

      But the reason why I doubt it can be made rigorous is that digital computing can be embedded in analog computing (i.e. I can build a pentium out of transistors). So it is simply not true that analog computers can never do anything FT; they can at the very least perform FT simulations of digital computers.

      • Robert Alicki permalink
        September 21, 2012 2:54 am

        “But the reason why I doubt it can be made rigorous is that digital computing can be embedded in analog computing (i.e. I can build a pentium out of transistors). So it is simply not true that analog computers can never do anything FT; they can at the very least perform FT simulations of digital computers.”

        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

        The situation is much more interesting and, I am sure, could and should be formalized using the notions of dynamical systems theory. There are essentially two types of classical dynamical systems: a) conservative, volume- (measure-) preserving, reversible (e.g. Hamiltonian); b) non-conservative, dissipative, irreversible.

        a) Conservative systems are pretty useless for FT computation because of chaos (reversible computers must be strongly nonlinear, because for linear systems always 1+1 = 2 and we need Boolean 1+1 = 0).

        b) Dissipative systems possess stable attractors (e.g. fixed points, limit cycles) which define their basins of attraction. Hence the phase space is decomposed into a discrete set of basins of attraction, which corresponds to a discrete set of states of a digital computer. Dissipative dynamical systems are physical models for FT digital computation. The price for FT is digitization.
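
        A minimal toy sketch of this digitization-by-dissipation picture (an overdamped particle in a double-well potential with additive noise; the potential, noise level, and time step are purely illustrative):

            import random

            def run_bit(x0, steps=2000, dt=1e-3, noise=0.2):
                """Overdamped motion dx = -V'(x) dt + noise dW in the double-well
                potential V(x) = (x^2 - 1)^2 / 4.  The wells at x = -1 and x = +1
                are the basins of attraction encoding logical 0 and 1."""
                x = x0
                for _ in range(steps):
                    drift = x - x ** 3                    # -V'(x), the dissipative pull
                    x += drift * dt + noise * (dt ** 0.5) * random.gauss(0.0, 1.0)
                return 0 if x < 0 else 1                  # decode by basin of attraction

            # usage: a perturbed "0" (x0 = -0.8) is pulled back into its basin almost surely
            flips = sum(run_bit(-0.8) for _ in range(100))
            print(flips, "of 100 runs were misdecoded as 1")

        The dissipative drift toward the well bottoms is what makes the stored bit robust; without it only the diffusion remains and the value drifts away.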

        The FTQC optimists want to have their cake and eat it too: they want stabilization by dissipation, but they also want to preserve continuity in terms of superpositions of states.

  13. September 20, 2012 7:13 pm

    Maybe I was too quick to mix two issues which are not the same so let me try to discuss them separately and raise a few questions.

    Given a class of classical evolutions and perhaps also some restriction on initial conditions is it possible to talk about the computational power expressed by this class?

    For example, I know that Conway’s Game of Life supports universal classical computing. Is it possible to say (or better, has it already been said) something of this kind for evolutions described by certain dynamical systems? By PDEs?

    As I said, it looks (naively, perhaps) that when your class of evolutions is large enough to support computation, then this can be used to describe self-defeating behavior, Maxwell demons, ill-posed examples, etc. (Again, this may be naive as well as a familiar thought/idea/approach, and if so I will be happy to hear about it.)

    The second, related question, which requires some stochastic ingredient and a notion of noise, is whether it is possible to formally describe the class (or a reasonable class) of noisy classical evolutions that does not allow classical error correction and classical fault tolerance. I agree that this might be hard (perhaps even harder than the quantum case). How can we prevent the creation of a single noiseless bit of information (and still do something interesting)? Aram wrote that analog computing is a good intuitive picture of non-FT classical computing. I will be happy to hear more, on the intuitive level, about what Aram means here.

    • September 21, 2012 12:01 am

      By analog computing, I don’t mean anything deep. I am thinking about something that might be used in analog audio processing. Say each wire holds a real number: x, y, z, etc., represented by a voltage. Then we can hook them up in ways to set z = x + y, or z = x * y, or z = 2 * x, etc. (The true functions will be more complicated, and maybe time-dependence could be part of it, but let’s ignore this for now.) This isn’t FT, because if x is replaced by x ± eps, then nothing in the circuit can say that this is wrong, and so errors will just build and build (even apart from the fact that nonlinearity can amplify them).

      Of course, this model includes the option of setting things up so that the only stable voltages are 0V and 5V, and thus it can simulate digital computation. In this case, it probably will be FT. So that’s why I don’t think there’s any precise way to define “non-FT analog computing.”
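
      A toy sketch of that contrast (the identity-gate chain, the error size, and the 0V/5V restoration rule are illustrative assumptions, not a model of real hardware):

          import random

          def analog_chain(x, gates=100, eps=0.01):
              """Pass a real-valued signal through a chain of identity gates, each adding
              a small random error; nothing in the circuit can tell the value has drifted."""
              for _ in range(gates):
                  x += random.gauss(0.0, eps)
              return x

          def restored_chain(x, gates=100, eps=0.01):
              """Same chain, but each stage snaps the signal to the nearest stable level
              (0 V or 5 V), as in the bistable-voltage setup described above."""
              for _ in range(gates):
                  x += random.gauss(0.0, eps)
                  x = 0.0 if x < 2.5 else 5.0
              return x

          print("analog  :", analog_chain(0.0))    # drifts by roughly eps * sqrt(gates)
          print("restored:", restored_chain(0.0))  # stays at 0.0 with overwhelming probability

      In the first chain nothing ever decides what the value “should” be, so errors accumulate; in the second, the restoration step is exactly the digital simulation mentioned above.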

    • September 23, 2012 10:44 am

      The question of describing the “computational power” of a specific class of evolutions (and initial conditions) seems interesting. I mentioned Conway’s Game of Life, which supports universal Turing machines, and this can be taken as some sort of “role model”. However, putting this question on formal grounds does not look easy, especially for continuous or noisy evolutions. (Of course, in the study of dynamical systems there are various other notions of complexity which may apply, and it is not clear that computational complexity notions are useful.)

      Regarding classical non-FT evolutions, one possible suggestion for a non-FT evolution is an evolution which, on every scale, admits a bounded-depth approximation. Another possibility would be to consider the classical analogs of my conjectures about correlation of noise (these analogs do not generally hold for noisy classical evolutions) as a basis for a definition of non-FT classical noisy evolutions. It would be interesting to offer a notion of stochastic non-FT versions of some continuous evolutions based on standard PDEs. Another point of view might be to regard non-FT evolutions as a class of evolutions where we cannot violate the rules of thermodynamics at any scale; e.g., we cannot create Maxwell demons.

      Getting a little wild, and picking up on this comment by John Sidles, we can consider the Navier-Stokes equations. What kind of computation is supported by the NS equations in two dimensions? In three dimensions? (See this post by Tao for a discussion of this notorious problem.)

      I am aware of a paper by Andy Yao, “Classical physics and the Church-Turing Thesis,” on somewhat similar ideas. (Like many other authors, Yao’s heart seems to be in the direction of achieving superior computational power rather than exploring and using inferior computational power.)

      • September 24, 2012 6:00 am

        Not very long ago a universal cellular automaton on Penrose tilings was found (http://arxiv.org/abs/1208.2771). This was a result of recent interest in analogues of Conway’s Life in such an irregular (“noisy”) environment.

      • September 24, 2012 9:55 am

        That’s interesting, Alexander. I also looked and found several relevant references on the question of the computational (and computability) power of PDEs. Here is a MathOverflow question entitled Simulating Turing machines by ODEs and PDEs by Mariano Suárez-Alvarez. The question also refers to an earlier MO question on the general theory of PDEs and to a blog post by Terry Tao on the “non self-defeating argument”, and especially to a discussion there on the comment thread.

        Of course, if we consider continuous evolutions we should clearly specify the kind of approximation we allow. One thing that I still did not find is an answer to a question like “What is the computational power of evolutions based on the 2D NS equations?”, or whether Turing universality is related to classical questions regarding regularity. I also did not find issues of fault tolerance and error correction coming into play in this context.

      • September 28, 2012 8:32 am

        Maybe the classical analogue, “self-organization structures”, may be compared with error correction?

  14. Serge permalink
    September 21, 2012 10:14 am

    Whether large-scale quantum computers are feasible looks very much like the P=NP problem to me: both questions have a definite answer, but it’s out of our reach. In other words, why not replace the questions:

    “Is there an efficient algorithm for SAT?”
    by
    “Can a thinking being find an efficient algorithm for SAT?”

    and

    “Can quantum computers work on a large scale?”
    by
    “Can living beings build a large-scale quantum computer?”

  15. September 21, 2012 11:48 am

    I would like to discuss another matter that I raised in the post and to look critically at John Preskill’s paper. A good place to start is this comment by John:

    In any case, Gil believes that Hamiltonian noise models like the one discussed in my notes are not physically realizable; that could well be, though I would like to understand better his reasons for holding that opinion. And it is a big leap from there to the statement that large-scale quantum computing is impossible.

    I would like to explain why I regard the correlation assumptions offered by John as unrealistic, and I think that it can be useful to clarify this issue. Ironically, if I am correct, this will also show how large the leap that John talks about in the last sentence is, and how much wider the classes of noise that support FTQC are. So this is a technical matter that skeptics and believers can take both ways.

    I expect that the mild conditions on correlations in John’s paper (as well as in the earlier paper by Aharonov-Kitaev-Preskill) imply a central limit theorem for the number of errors. In particular, the probability of deviations of the number of errors above or below its expectation will decay exponentially in the size of the deviation, and the standard deviation of the number of errors behaves like the square root of the number of qubits.

    I regard this feature as unrealistic. I would expect that for realistic noise the standard deviation of the number of errors is linear in the number of qubits. (I even guess that this is the case for digital memories, but on this point both Aram and Robert Alicki disagree.) As I said, you do not need such square-root-of-n concentration of the number of errors in order to have FTQC. Quantum fault-tolerance only requires that the probability for the number of errors to go over the threshold decays fast enough. (And polynomial decay is fast enough.) But this also suggests that John’s modeling (as well as the earlier ones) misses the first-order effect regarding the behavior of the number of errors, and that there is something basic about noise missing from John’s model.
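
    A quick Monte Carlo sketch of this distinction (the numbers, and the “shared fluctuating rate” used here to generate correlated errors, are illustrative assumptions only): with independent errors the standard deviation of the error count grows like the square root of the number of qubits, while a common fluctuation of the error rate makes it grow linearly.

        import random, statistics

        def iid_errors(n, p=0.01):
            """Independent errors: each of n qubits fails with probability p."""
            return sum(random.random() < p for _ in range(n))

        def shared_rate_errors(n, p=0.01, spread=0.5):
            """Correlated errors: a common fluctuation first shifts the error rate for
            the whole block, then the qubits fail independently at that shifted rate."""
            rate = p * (1.0 + spread * random.uniform(-1.0, 1.0))
            return sum(random.random() < rate for _ in range(n))

        n, trials = 10_000, 2_000
        for model in (iid_errors, shared_rate_errors):
            counts = [model(n) for _ in range(trials)]
            print(model.__name__, "std dev ~", round(statistics.stdev(counts), 1))
        # iid_errors:         std ~ sqrt(n*p*(1-p)) ~ 10, i.e. square-root scaling
        # shared_rate_errors: std ~ 30 here, dominated by a term proportional to n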

    • September 23, 2012 3:59 am

      For digital RAM the error rate has been studied empirically at Google. Here is the abstract, and there is a PDF link as well
      http://research.google.com/pubs/pub35162.html

      In section 5 they look at the scaling for the number of errors when the RAM size is doubled. For several platforms they find no increase in the number of non-correctable errors. For other platforms they find an increase by more than a factor of 2.
      This is probably explained by the latter RAMs being based on a large number of small memory chips and the stable ones being built from a smaller number of large chips.

      • September 23, 2012 7:34 am

        Thanks for the comment, Klas. I referred in my comment to the microscopic structure of a single digital bit and not to the number of bit errors in RAM.

        If a ‘0’ bit is represented by N=10^{10} electrons, of which a fraction 0.01 have up spins, Aram expects that the standard deviation of the number of up-spin electrons will be like \sqrt{N}=10^5, and I expect that the standard deviation will behave like a small constant times N=10^{10}. (Referring to up-spin electrons as “errors” is based on Aram’s Rep-B interpretation.)

      • September 23, 2012 8:46 am

        So you were not talking about the way actual digital memories work when you said that you suspect that their number of errors is linear in the number of bits?

        In a standard digital memory, either on a processor or in a memory chip, the bits are not encoded by electron spins. Instead the state of a single bit is given by the amount of charge in a capacitor-transistor pair. A RAM chip is built by simply having many capacitor-transistor pairs implemented on the same piece of semiconductor.
        http://en.wikipedia.org/wiki/Dynamic_random_access_memory
        I’m sure there are some numbers available on the charge stability of a single capacitor-transistor pair as well, even though I don’t have them at hand.

      • September 23, 2012 9:52 am

        Dear Klas,

        Indeed, I was not talking (in the post and the comment above) about entire digital memories with a large number of bits. I thought that this was clear from the context, but thanks for clarifying it. When we discuss the repetition code in the context of digital memory, the issue is how a single bit is represented.

        Aram and I were talking about the microscopic representation of a single bit by a vast number of microscopic elements. I suppose that the charge you mention translates to some microscopic behavior of a vast number of particles representing a single bit, but indeed thinking about electrons and up-spins may well be an abstraction (or a very special implementation).

        If the microscopic properties of the physical device representing a single bit depend on the stability of some charge, then this will overall support my point of view.

      • John Sidles permalink
        September 23, 2012 10:57 am

        A conversation with Aram led to the following postulate:

        Postulate (Roadhouse of the Larger State-Space)  Any fault-tolerant computation — whether classical, quantum, or hybrid — that is conducted within a bit/qubit memory of fixed finite size, via time-dependent Hamiltonians, projective measurements, and conditional error-correction, can be simulated to arbitrary accuracy upon a larger-dimension state-space as a Hamiltonian/Lindbladian trajectory unravelling, in which the Hamiltonian/Lindbladian generators are strictly time-independent, no measurements are projective, and there is no external conditional logic.

        This postulate — of which I am about 85% confident (definitely not 100%) — gives me great respect for John Preskill’s FTQC theorems.

        Conversely, this postulate focuses (my own) residual skepticism regarding the feasibility of FTQC onto two topics: (1) the physical limits to the spatial and temporal localizability of Lindblad generators, given that these generators are subject to the quantum-field-theoretic “dirt effects” (Dreckeffekten) mentioned above, and (2) the non-symplectomorphic nature of the unravelled Hamiltonian/Lindbladian (stochastic) dynamical flow, which generically acts to compress dynamical trajectories onto effective quantum state-space manifolds of reduced dimensionality and generically negative curvature (relative to the large-dimension flat Hilbert state-spaces that are postulated in the 20th century’s canonical algebra-oriented texts, e.g. Nielsen and Chuang).

        Skepticism of quantum computing arises (for me) because the above two considerations render it less-than-obvious that the effective state-space geometry of real-world FTQC computers has sufficiently many dimensions, and sufficient geometric robustness in the presence of the above-mentioned Dreckeffekten, as to support the large-scale FTQC processes that are necessary to complexity classes like BQP and/or QMA.

        Provisional conclusion  The same Preskill-style FTQC postulates that are plausible and natural in algebraic descriptions of projectively error-corrected computational processes on large-dimension flat Hilbert state-spaces, seem less plausible and less natural when these same computational processes are geometrically unravelled as Hamiltonian/Lindbladian (stochastic) dynamical flows on low-dimension non-flat effective state-spaces.

        That is why (IMHO) we won’t really be confident that FTQC is feasible, until we unravel its theoretical foundations both backwards and forwards, both algebraically and geometrically … and then verify this naturalized and universalized quantum dynamical understanding by designing and operating working FTQC devices.

    • aramharrow permalink
      September 23, 2012 9:52 am

      The spin interpretation is more appropriate to hard drives, where my understanding is that the stability is mostly due to ferromagnetism.

      As for RAM, what we have been calling the number of bits refers to the microscopic sizes of the memory, so to resolve this point we’d want to know how the voltage fluctuations scale with the size of the wire (i.e. amount of current) or potential differences of the individual electrons (i.e. voltage).

      • Robert Alicki permalink
        September 23, 2012 10:12 am

        It is always the same: normal fluctuations proportional to the square root of the number of elements (electrons, spins, photons, atoms, etc). The only exception is the point of a phase transition, where larger, abnormal fluctuations appear. This follows from the additivity of energy, which assures the stability of matter (at a reasonable scale; due to gravity the energy of the Universe is non-additive and the Universe is unstable).

      • September 23, 2012 6:39 pm

        Another case to check whether Rep-B or Rep-C is more realistic is to consider current devices that can create a large number of cat states on pairs of qubits. If \psi stands for an ideal cat state, Rep-C will predict that we will get something like N(\psi'^{\otimes n}), where \psi' itself is a certain noisy version of \psi and N is some standard independent noise. Rep-B will predict something like N(\psi^{\otimes n}).

        (Aram’s text hints that we should reject the Rep-C description because the cloning map is nonlinear. But this does not sound correct to me.)
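
        A small numerical sketch of how the two predictions could in principle be told apart (the XX-parity test, the local error rate p, and the spread of the shared over-rotation delta are illustrative assumptions): under Rep-B the number of copies failing the parity check fluctuates like the square root of n, while under Rep-C the shared noisy preparation makes it fluctuate linearly in n.

            import math, random, statistics

            def failures_rep_b(n, p=0.02):
                """Rep-B: n independent noisy copies of the ideal cat state; each copy
                fails an XX-parity check independently with probability p."""
                return sum(random.random() < p for _ in range(n))

            def failures_rep_c(n, p=0.02, delta_std=0.15):
                """Rep-C: the device prepares copies of a shared noisy cat state
                cos(pi/4+delta)|00> + sin(pi/4+delta)|11>, plus independent noise.
                The shared over-rotation delta raises every copy's XX-failure
                probability at once, to roughly p + (1 - cos(2*delta)) / 2."""
                delta = random.gauss(0.0, delta_std)
                p_fail = min(1.0, p + (1.0 - math.cos(2.0 * delta)) / 2.0)
                return sum(random.random() < p_fail for _ in range(n))

            n, trials = 5_000, 2_000
            for model in (failures_rep_b, failures_rep_c):
                counts = [model(n) for _ in range(trials)]
                print(model.__name__, "mean", round(statistics.mean(counts)),
                      "std", round(statistics.stdev(counts)))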

      • September 24, 2012 12:37 am

        Sorry, I probably shouldn’t have brought up the fact that the cloning map is nonlinear. You’re right that it’s not the most relevant objection. And of course, in a sense it’s strange to “object” to a mathematical function. But let me try to say more clearly why I think that Rep-B is a better map to think about in this case.

        If you explain textbook FTQC, the story goes something like this:
        “Let E be a QECC, like concatenated Steane. Prepare a bunch of copies of E(|0>). Then do encoded operations on them. Between each step, do error correction. At the end, do a FT measurement.”

        If you replace E with “Rep-B” then this exactly describes classical computing. If you replace E with “Rep-C” then various pieces of this don’t go through (as I mentioned in my post). So in order to discuss FTCC, I think Rep-B is the code to focus on. And FTCC is highly relevant to the discussion because I believe that the empirical demonstration of FTCC is strong evidence in favor of FTQC being possible in principle.

      • September 24, 2012 8:19 pm

        Well, I think the nonlinearity of the cloning map is relevant and it is good that you mentioned it, and overall how classical robust information (and computation) emerges from the noisy quantum “mess” is a bit of a mystery.

        When you say “prepare a bunch of copies of E(|0>)”, the question is what this means. To start with, I suppose that |0> has no special role in the Bloch sphere (or else it already represents a robust classical entity).

        When you prepare these copies, do you prepare independent noisy copies of the same state |0>, or independent noisy copies of a noisy version of |0>? The first possibility (which resembles Rep-B) does not look realistic unless your initial |0> is robust to start with. The second possibility, which resembles Rep-C, looks more realistic to me, but still the fact that the cloning map is nonlinear is disturbing in this context.

        I think we share the idea that “repetition” is the mechanism we know for how robust classical information (that also enables computation) emerges. But it is not even clear that talking about the repetition “code” is relevant. (Of course, once you have robust classical information, classical ECCs are very relevant and realistic.)

      • September 24, 2012 8:33 pm

        I guess I was again unclear.
        By “a bunch” I mean “k”, where k is the # of logical bits your program needs. But then the encoding map E(.) sends one qubit to n qubits.

        For example, a 100GB hard drive might have k = 10^12 logical bits, but each logical bit is encoded in n = 10^8 spins, so altogether there are 10^20 spins.

        There is nothing particularly special about |0> in this picture: it’s more that FT computing requires three FT ingredients: preparation, manipulation and readout. If a FT NOT gate is included, then it suffices to consider only preparation of E(|0>), and you don’t need to bother with E(|1>), or (in the case of quantum computing) with E(|psi>) for other |psi>.

        Also, I’m not sure what you mean about |0> being robust to start with. Of course no single-qubit state is ever robust. As far as I can tell, information is only robust when it’s encoded in an error-correcting code. But we should be able to avoid thinking about the robustness of |0> by attempting to prepare E(|0>) “directly”, i.e. _not_ by first creating |0> and then applying the map E.

      • September 24, 2012 9:57 pm

        OK, let’s talk about 10^12 logical bits, each composed of 10^8 microscopic elements. We can think about these microscopic elements as noisy spins, or we can also think about them as noisy qubits. Now we try to think about the statistical properties of the microscopic elements for a single logical bit which represents ‘0’.

        Let’s suppose that the microscopic elements are themselves qubits. Ideally ‘0’ is represented by (i.e. E(‘0’) is) a whole bunch of qubits in the same specific state \phi. As you said, we don’t need to create a robust \phi but can go ahead and create only E(\phi). So the way you see it is that we have a machine that creates E(\phi), which is 10^8 copies of a specific \phi. You think that this can be achieved up to independent errors on the 10^8 qubits.

        The way I look at it is that every such machine will actually create a mixture over copies of various \psi, where \psi is a noisy version of \phi to start with (neither \phi nor \psi need be explicitly created). On top of this you also have independent noise. So the machine I am thinking about is much worse than your machine. Nevertheless, my machine enables robust logical bits.
        The statistical properties of the microscopic structure of the 10^12 logical bits are more permissive in my machine.

        My machine looks more realistic to me than yours. You correctly claim that your description fits more nicely with the theory of classical error-correcting codes, which extends nicely to the theory of quantum error-correcting codes. I agree with that, but so what?

      • September 25, 2012 1:39 am

        Let’s say that E(|0>) means I write down “0” on a piece of paper, and E(|1>) means I write down “1” on a piece of paper. I think that yes, I can choose to write down “0” or “1” with very low probability of mistakenly writing the wrong number, although there will be small fluctuations in the shape of my pen stroke. (If you don’t want to think about free will, imagine that “I” am a well-trained scribe taking dictation.)

        If you think that I’ll accidentally prepare E(sqrt{0.99}|0> + sqrt{0.01}|1>), (i.e. E(psi), for psi a noisy version of |0>) then this is essentially the same as me having a 1% chance of accidentally writing down “1”. Worse, since the conjecture applies to EVERY error-correcting code, if I imagine myself encoding 0 as 00000000 (again written in pen) and 1 as 11111111, then there will be a 1% chance that I will want to write 00000000, but instead write 11111111 (after all, you posit that the pre-encoding noise rate is a universal constant, independent of the code used). Additionally, if I imagine that a 1 is encoded as 10101010, then I have a 1% chance of writing this string instead, so the behavior cannot be separated from my intentions.

        Put it all together, and you get why I think the conjecture is unworkable.

      • September 25, 2012 6:29 am

        Aram, what you describe is classical error correction after robust classical information has already emerged. I am not sure what conjecture you refer to at the end. Here, we are not talking about Conjecture 1 but about the mechanism by which classical information emerges from the noisy quantum mess. (Regarding Conjecture 1, I agree that Rep-B is relevant, and I explained what the gap in your formulation was and why Conjecture 1 does work for the repetition code.)

  16. Serge permalink
    September 23, 2012 5:54 pm

    > “I never understood the distinction between “analog” and “digital” even on the intuitive level.” (Gil Kalai)

    OK, but this has a big consequence: computer science and its algorithms should be replaced by a wider, more general “theory of problem solving” that includes all kinds of methods – whether digital or analog.

    • September 25, 2012 8:19 am

      “If you think that I’ll accidentally prepare E(sqrt{0.99}|0> + sqrt{0.01}|1>), (i.e. E(psi), for psi a noisy version of |0>) then this is essentially the same as me having a 1% chance of accidentally writing down “1″. Worse, since the conjecture applies to EVERY error-correcting code, if I imagine myself encoding 0 as 00000000 (again written in pen) and 1 as 11111111, then there will be a 1% chance that I will want to write 00000000, but instead write 11111111 (after all, you posit that the pre-encoding noise rate is a universal constant, independent of the code used). ”

      Huh, I see another potential problem with your argument, Aram. When you say “essentially the same”, don’t you implicitly assume that measurements commute with the evolution? When you accidentally prepare E(sqrt{0.99}|0> + sqrt{0.01}|1>) and I measure to detect the string of bits it represents, I will not get a string of 1s.

      • September 28, 2012 7:37 am

        No, my mistake, there is no problem of this kind with Aram’s argument above.

    • Jim Blair permalink
      October 4, 2012 9:23 am

      Serge,

      Like you, I thought Gil’s remark was strange. I imagined the analog wave/digital particle dichotomy was built into quantum mechanics.

      That model is a bit counterintuitive for most folks, and I am not sure how you jump over that conceptual barrier.

      On the other hand, it might be useful to escape the surly bonds of that dichotomy on a very abstract, theoretical level. The distinction may not be relevant.

      When you google the terms analog and digital, the definitions range from the mundane mechanical/electrical distinction to the more difficult continuum/discrete.

      One of the more interesting dichotomies goes something like this:

      Smoke signals and symbols are digital with discrete elements. Any computation using symbols is basically digital and requires a language. Analog computing does not require a language.

      At the risk of over simplification:

      Theoretical guys use deductive logic and manipulate symbols to “prove” a boson exists. Analog guys build supercolliders and manipulate things to demonstrate the boson exists.

      Einstein is a curious example of a theoretical guy who looked for analog counter examples to quantum mechanics.

      • Serge permalink
        October 4, 2012 10:09 am

        Maybe Gil meant that all discrete codes rely on the variation of a continuous magnitude, which is in turn discrete at the quantum level… For technical purposes the distinction digital/analog makes sense but it’s probably less useful in a discussion of quantum computing.

      • October 4, 2012 11:51 am

        Serge and Blair, what I meant was that I did not understand how to put the distinction between analog and digital on formal grounds. It is often claimed that analog computers cannot work as well as digital computers, and again, I did not understand what precisely this means and how to formalize it.

      • Jim Blair permalink
        October 8, 2012 8:54 am

        Thanks for the clarification, Gil.

        It might be amusing to try and develop a formal distinction between analog and digital computations by focusing on the reliability of each type of computing.

        John Sidles’ metaphor about the scales of justice reminded me of Archimedes’ “Eureka” moment. If the king is worried about being cheated out of gold by the crown maker, you can balance the weight of the crown with x amount of gold and then balance the weight of the water displaced by the crown with the amount of water displaced by the x amount of gold.

        An all analog exercise, no digits required.

        For the sake of a philosophical argument, it might be interesting to think of the other extreme and view a deductive proof as a digital calculation that assigns only one of two values to a theory; for example, 1 if it can be proven to be true, 0 if it is proven to be false.

        Each step of the proof can be verified; 1 if the step is valid, 0 if it is not. A digital exercise, no analog computing required.

        View Heisenberg’s Uncertainty Principle as a possible limiting factor on pure analog computing.

        View Gödel’s Incompleteness Theorem as a possible limiting factor on pure digital computing.

        Do the Red Queen thing and think about some of the possibilities and impossibilities before breakfast or the second cup of coffee, whichever comes first.

  17. September 26, 2012 8:05 pm

    Maybe it can be useful to clearly describe the various (related) questions that we discuss in the post itself and the comments.

    1) The classical error correction test.

    Aram devoted his first post to this issue that we both regard as important. In this post, I gave my response to this point.

    2) Conjecture 1.

    Aram gave a new detailed mathematical formulation of my conjecture and explained how Conjecture 1 fails even for the repetition code. I explained that Aram’s formulation is incomplete and that his interpretation of the conjecture is incorrect, and offered a formal version of my own.

    3) My critique on Preskill’s model: the distribution of the number of errors.

    I expect that the kind of weak correlations in Preskill’s recent model (as in earlier ones) imply a Gaussian distribution, and square-root-of-n fluctuations, for the number of errors. This suggests, in my opinion, that Preskill’s correlation assumptions are not realistic, regardless of the issue of quantum fault tolerance.

    4) How to describe the repetition process required for classical robust information to emerge in a quantum noisy world.

    I expect that Rep-C is a better description than Rep-B, and Aram disagrees.

    5) The microscopic description of a single bit in digital computers.

    I guess that square-root fluctuations in the number of up-spins are not realistic even as a description of digital memory. (Here we are talking about the microscopic description of a single bit.) Aram and Robert disagree.

    • October 3, 2012 11:11 am

      I forgot

      6) Classical non-FT evolutions

      How can we define and study (if at all) classical evolutions that do not exhibit classical FT, and how can we measure the computational power of a specific class of classical evolutions?
      In particular, do 2-dimensional Navier-Stokes evolutions support classical FT? Can such evolutions be approximated (at all scales) by bounded-depth (probabilistic) circuits?

    • John Sidles permalink
      October 3, 2012 1:35 pm

      Gil, as a further addendum, please let me commend the recent quantum research by Harvard’s Alán Aspuru-Guzik and colleagues, which (IMHO) substantially extends your classification of FTQC-relevant research issues.

      In increasing order of practical specificity, three recent works (all open-source) by this group are:


      Computational complexity in electronic structure (arXiv:1208.3334, 2012)  A readable summary of standard computational complexity theory, as viewed through the lens of quantum chemistry.

      Quantum computing without wavefunctions: time-dependent density functional theory for universal quantum computation (Scientific Reports, 2012)  This article presents a novel view of quantum computation as Kohn-Sham dynamics.

      Finding low-energy conformations of lattice protein models by quantum annealing (Scientific Reports, 2012)  This article applies the above computational framework to practical problems in protein folding … using D-Wave computers! 🙂

      An appealing aspect of this research (IMHO) is that the same algorithms that are proving to be heuristically efficient also are proving to be geometrically natural. Very plausibly, Nature is thereby sending a message to us … but what is that message? ❓

      It was disappointing (to me) that (apparently?) little or none of this groundbreaking work was featured at this year’s QIS Workshop in Computer and Natural Sciences: 2012 … almost certainly this lack will be remedied in years to come, as these physical and mathematical ideas gain traction.

      And in particular, this work contains ideas that (as it seems to me) are profoundly relevant to issues that are at the beating heart of the Kalai/Harrow debate.

  18. October 3, 2012 6:07 pm

    My first blog discussion about quantum error correction and QC skepticism was on The Quantum Pontiff in 2006. This was my first (belated) encounter with blogs in general, and it came about a year after I got actively interested in quantum fault-tolerance. I was surprised to learn that Aram started his blog debates with QC skeptics even earlier! On his own blog at the time, “Scrofulous” (what does it mean??), Aram criticized a 2004 paper by Robert Alicki in a post entitled “Quantum error correction fails only when you don’t use it.”

  19. Gil Kalai permalink
    October 4, 2012 2:15 pm

    Regarding this comment:

    5) The microscopic description of a single bit in digital computers.

    I guess that square-root fluctuation for the number of up-spins is not realistic even for description of digital memory. (Here we are talking about the microscopic description of a single bit.) Aram and Robert disagree.

    Here, I must admit that I am quite curious about how things go, and there are reasons to believe that the outcome could go either way. The description that Aram and I discussed was of a logical bit in a classical memory (a hard drive, say) which is based on many electrons, each having an up or down spin. Note that this is different from the question raised in item 4 about classical information emerging from a noisy quantum world, in that the electrons themselves already represent probabilistic classical information. (I.e., we expect that every electron is represented by spin up or spin down but not by a noisy superposition of the two.) So this is a scenario where we can believe more easily that a normal distribution and square-root fluctuations are the rule.

    Nevertheless, I wonder if we cannot expect additional fluctuations which are proportional (with a small constant) to the number of electrons. Note that robust logical classical bits remain possible even under such fluctuations.

  20. February 8, 2014 11:53 am

    Terry Tao posted a very intriguing post on the Navier-Stokes equation, based on a recently uploaded paper Finite time blowup for an averaged three-dimensional Navier-Stokes equation.

    The paper proves a finite-time blowup for an averaged Navier-Stokes equation. These equations come quite close in various aspects to the NS equation itself. The proof of the finite-time blowup theorem uses a computational-looking gadget, and also some form of fault tolerance. A plan for implementing similar ideas for the NS equation itself is mentioned.

    • John Sidles permalink
      February 8, 2014 5:49 pm

      Gil, please let me commend to Gödel’s Lost Letter readers your own (wonderful) observations regarding the (many) parallels between Navier-Stokes dynamics and quantum dynamics, which are posted on your website Combinatorics and More as your essay Navier-Stokes Fluid Computers.

      I agree that Terry Tao’s findings are mathematically rich, scientifically interesting … and even (in the long run) may turn out to be of considerable practical importance to engineers and medical researchers (like me). Plus they are fun!

      Aside  Students (especially) who wish to learn about dynamical singularities may enjoy Donald Saari’s and Zhihong “Jeff” Xia’s lively and accessible introduction to this topic, which is available free-as-in-freedom from the Notices of the AMS as the article “Off to Infinity in Finite Time” (1995).

    • February 9, 2014 12:05 am

      Thanks, John. One of the motivations for some of my comments/questions above about the computational power of various classes of evolutions is that the principle of “no fault-tolerance” can be of importance also in the classical case. (But perhaps it will be harder to put it on formal grounds in the classical case.) Such a principle has a thermodynamical flavour.

      • John Sidles permalink
        February 9, 2014 12:28 am

        Gil, please let me agree with you that remarkably many open problems in quantum mathematics, science, engineering — and medicine especially! — have (in your excellent phrase and spelling) “A thermodynamical flavour.”

        Obviously a great many clinical challenges in medicine — regenerative healing especially — are contingent upon a local entropic decrease. And it’s regrettable that practical thermodynamical challenges are not commonly phrased in language congenial to mathematicians. As Vladimir Arnol’d memorably said

        Every mathematician knows that it is impossible to understand any elementary course in thermodynamics.

        It’s regrettable too that a very considerable portion of 20th century work by great scientists (Dirac, von Neumann, von Kármán, Onsager, and even Feynman too) remains unpublished and/or classified and/or held as trade secrets.

        For me as a medical researcher, a very great practical advantage of the Kalai/Harrow quantum debate — an advantage too of the class of quantum challenges that includes FTQC and BosonSampling — is that state-secrets and/or trade-secrets do not dominate the mathematical and scientific discourse.

        In this regard the 21st century is (so far) proving to be wiser than the 20th. Hopefully this will continue!

  21. January 13, 2015 3:18 pm

    Remarkable recent relevant progress by the Martinis group: certain encoded states based on 9 physical qubits are an order of magnitude (a factor of 8.4, to be precise) more stable than the “raw” qubits used for creating them!!

    Here is a link to the paper: State preservation by repetitive error detection in a superconducting quantum circuit.

    (While we were debating, I mentioned earlier relevant progress toward quantum error correction by the Yale group: http://rjlipton.wordpress.com/2012/03/05/the-quantum-super-pac/#comment-19685 .)

    • February 11, 2015 12:00 pm

      Regarding the 9-qubit achievement of the Martinis group: while quantum error correction was used, the information that was being protected was classical, so in a sense it achieved Aram’s proposal in the beginning of the post but did not provide a counterexample to my Conjecture 1, as explained later in the post. There is a roadmap of seven milestones described in a recent Science paper by Schoelkopf and Devoret (one that goes back to a paper by David DiVincenzo); on their scale, the milestone crossed should be considered the “3.5th.” Crossing the 4th milestone will refute my Conjecture 1. This matter is discussed here: http://www.scottaaronson.com/blog/?p=2155

