Factoring Is In BQP
Further discussions of the proof that factoring is in BQP
Scott Aaronson is a, if not the, world expert on quantum computation. He writes, as you all know, one of the best blogs on quantum and many more things.
Today Ken and I want to give a short summary of the last discussion: is factoring really in BQP? Yes, it is. But I want to explain some more of the issues behind the discussion.
Scott’s Comment
Here is Scott’s comment I received late last night—I have fixed some minor things to make it typeset in LaTeX.
Scott’s comment can be redacted to its gist as follows:
“[You feel that as a matter of pedagogy, expositions of Shor’s proof should directly show] that some decision problem related to factoring has polynomial-size quantum circuits, rather than giving poly-size circuits for the factoring search problem and then appealing to well-known closure properties of BQP (as shown, for example, in the BBBV paper) to get that the decision version is also in BQP.
I strongly disagree.
I think a fairer standard is that something is `proved’ when the gaps are small enough that any expert (or enterprising student), working in good faith, can fill them without difficulty. And that standard is more than met by Shor’s algorithm, as evidenced (for example) by the thread above.”
Why We Made The Post
I am not for being pedantic nor for excessive formalism. I have written papers arguing against that for years—see my paper on social processes, with Rich DeMillo and Alan Perlis. The reason for the post was to make a couple of points: (I) that factoring is in BQP is a very important theorem; (II) as such it should have a proof that is on the web and available to all. The “A” answer that no one made was: “here is a URL to the proof in X’s textbook/paper/wiki etc.” The answer that we got was: “yes, the simple argument that is sometimes made is not the real one, but here is the correct one.”
I think that there is a big difference between being pedantic and simply wanting to see a sketch of what is arguably one of the landmark results in theory. I would humbly suggest that someone write down a short sketch of the proof so all can see it. Scott is the expert of experts in quantum computing; I just wanted to see how this fundamental result is proved. So yes, it is true. I hope you agree that having a good pointer for this critical result is not being pedantic.
We meant no disrespect to Scott or any of the experts in quantum computing, and we are grateful that several including Michael Nielsen, co-author of the signature quantum-computing textbook, provided answers. We do agree the comments together have answered the issue, and we will summarize the most important points here. However, I humbly suggest that it be reflected in prominent current materials so that memory in the field is kept fresh rather than going back 15–17 years (not “moves”) as my introduction had alluded.
The Points
We perceived gaps in both directions of the standard polynomial equivalence between search problems and decision problems. Going from decision to search, the issue was, can we decide whether N is in the relevant factoring language (say, whether N has a prime factor below a given bound) without using Shor to calculate the prime factors of N in a way that overtly requires measuring many qubits? The answer is yes, and the idea was given with some examples in comments by Nielsen beginning here. We summarize as follows:
Think first of a classical algorithm A that uses Shor’s factoring algorithm as an oracle. A runs and in the end produces one output bit b as its answer. Computations by A submit queries to the oracle and receive multiple bits back from it. We may regard these bits as having come directly from a large-scale measurement. The point is that we can now transform A itself into a quantum algorithm Q that keeps those bits in superposition, without measuring. The output bit b of A then becomes a quantum bit in Q. This is all we need to measure, so Q meets the formal definition of a BQP algorithm that we were targeting. This works seamlessly because the classical body of A can be coded with reversible gates that apply equally well in the quantum circuit for Q, because everything is linear. This also shows that a classical algorithm with a BQP oracle folds into a single BQP algorithm, i.e., BPP^BQP ⊆ BQP.
In brief, the point is that if all you desire is a 1-bit answer, then the “Principle of Deferred Measurement” extends to say that one can avoid the postponed measurements altogether, except for one bit.
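To make this concrete, here is a minimal numerical sketch (our own toy example, not from any of the comments; plain numpy, two qubits) of the deferred-measurement trick: a mid-circuit measurement followed by a classically controlled gate gives the same statistics for the one answer bit as replacing that measurement by a controlled gate and measuring nothing but the answer bit at the end.

```python
import numpy as np

# Single-qubit gates
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Qubit 0 = "ancilla", qubit 1 = "answer" bit.
# Circuit A (measure early): H on qubit 0, measure it, and apply X to
# qubit 1 if the outcome is 1.
# Circuit B (defer): replace measurement + classically controlled X by a
# CNOT from qubit 0 to qubit 1, and never measure qubit 0 at all.

psi0 = np.zeros(4); psi0[0] = 1.0              # |00>

# --- Circuit A: explicit mid-circuit measurement ---
psi = kron(H, I) @ psi0
p_answer_one_A = 0.0
for outcome in (0, 1):
    P = kron(np.diag([1.0 - outcome, float(outcome)]), I)   # project qubit 0
    branch = P @ psi
    p_branch = np.vdot(branch, branch).real
    if p_branch == 0:
        continue
    branch = branch / np.sqrt(p_branch)
    if outcome == 1:                           # classically controlled X
        branch = kron(I, X) @ branch
    p_one = np.vdot(branch, kron(I, np.diag([0.0, 1.0])) @ branch).real
    p_answer_one_A += p_branch * p_one

# --- Circuit B: deferred measurement via CNOT ---
CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], float)
psi = CNOT @ (kron(H, I) @ psi0)
p_answer_one_B = np.vdot(psi, kron(I, np.diag([0.0, 1.0])) @ psi).real

print(p_answer_one_A, p_answer_one_B)          # both 0.5 -- same statistics
```

The same bookkeeping scales up: every intermediate measurement in the classical wrapper around Shor’s subroutine can be pushed past the remaining reversible classical logic, leaving just the one answer qubit to be measured.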
The other direction, reducing search to decision, still holds more than pedagogic interest for us.
A Mathematical Problem?
Before all the answers, I had started to think about the following question: Suppose that I had a quantum computer that could just execute the quantum part of Shor’s algorithm. But owing to some physical limitation, at the last step I could only measure one qubit and therefore get one bit of information. Can a classical algorithm use just this quantum subroutine to still find the factors? More precisely, at the end there is a large register, all of whose bits Shor’s algorithm measures. Suppose now you can choose which bit to measure, but only that one. Can you still use this subroutine to factor?
The reason I thought it might be interesting is this: perhaps it is easier to build such a device, perhaps not. In any event trying to factor with this limited subroutine strikes me as an interesting challenge. Since Shor’s algorithm puts different random values in the register each time it is run, it is not obvious to me that this can be made to work.
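For readers who want to play with this question, here is a toy classical stand-in (entirely our own sketch; the function name and interface are hypothetical). It mimics “run the quantum part of Shor on N, but read out only one chosen bit of the final register,” under the simplifying assumption that the order r of a mod N divides Q exactly (true whenever r is a power of 2), in which case the measured value is y = kQ/r with k uniform. The order is computed classically here, which is of course cheating—that is exactly the quantum computer’s job.

```python
import math, random

def one_bit_shor_sample(N, Q, j, a=None):
    """Toy stand-in: one run of Shor's quantum subroutine on N with a first
    register holding Q = 2**t values, where only bit j of the measured
    register value may be read out."""
    if a is None:
        a = random.choice([x for x in range(2, N) if math.gcd(x, N) == 1])
    r, x = 1, a % N
    while x != 1:            # classically find the multiplicative order of a
        x = (x * a) % N
        r += 1
    k = random.randrange(r)  # a fresh run gives a fresh random k each time
    y = (k * Q) // r         # idealized measurement outcome (r divides Q)
    return (y >> j) & 1      # the single bit we are allowed to measure

# N = 15, Q = 256, a = 7 has order 4: each call is an independent run, and we
# may choose which single bit of the 8-bit register to look at.
print([one_bit_shor_sample(15, 256, j, a=7) for j in range(8)])
```

The open question is whether some classical post-processing of many such single-bit samples, with the measured bit chosen adaptively, can still recover r and hence the factors.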
Why Proofs?
Note the plural on proofs. A proof does much more than check off that something is true. This holds whether the proof is barely outlined, sketched, or even stated in quite precise detail. Proofs do more than prove.
Michael Atiyah, a winner of the Fields Medal in 1966, has made a number of interesting comments on the role of proof. One of his great results is the so-called Atiyah–Singer Index Theorem. Atiyah says:
I think it is said that Gauss had ten different proofs for the law of quadratic reciprocity. Any good theorem should have several proofs, the more the better. For two reasons: usually, different proofs have different strengths and weaknesses, and they generalize in different directions; they are not just repetitions of each other. And that is certainly the case with the proofs that we came up with. There are different reasons for the proofs, they have different histories and backgrounds. Some of them are good for this application, some are good for that application. They all shed light on the area. If you cannot look at a problem from different directions, it is probably not very interesting; the more perspectives, the better!
We agree. We believe that the fact that factoring is in BQP is super important, and deserves to have as many different proofs as possible.
Open Problems
We guess the first open problem today is “was it pedantic to ask for a full proof that factoring is in BQP?”
But to ask another: In what sampling situations where we compute a predicate R of strings g_1, …, g_m output by a black-box random generator G, can we still compute R whp. in poly(m) time given only one (selected) bit of each answer g_i?



I can add some further relevant facts: A regular 17-gon is constructible with straightedge and compass. Any planar map can be 4-colored. No sufficiently rich axiomatized and consistent formal system can prove its own consistency. For n > 2, x^n + y^n = z^n is not solvable in positive integers. Nothing sporadic is bigger than the Monster. Who knew? 🙂
A somewhat related result I know of is http://arxiv.org/abs/1005.1407 which shows that, for the restricted set of quantum circuits with commuting gates, sampling a single output bit (or up to log(n) of them) can be simulated classically efficiently, while classically simulating the sampling of all n output bits would collapse PH.
Please let me say that, from a practical engineering point-of-view, this whole discussion has been terrific, for reasons that reflect an accelerating change in the mathematical methods that systems engineers use to simulate complex systems. Pretty much the same considerations apply whether classical or quantum dynamical systems are being simulated, and so I’ll put the [quantum] aspects in square brackets. Heck, for that matter I’ll put {informatic} aspects (like gene sequences or textual ngrams) in curly brackets … and I’ll ruthlessly over-simplify and generalize.
Nowadays pretty much any large research group devotes most of its computational resources to computing terabytes of (classical) [quantum] {informatic} simulated trajectories—essentially these are sampling simulations. Commonly the simulated trajectories are shared, but seldom are they published in toto … they’re just too big. What gets published instead are various ensemble averages, compressed representations, and trajectory end points (like ground states). A good example of how this works in (classical) [quantum] {informatic} practice is the Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction, commonly known as CASP … almost all top-ranking CASP predictions arise from hybridized (classical) [quantum] {informatic} trajectory simulations.
Obviously we would like to understand CASP from a complexity-theoretic point of view … for this we need models of CASP-style sampling simulations that are realistic enough to be interesting, yet simplified enough to make theorem-proving feasible. This is relevant to the Gödel’s Lost Letter and P=NP discussion in at least two respects. First, these trajectory simulations generically evade the “curse of dimensionality” by pulling back onto lower-dimension state-spaces, so that the geometry of these reduced-dimension state-spaces becomes a central mathematical consideration. Ofttimes this pullback is guided by ad hoc (classical) [quantum] {informatic} ansatzes, but in the case of quantum dynamics we have that powerful law of nature called in Nielsen and Chuang’s textbook (and in the previous thread) the “Principles of Deferred and Implicit Measurement”.
The immense power associated to the “Principles of Deferred and Implicit Measurement” is why a first-principles theoretical analysis of the computational complexity of sampling simulations has substantially greater mathematical naturality in the quantum domain than in the classical or informatic domains … and why it is very desirable for students to acquire a solid grasp of the Principles from many points of view (algebraic, geometric, discrete, continuous, informatic, stochastic, etc.) … not only because the Principles can “sit in our brains” in very many wonderful mathematical ways, but also because these same Principles are embodied in today’s simulation codes in equally many wonderful computational ways.
The broader point is that thanks largely to our increasing grasp of these Principles, for many decades quantum simulation codes have been gaining in power at a faster-than-Moore’s Law pace. These codes now have acquired immense economic and scientific importance … and there is every reason to expect these more-than-Moore gains to continue (or even accelerate) in coming years. Good! 🙂
The preceding amounts to an eminently practical reason to ask for rigor in defining membership in BQP, that reason being, to illuminate our understanding of the quantum sampling Principles. But I would like to point out a second practical reason, which is associated to the “A” in “CASP”, that is, to the assessment of sampling simulations. As background, for the sake of proving theorems, it often is tempting to introduce unrealizable computational elements (oracles and post-selection, for example). Sometimes this is the only way to make progress, but for engineering purposes, theorems have broader applicability when they do not postulate unrealizable computational elements. Similarly, when a theorem refers to “exact” or even “approximate” sampling simulations, it is desirable to state explicitly how the quality of the sampling is to be assessed, and in particular to state explicitly whether that assessment would require infeasibly large and/or unphysically oracular computational resources.
Especially for the benefit of the many students who are struggling to grasp these ideas—because a lot of simulationist jobs are being created nowadays—it is very desirable that these fundamental concepts associated to the quantum “Principles of Deferred and Implicit Measurement”, and equally, to fundamental principles of critical assessment of trajectory samples, are not routinely regarded as “well-known to the well-knowers” (in Gil’s happy phrase), but rather are elucidated as patiently, clearly, broadly, and kindly as we know how. For which my thanks and appreciation are extended to this wonderful weblog, as usual! 🙂
I don’t think it was pedantic to ask for a full proof. But it seems a little strict to define BQP in a way that 1) only measures one qubit and 2) is limited to decision problems.
Was there ever any uncertainty on your part that Factoring as a function problem, i.e., producing the prime factorization of the input N, can be done with high probability by a quantum computer in a polynomial amount of time? Keeping in mind, of course, that quantum computers, being a superset of classical computers and having access to random bits, can also carry out classical probabilistic operations.
So I think it’s nice that we can solve decision versions of factoring, like “Is there a factor less than x”, with a quantum algorithm that only measures one qubit. But I’m not sure how important it is. It seems like a nice technical fact, but focusing on it seems like a symptom of using an overly strict definition of BQP — and one which isn’t really consonant with how we think about quantum computers, or how we will use them if we ever get our hands on one.
Cris
Well, that’s how P, BPP, NP, etc. are defined. People are just comfortable enough with the difference between P and Function-P that they don’t worry so much. It’s only quantum where people are more afraid to trust their intuition, since the results there are so often counter-intuitive.
The question was far from idle. We all “knew” what Linear Programming means, yet it turned out that once we had to fill in the details properly (i.e. obtaining an approximation of the optimum to a given finite precision) the difficulty fell apart and the problem could (relatively speaking) easily be done.
Examples like those are why we need to cross our t’s and dot our i’s at least once for every important problem. To be sure, 99 times out of 100 things check out and we move on. In the remaining case, sometimes the proof falls; sometimes it holds, but an important insight that can be used elsewhere pops out.
Dick has made a good point that no such source existed for Factoring in BQP, so he was right to ask.
It is not exactly the same. Although Shor’s (amazing) paper only showed that quantum computers can factor, and didn’t technically show that Factoring is in BQP, the proof that Factoring is in BQP is in the standard textbook of Nielsen and Chuang, from 2000.
Anon 8:36pm, please compare to the comments by Michael Nielsen himself in the comment thread of the previous post beginning here.
I don’t understand much about quantum computing, but the more proofs the better. And it is even better when important theorems are proved with simpler (or clearer) arguments.
The discussion about proofs reminds me of two great previous articles of this very same blog that I have read recently. I link them below for the casual reader or for those who want to reread:
http://rjlipton.wordpress.com/2010/08/18/proofs-proofs-who-needs-proofs/
(“Proofs, Proofs, Who Needs Proofs?”)
http://rjlipton.wordpress.com/2009/04/24/the-four-color-theorem/
(“The Four Color Theorem”. A citation: “proofs were not to check that something is true. Rather the role of proof is to give insight into the problem at hand.”)
The second open problem can be clarified to read, “…can we still compute R whp. in m’ = poly(m) samples and time given only one (selected) bit of each answer g_i?” I.e. you can do more samples than the original algorithm which received the whole strings.
Here is a closely related problem, which is easy to state but (for me at least) not-so-easy to answer, that (potentially) has both intrinsic mathematical interest and practical engineering interest: Does BQP^P = BQP? That is, is P low for BQP?
Here we have in mind a logic-gate instantiation of BQP, that is, a polynomial-time uniform family of quantum circuits (that is, a Nielsen-and-Chuang-type quantum computer), whereas for P we have in mind an ordinary single-tape (classical) Turing machine. So in practical terms, the proposition BQP^P =?= BQP is about the feasibility of compiling algorithms in P into polynomially-many reversible logic gates.
These problems arise commonly in engineering practice, in the guise of compiling a procedural algorithm (written in C and instantiated in a thread, for example) into a circuit diagram (written in Verilog and instantiated on an FPGA, for example). And in practice, they are not so easy to solve … particularly when there is no formal proof that the procedural algorithm is in P, but rather only the empirical (i.e., oracular) observation that it is. In practical engineering as in computational complexity theory, procedural algorithms are in general not accompanied by documentation of how they work … perhaps it is infeasible even in principle to describe how all the algorithms in P work … and obviously this makes it harder to compile algorithms into gates.
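As a toy illustration of what “compiling into reversible gates” means here (a generic sketch of the standard Toffoli/Bennett construction, not anything specific to the comment above): an irreversible classical gate such as AND becomes reversible once it writes its result into a fresh ancilla bit, which is exactly the form that carries over into a quantum circuit.

```python
def toffoli(a, b, c):
    # reversible controlled-controlled-NOT on classical bits
    return a, b, c ^ (a & b)

def reversible_and(a, b):
    # Bennett's trick: compute a AND b into an ancilla initialized to 0,
    # leaving the inputs untouched so the step can later be uncomputed
    a, b, anc = toffoli(a, b, 0)
    return a, b, anc

# the reversible embedding reproduces the ordinary AND on all inputs
for a in (0, 1):
    for b in (0, 1):
        assert reversible_and(a, b) == (a, b, a & b)
```

Compiling a procedure in P this way costs only a polynomial blow-up in gates and ancilla bits, which is the routine half of why one expects BQP^P = BQP; keeping track of uniformity and runtime bounds is where the subtleties discussed below come in.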
The point is that the proof machinery for showing that BQP^P = BQP (or does it?) is interesting from multiple points of view … some practical, some fundamental … and thus, the more proofs we have, and the clearer these proofs are, the better-off we all are. As Saunders Mac Lane has written:
It is this “struggle for clarity” that I most appreciate and admire about Gödel’s Lost Letter and P=NP, because Dick and Ken conduct this struggle openly, patiently, broadly, and kindly. Surely this is the highest academic tradition! 🙂
There is another quotation, from Nigel Hitchin’s AMS article What is … a Gerbe? (2002) that from a purely technical point-of-view is perhaps even more applicable:
As with algebraic topology, so too with quantum information theory. What “prevents us from doing something” is of course the noise and imprecision that prevent us from realizing (perhaps forever?) all of the wonderful quantum algorithms that this thread has been discussing. The “structure that allows us to see” is of course the previously-mentioned “Quantum Principles of Deferred and Implicit Measurement” … which of course appear in the literature under dozens of other names and guises. And what mathematicians call “the struggle for clarity” is for engineers the professional struggle to conceive the designs, reduce the risks, and speed the pace, for the great enterprises that are required of the 21st century … because from a systems engineering point-of-view, there are multiple enterprise opportunities arising in quantum information theory, whose aggregate scope is more vast than simple quantum computing.
Wonderful weblogs like Gödel’s Lost Letter and P=NP show us how to conduct “the struggle for clarity” openly, patiently, broadly, and kindly … they show us how to “breathe life into obstructions” in Hitchin’s wonderful phrase that captures the very essence of both mathematical abstraction and systems engineering … for which ongoing effort, Dick and Ken greatly deserve all of our appreciation and thanks. 🙂
In response to Scott Aaronson’s very sensible suggestion, I have posted the question “Does BQP^P = BQP ? … and what proof machinery is available?” on MathOverflow, and on Scott’s own weblog Shtetl Optimized, I have invited responses from Scott’s students in his just-completed graduate course “Quantum Complexity Theory.”
The point is that this is exactly the kind of question that theory students (in my opinion) can benefit from tackling; moreover Scott is absolutely right (in my opinion) that sites like MathOverflow provide a structured, searchable, curated, archived environment within which good answers can be preserved.
After one hour, this particular question has 40 views … 1 upvote … but (so far) zero answers. So perhaps Dick and Ken are right … this class of question is not so easy to answer … who wants to be the first?
Yes, BQP^P = BQP. I posted an answer on mathoverflow.
For basically all of these questions, BQP acts like BPP. A nice paper about the subtleties of when this breaks down is “One complexity theorist’s view of quantum computing” by Lance Fortnow:
http://www.cs.uchicago.edu/~fortnow/papers/quantview.pdf
Aram, thank you for your post on MathOverflow … which needless to say, it will take awhile to digest! … it is interesting that you and I have independently recommended Lance Fortnow’s fine article … it was only in systematically working through these various computational subtleties, which (for me) are mainly associated to sampling and validation of large-scale quantum trajectory simulations, that the thoughtful craftsmanship and comprehensive scope of Lance’s analysis became apparent … no doubt many other folks appreciated this sooner than me … so thank you, Lance.
Is it fair to say that part of the complexity problem is not only to decide a given problem such as “factoring is in BQP” but also how to raise the level of complexity of a simplest problem completely impossible to solve?
S. H. Banihashemi , upon parsing your question carefully, for systems engineers the answer is “yes, definitely.”
For example, noisy and/or thermal systems (both classical and quantum) were once viewed as generically infeasible to simulate … now, not so much.
Instead, we believe nowadays, that provided a dynamical system (whether classical or quantum, discrete or continuous) is sufficiently noisy that it can reasonably be assumed that its dynamics are not approximating a Turing-complete computation, then there is (in general) no fundamental obstruction to simulating that system with polynomial computational resources.
That is how an engineer parses your question … and the answer is responsible for a *major* transformation in 21st century engineering capabilities … hence the burgeoning interest of engineers (of every variety) in computational complexity.
It seems to me that you are moving in the reverse direction. I rather meant that when a major math problem, such as those in the first comment of Ken Regan, is solved, a kind of “automatic” machinery arises to build upon it. Everybody assumes that there is a proof and the problem itself is solved. This in itself might cause an obstruction to understanding the computational complexities further. A huge amount of complex computation is collapsed by overcoming the problem, and no one wants to go back to scrutinize the proofs again. It is a relief that the problem is solved. You might say that I am oversimplifying the problem!
Banihashemi, it is true that errors have been and no doubt will be found, even in major proofs. And yet, recent advances in automated proof-checking show us that the overall proof-checking system works amazingly well.
So it is very unlikely that we will ever discover that quantum factoring is not in BQP, or more generally, that BQP^P is not the same complexity class as BQP.
However, it is worthwhile to work through the details, for a reason that has been mentioned in several Gödel’s Lost Letter and P=NP posts. Dick and Ken have called it the “Flaming Arrow Reason.” A “flaming arrow” is a piece of proof machinery that causes the reader to exclaim “Are they allowed to do that?”
The machinery of complexity theory contains many flaming-arrow methods … this is why its conclusions often are received with skepticism. In particular, when we work through the (many!) steps in the reduction of BQP^P to BQP, we find that some steps in the reduction require EXP resources to carry through, or are ridiculously wasteful of gates, or (most unrealistically of all, in engineering eyes) blithely assume that no independent verification or validation of the individual steps is needed.
By the rules of complexity theory, these “flaming arrows” are permitted … and once we accept that these arrows are admissible elements of the game, then we are reasonably assured—as assured as we ever can be, of any branch of mathematics—that the main results of complexity theory are not only valid, but very beautiful.
I totally agree with you. Valid and beautiful are the main results of complexity theory. And surely the automated proof-checking system works amazingly well. And yet what I am trying to convey is this: suppose one day the P=NP problem is settled, like Fermat’s Last Theorem (FLT) was settled. To hand down and purport the solution of the problem into other serious newer problems is of a complexity that can involve both serious computer and human efforts, instead of classifying the problem on some old shelf. And until the original problem is made as easy as a commercialized commodity, the complexity of the new problem might be beyond our reach. Just imagine the Riemann Hypothesis.
On the other hand, when one views an ordinary math problem from the view point of say FLT proof (here I am to a large extent thinking wishfully) then one might see new insights that would not have been possible without it. Unfortunately this requires mastering FLT proof and few people in the world might be that well-versed there.
I have to read Lance’s and BBBV papers soon. Apparently, I don’t know a lot about complexity theory.
Of course I agree that we need careful definitions of complexity classes! But I am still confused about how this question got rolling. As Lance (and the Zookeeper) pointed out, we’ve known since 1997 that BQP^BQP = BQP, and this includes BPP^BQP.
So there’s no problem using Shor’s algorithm as part of a classical randomized algorithm, and we can solve the function problem Factoring with high probability in (quantum) polynomial time. We can then sort the factors and see if there is a small one, or one ending in 7, or whatever, thus solving decision versions of Factoring as well.
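To make that post-processing step concrete, here is a minimal sketch (our own; `factor_oracle` is a placeholder for whatever routine, quantum or classical, returns the prime factorization with high probability): once you have the factors, any such decision version is trivial classical work.

```python
def has_factor_below(N, x, factor_oracle):
    """Decision version of factoring: does N have a nontrivial factor < x?
    `factor_oracle` stands in for a routine returning N's prime factors."""
    factors = sorted(factor_oracle(N))
    return any(p < x for p in factors if 1 < p < N)

# purely classical stand-in for the oracle, fine for small N
def trial_division(N):
    factors, d = [], 2
    while d * d <= N:
        while N % d == 0:
            factors.append(d)
            N //= d
        d += 1
    if N > 1:
        factors.append(N)
    return factors

print(has_factor_below(15, 4, trial_division))   # True: 3 divides 15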
The original question seems to have stemmed from whether we can reduce the quantum part of the entire algorithm to measuring a single qubit. Again, I think it’s nice that we can. But this seems more to me like showing that
“measuring one qubit BQP” = BQP.
Perhaps my confusion stems from the fact that no model of quantum computation currently in use in the community is limited to measuring a single qubit… and I don’t see this limitation in e.g. the definition of BQP in the Complexity Zoo.
What am I missing?
– Cris
I would like to join Aram Harrow in commending Lance Fortnow’s One Complexity Theorist’s View of Quantum Computing as a good source of answers to the questions that are associated to “Is factoring in BQP?”
It is natural to ask what reductions in classical complexity theory carry over to quantum complexity theory, and also to ask, how the quantum and classical proof machineries differ. Very commonly, quantum proof machinery in particular has what Dick and Ken call a “flaming arrow” aspect … pieces of quantum proof machinery that cause the reader to exclaim “Are they allowed to do that?”
I’m not sure what particular “flaming arrows” were confusing to Dick and Ken … on Shtetl Optimized I have listed some “flaming arrows” that are confusing to engineers … and so perhaps one answer to the question you raise, namely “What am I missing?”, is simply “a thorough appreciation of other folks’ flaming arrows.”
That is why (IMHO) questions like Dick’s and Ken’s, and articles like Lance’s, and weblogs like Gödel’s Lost Letter, Shtetl Optimized, and Computational Complexity, are serving a very useful and natural purpose, in helping complexity theory (both classical and quantum) to find widespread appreciation and application.
Thanks, Cris. The issue at least on my end was that several sources, including (to all appearances) the Wikipedia article on BQP, do give the “measure-one-qubit” definition, which is what I think a beginning student analogizing from BPP might expect. Whereas I wasn’t sure that BQP^BQP = BQP worked with it—I had overlooked the BBBV paper while following Bernstein–Vazirani and a bunch of others. You can see this in my reaction to initial comments in the other post.
Is there a good paper focused on search-versus-decision in the quantum context?
My next “pedagogical” query is: this seems to obviate Levin’s objection about precision of measurement, no? OK, I’d already put in the first post, alongside Scott’s “Sure/Shor” paper, my understanding that Levin’s issue of precision translates into whether the quantum states involved—ostensibly those resulting from deferring the measurement—are really feasible.
There seem to be so many writeups of Shor’s algorithm on the Internet that
tabulating the differences (in emphasis if nothing else) among them might make a very satisfying pedagogical study. One thing that still baffles me after examining more than 12 such papers–the algorithm involves a quantity usually called Q which must be a power of 2, bigger than N squared, and smaller than twice N squared. (It’s a minor amusement that some sources seem to think you would have a choice of values with these properties!) This Q is used as the size of the first quantum register. When I calculate I conclude that it would take a register of size 256 qubits to factorize 15.
Then I check IBM’s NMR implementation of a quantum computer and find they were able to get away with only 3 qubits for this register! There’s some explanation of how they were able to shave off a qubit or two but the argument doesn’t support elimination of more than 200 qubits, at least not to the extent that I can see.
So that’s how I know that I don’t understand the algorithm yet, even though I’ve been trying hard. Luckily the literature does inform me that the nontrivial factors of 15 are 3 and 5, so at least I’m not losing sleep over it.
Okay, one more night’s sleep and my freshman mistake reveals itself in all its ugliness: the value Q which lies between N squared and twice N squared is not the number of qubits one solicits for the first register, but rather the number of values it needs to represent. So the actual size of the first register is log Q and I don’t need 256 qubits but only 8 qubits in that register to factorize 15. Whew.
IBM used only 3 but that’s now just 5 missing qubits I need to understand.
Now of course everyone who (unlike me) already understands Shor’s algorithm understood this trivial point already. In the spirit of improving pedagogy I went back to see how I might have slipped the tracks. To my shame, every single reference I re-checked appears to be utterly correct, precise, and unmistakable on this point. I cannot retrace my derailment. Would that learning were implemented with reversible gates!
In retrospect I might have done better to slog through a single paper rather than reading 12 in superposition.
Another night’s sleep and what seemed to me a mystery is solved! The mystery is
how IBM was able to use 3 qubits for the first quantum register when all the writeups seem to imply 8 qubits are needed.
Reading 12 papers in superposition actually pays off, as if I had chosen a paper at random I might not have picked “Shor’s Algorithm for Factoring Large Integers” by Lavor, Manssur, and Portugal.
They inform me that whereas t is generally chosen so that 2 raised to the t is between N squared and twice N squared, as an exception, “if the order r is a power of 2, then it is enough to take t=n” where n here is log N. That gets us down from 8 qubits to only 4 qubits in the first register. And LM&P observe that the authors of the IBM paper used one fewer qubit than you would expect as they were “bypassing part of the algorithm”. Sure, the IBM folks said as much in their paper. Done!
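For anyone retracing the arithmetic in the last few comments, here is a tiny sketch that just redoes the numbers for N = 15 (variable names t, n, Q, r follow the comments above):

```python
import math

N = 15
n = math.ceil(math.log2(N))            # bits needed to represent N: 4
# general recipe: Q = 2**t is the power of 2 with N^2 <= Q < 2*N^2
t = math.ceil(math.log2(N * N))        # 8, so Q = 256 values -- not 256 qubits
Q = 2 ** t
assert N * N <= Q < 2 * N * N

# special case noted by Lavor, Manssur, and Portugal: if the order r of a
# mod N is a power of 2, then t = n already suffices
a, r, x = 7, 1, 7
while x != 1:
    x = (x * a) % N
    r += 1
print(n, t, Q, r)                      # 4 8 256 4
```

So the general recipe gives an 8-qubit first register for N = 15, while the order r = 4 being a power of 2 is what lets t drop to n = 4, with IBM’s further saving coming from bypassing part of the algorithm, as noted above.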
Meanwhile I tried to share part of this with my seven year old son. I drew 3 rows of 5 columns of strokes and asked him to count them so I could demonstrate that 15 does factor nontrivially. He thought it sounded too much like third grade math (he’s in second grade) and refused. Oh well, he did read up on the differences between crocodiles and alligators and gavials and caimans.
“Scott Aaronson is a, if not the, world expert on quantum computation. ”
No way! The problem with Scott is that he has too much hair, as evinced by the photo at the beginning of this blog post. How can you trust someone with such a good head of hair? Now look at Peter Shor’s picture in the previous blog post. Now there’s a man you can trust.
Hilarious comment 😉
I’m not sure how many readers this thread still has, but Dick and Ken’s questions stimulated a question that I’ve just posted on MathOverflow, namely Do runtimes for P require EXP resources to upper-bound? … are concrete examples known? The question is posed such that several different types of answers are acceptable … so hopefully, some good answers will appear.
Posting the same question on TCS StackExchange is drawing a very lively response.
Although there is no consensus answer yet, already it is clear: (1) When Dick and Ken asked “Is Factoring Really In BQP? Really?” they posed a question that has some highly nontrivial aspects … for which they deserve our thanks. (2) When Scott Aaronson recommended MathOverflow and TCS StackExchange, he gave us good advice … for which Scott too deserves our thanks.
These questions have turned out to be nontrivial even for high-status users of TCS StackExchange and MathOverflow, and so when the dust settles, I’ll summarize their consensus answer(s) here too.
Here is the promised followup …
Scott’s thoughtful suggestion to post on MathOverflow and TCS StackExchange has catalyzed a whole string of questions and answers, culminating in Emanuele Viola’s highly-rated theorem/answer “Are runtime bounds in P decidable? (answer: no)”. Moreover, Viola’s theorem suggests many more questions and answers that will be of great interest to mathematicians, scientists, *and* engineers.
So thank you, Scott … and thank you also, Gödel’s Lost Letter!
I got what you mean, saved to bookmarks, very nice web site.
Dear issuu.com:
Thanks for the kind comment. We try. Also happy to see you reading older posts.
Best
Dick