
A Little Noise Makes Quantum Factoring Fail

June 14, 2023

Jin-Yi Cai is one of the top theory experts in the world. Both Ken and I have had the pleasure to work with him and interact with him over the years. We have discussed some of his previous work here and here.


Today we will talk about his new work on quantum computing.

Quantum Factoring

Peter Shor invented the quantum algorithm for finding the prime factors of an integer in 1994.



This is one of the great algorithms of all time. It shows at least in theory that quantum algorithms can be much more efficient than classical algorithms. The algorithm shows that the integer factorization problem can be efficiently solved on an idealized quantum computer and is consequently in the complexity class {\mathsf{BQP}}. This is almost exponentially faster than the most efficient known classical factoring algorithm.

Quantum Factoring Possible?

Is it practically feasible to use Shor’s factoring method to break RSA? This leads to a major question:

Can cryptography survive quantum methods?

A paper by Daniel Bernstein, Nadia Heninger, Paul Lou, and Luke Valenta titled “Post-Quantum RSA” is a key one. They consider further systems including elliptic curve cryptography (ECC) and say:

The conventional wisdom among researchers in post-quantum cryptography is that quantum computers will kill RSA and ECC but will not kill hash-based cryptography, code-based cryptography, lattice-based cryptography, or multivariate-quadratic-equations cryptography.

Shor’s algorithm easily breaks RSA as used on the Internet today. The question is whether RSA parameters can be adjusted so that all known quantum attack algorithms are infeasible while encryption and decryption remain feasible.

See also this. A 2019 paper by Craig Gidney and Martin Ekerå argues that implementations of Shor on 2,048-bit integers {N} are within reach of current technology using noisy qubits—needing only some millions of them. However, this presumes an error-free implementation of the Quantum Fourier Transform (QFT). They say:

Note furthermore that when we analyze the success probabilities of Shor’s algorithms, and the various derivatives, we assume the use of an ideal QFT even though the implemented QFT is technically an approximation.

[Added 6/19: This quotation is taken somewhat out of context, because the paper’s main concern is optimizing and dealing with the much greater noise and precision issues in the superposed modular exponentiation step. See Craig Gidney’s comment below for more information on that and on how the QFT step is executed.]

Quantum Factoring Impossible?

Now enter Jin-Yi. He has a new paper that says:

We consider Shor’s quantum factoring algorithm in the setting of noisy quantum gates. Under a generic model of random noise for rotation gates, we prove that the algorithm does not factor integers of the form {pq} when the noise exceeds a vanishingly small level in terms of {n} (the number of bits of the integer to be factored), where {p} and {q} are chosen from a set of primes of positive density.

Jin-Yi is essentially saying that quantum algorithms fail to break RSA in the presence of noisy gates: they will not be able to work when the quantum gates are not perfect.

This seems to contradict the previous section. Can it be that quantum algorithms break RSA in theory, but are not practically realizable? See these three recent discussions.

To our knowledge, this is the first hard-and-fast negative result about Shor’s algorithm. Let’s take a closer look.

Angles on Shor’s Algorithm

Given {N = pq} to factor, Shor’s algorithm starts by choosing {a} relatively prime to {N}. The algorithm extends the domain of the function {f(x) = a^x \pmod{N}} to all {x < M} where {M = 2^m}, {m = 2n}, and {2^n} is the next power of {2} after {N}, so that {M \approx N^2}. The quantum engine of Shor’s algorithm has just two main components:

  1. A routine that computes the quantum state

    \displaystyle  \Phi = \frac{1}{\sqrt{M}}\sum_x |x\rangle|f(x)\rangle.

    Put another way without the Dirac angle-bracket notation, {\Phi} is a state of {m+n} qubits that has equal nonzero amplitude only on those components {xy} where {y = f(x)}.

  2. The QFT (or its inverse) on {m} qubits.
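
To keep the quantum and classical pieces straight, here is a minimal classical sketch of the scaffolding around these two components. The brute-force loop that finds the period {\rho} stands in for sampling the quantum circuit (preparing {\Phi} and applying the QFT); everything else is the classical pre- and post-processing of Shor's algorithm.

```python
from math import gcd
from random import randrange

def toy_shor_scaffold(N):
    """Classical scaffolding of Shor's algorithm on a toy composite N.
    The naive period search below stands in for the quantum sampling."""
    while True:
        a = randrange(2, N)
        g = gcd(a, N)
        if g > 1:
            return g                      # lucky draw: a shares a factor with N
        # find the period rho of f(x) = a^x mod N (the quantum step, done by brute force)
        x, rho = a % N, 1
        while x != 1:
            x = (x * a) % N
            rho += 1
        if rho % 2 == 0:
            y = pow(a, rho // 2, N)
            if y != N - 1:
                return gcd(y - 1, N)      # nontrivial factor: y^2 = 1 (mod N) with y != +-1

print(toy_shor_scaffold(3 * 5))           # prints 3 or 5
```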

Quantum gates of the form {R_k = \begin{bmatrix} 1 & 0 \\ 0 & \omega_k \end{bmatrix}} where {\omega_k = \exp(\frac{2\pi i}{2^k})}, when controlled from another qubit, are used in the “textbook” way to compute the QFT. The diagram with {m=4} suffices for the general pattern:

[Diagram of the textbook QFT circuit for {m=4}; source]

For all but a few small values of {k}, the rotation angle in {R_k} is tinier than theoretical minimum units of space, let alone the smallest precision of angular or spatial resolution we have achieved in experiments such as LIGO. Call a circuit family using {R_k} for unbounded {k} “idealistic.”
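
To get a feel for how small these rotation angles are, here is a short numpy sketch of {R_k} as defined above; its rotation angle {2\pi/2^k} halves with each increment of {k}.

```python
import numpy as np

def R(k):
    """Phase gate R_k = diag(1, exp(2*pi*i / 2**k)) from the post."""
    return np.diag([1.0, np.exp(2j * np.pi / 2 ** k)])

for k in (3, 10, 50, 100):
    print(k, 2 * np.pi / 2 ** k)   # the angle shrinks exponentially in k
```

For {k = 100} the angle is already below {10^{-29}} radians, which is the sense in which the unbounded-{k} circuits are "idealistic."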

Donald Coppersmith showed that Shor’s algorithm still works if {R_k} is replaced by the identity operator for {k > b}, where the threshold {b} equals {c\log_2 n} for a constant {c} slightly above {1}. The resulting circuits are still “idealistic” but at least not exponentially so. Coppersmith’s analysis is referenced in Shor’s original paper but not expounded further there.
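
Here is a matrix-level sketch of Coppersmith's truncation (a toy check only; it works directly with the phase sum and ignores the bit-reversal bookkeeping of the circuit). We keep only the phase contributions that controlled {R_k} gates with {k \le b} would supply and compare against the exact QFT.

```python
import numpy as np
from itertools import product

def aqft(m, b):
    """Approximate QFT on m qubits: keep only phase terms realized by
    controlled R_k gates with k <= b; b = m gives the exact QFT."""
    M = 2 ** m
    F = np.zeros((M, M), dtype=complex)
    for x in range(M):
        for y in range(M):
            phase = 0.0
            for j, l in product(range(m), repeat=2):
                k = m - j - l                      # this bit pair needs an R_k gate
                if 1 <= k <= b and (x >> j) & 1 and (y >> l) & 1:
                    phase += 2.0 ** (-k)
            F[y, x] = np.exp(2j * np.pi * phase) / np.sqrt(M)
    return F

m = 6
exact = aqft(m, m)
for b in (2, 3, 4, 6):
    print(b, np.linalg.norm(aqft(m, b) - exact, 2))   # operator-norm error vs. cutoff b
```

The error decays quickly as {b} grows, which is why a logarithmic cutoff suffices in Coppersmith's analysis.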

Jin-Yi shows that Shor’s and Coppersmith’s circuits cannot tolerate a natural kind of noise that operates close to Coppersmith’s level of scaling. It stands concretely against any asymptotic claims of power via Shor’s algorithm that involve idealistic circuits. At the end we will discuss its implications also for circuits that implement Shor’s algorithm without using {R_k} gates.

The Noise

Call {C} a Shor circuit if it uses controlled {R_k} gates to compute the QFT (or its inverse) and can be sampled by a classical procedure {{\cal A}} to infer the period {\rho} of {f(x) = a^x \pmod{N}} in {n^{O(1)}} expected time.

Jin-Yi’s noise operation has parameters {\epsilon} and {b'} and maps a Shor circuit {C} to a distribution of circuits {\tilde{C}} defined as follows: For each controlled {R_k} gate in {C} with {k \geq b'} (alternatively, {k = b'}), replace it by

\displaystyle  \tilde{R_k} = \begin{bmatrix} 1 & 0 \\ 0 & \tilde{\omega} \end{bmatrix}\quad\text{where}\quad \tilde{\omega} = \exp\left(\frac{2\pi i(1 + r\epsilon)}{2^k}\right)

with the same control qubit and with an independent draw of Gaussian noise {r \sim {\cal N}(0,1)}. The echo of Coppersmith’s “{b}” is on purpose, because Jin-Yi establishes the following fact, which we first state loosely:

Provided {b' + \log_2(1/\epsilon) \sim \frac{1}{3}(\log_2 n)}, the circuits {\tilde{C}} lose the Shor property, meaning that {{\cal A}} sampling {\tilde{C}} cannot find {\rho}.

This says that the noise range brushes against the Coppersmith upper bound for the precision needed to implement Shor’s algorithm. Since {k} is exponentiated, one can say that noise on the order of the cube of the precision needed for Shor’s algorithm is enough to destroy it.

The estimates in the paper allow replacing {\frac{1}{3}} by {\frac{1}{2+\delta}} with greater attention to additive constants, so lower noise approaching the square root of the Coppersmith precision suffices to destroy the Shor property. This may be improvable to almost linear. Exactly what does the noise attack? That’s next.
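
Here is a minimal sketch of the noise operation on a single gate, transcribing the displayed formula: each occurrence of a controlled {R_k} with {k \geq b'} gets its own fresh Gaussian draw {r}.

```python
import numpy as np

rng = np.random.default_rng()

def R(k):
    """Ideal phase gate R_k."""
    return np.diag([1.0, np.exp(2j * np.pi / 2 ** k)])

def noisy_R(k, eps):
    """One draw of the noisy gate: the angle 2*pi/2^k is perturbed
    multiplicatively by (1 + r*eps) with r ~ N(0, 1), fresh per gate."""
    r = rng.standard_normal()
    return np.diag([1.0, np.exp(2j * np.pi * (1 + r * eps) / 2 ** k)])

# typical perturbation of the lower-right entry is about eps * 2*pi / 2^k
print(np.abs(noisy_R(10, 0.1) - R(10))[1, 1])
```

Note that the noise is relative to the intended angle, so a fixed {\epsilon} perturbs the tiny high-{k} rotations by amounts that are themselves tiny in absolute terms; the theorems below say this is nevertheless enough.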

Long Periods

The noise most strongly affects cases {N = pq} where {p-1} and {q-1} have a large prime factor. The most extreme such case is {\frac{p - 1}{2} = p'} being prime. Then {p'} is called a Sophie Germain prime. Ironically, {p = 2p'+1} is called a “safe prime” but those are the most unsafe under Jin-Yi’s noise.

It remains unknown whether infinitely many Sophie Germain primes exist, despite the quest winning a Tony Award and Pulitzer Prize. But a less-heralded property suffices. Étienne Fouvry proved in 1985 that the set of primes {p} for which {p - 1} has a factor {p' > p^{2/3}} is not only infinite, but has positive density in the set of primes. It follows that cases {N = pq} where both {p} and {q} have this “Fouvry property” have positive density among products of two primes. There can be only one prime factor {p' > p^{2/3}}, likewise {q' > q^{2/3}}.
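
For concreteness, here is a small helper that tests the Fouvry property numerically (a toy check assuming sympy for the factoring, so sensible only for small primes).

```python
from sympy import isprime, primefactors

def has_fouvry_property(p):
    """True iff p is prime and p - 1 has a prime factor exceeding p**(2/3)."""
    return isprime(p) and max(primefactors(p - 1)) > p ** (2 / 3)

# safe primes p = 2p' + 1 (p' a Sophie Germain prime) are the extreme case
print([p for p in range(3, 200, 2) if has_fouvry_property(p)])
```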

The upshot for such {p} and {q} is that most {a} have exponentially long periods {\rho} modulo {N}. The geometric sums that concentrate amplitudes on multiples of {\rho} in the ideal situation, when the circuit is sampled via quantum measurement, have norm-squared proportional to {\rho}. In the noisy situation, such length maximizes the perturbative effect of the noise so as to level out the amplitude. This destroys the ability to infer {\rho}.
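
Here is a toy numerical illustration of this mechanism (not the paper's model): an in-phase geometric sum of {K} unit terms has norm-squared {K^2}, but once independent phase jitter of order one is injected per term, the peak collapses toward the flat level {1/K}, and when {K} plays the role of an exponentially long period that flat level is exponentially small.

```python
import numpy as np

rng = np.random.default_rng(0)

def peak_mass(K, sigma, trials=100):
    """|sum_{j<K} exp(i*delta_j)|^2 / K^2 with i.i.d. delta_j ~ N(0, sigma^2).
    Equals 1 with no jitter; falls toward 1/K once sigma is of order 1."""
    d = rng.normal(0.0, sigma, size=(trials, K))
    return np.mean(np.abs(np.exp(1j * d).sum(axis=1)) ** 2) / K ** 2

for K in (2 ** 8, 2 ** 16):      # K stands in for a long period
    print(K, [round(peak_mass(K, s), 5) for s in (0.0, 0.5, 1.0, 2.0)])
```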

We cut a few corners in the statements of Jin-Yi’s theorems, but they are reasonably close and the paper has full details. They hold also under the {k = b'} variant and with-or-without removing controlled {R_k} gates for {k > b'}.

Theorem 1 Asymptotically as {n \rightarrow \infty}, if {N = pq} is an {n}-bit product of two Fouvry primes, and {b' + \log_2(1/\epsilon) < \frac{1}{3}\log_2 n - \Theta(1)}, then the probability that {{\cal A}(\tilde{C})} infers {\rho} is exponentially small.

Theorem 2 Asymptotically as {n \rightarrow \infty}, for all but a vanishing fraction of {n}-bit primes and with {b' + \log_2(1/\epsilon) < \frac{1}{3}\log_2 n - \Theta(1)}, the probability over {a} and noisy {\tilde{C}} that {{\cal A}(\tilde{C})} infers {\rho} is exponentially small.

Theorem 2, whose proof is in the paper’s appendix, says that Shor’s algorithm fails to survive the noise in all but a vanishing fraction of instances. It applies also under certain restrictions of the primes, such as {p} and {q} both being congruent to 3 modulo 4. Theorem 1 gives a substantial explicit set of cases {N = pq} on which the algorithm fails.

How General Is This?

The theorems are carefully stated in terms of the period-inferencing component of Shor’s algorithm. And they are asymptotic. They do not rule out:

  • possible quantum improvements on input sizes in the finite range of conceivable practical crypto;

  • quantum circuits {D} that might factor by other means; nor

  • that error correction might restore the Shor property.

In particular, they do not define a general-purpose noise model that could apply to any quantum circuit {D}.

Now we discuss two means to implement Shor’s algorithm without using gates beyond {R_3}:

  1. The Hadamard gate {H}, the controlled-not gate CNOT, and the gate {T = R_3} form a complete set that (by the Solovay-Kitaev theorem) can feasibly approximate the state produced by any feasible quantum circuit plus QFT. Then the minimum angle of any individual operation is {\pi/8}.

  2. The Hadamard and Toffoli gates form a universal set in the weaker sense of encoding real and imaginary parts of quantum amplitudes separately. This suffices to compute the factoring function via polynomial-size circuits using only real entries.

Idea 1 may only mask the issue, insofar as the resulting circuits must still approximate angles down to Coppersmith’s unboundedly small magnitude {2^{-b}}. Both {H} and {T} are rotations of the Bloch sphere of periods 2 and 8, respectively. As such, each may be exactly physically realizable, along with their controlled versions and CNOT in higher-dimensional Bloch spheres.

However, {H} and {T} together generate an infinite subgroup of SU(2). The group has members that rotate through arbitrarily small angles. Jin-Yi says in his speculative concluding section:

It is true that using a fixed finite set of rotations of reasonable angles such as {\pi/8} along various axes can compose to rotations of arbitrarily small angles. But my view is just that these compositional rules as specified by the group SU(2) must not be exact for physical reality.

In particular, let {A = HT}. If {A} can be exactly realized, then any power {AA}, {AAA}, … should be. But the angle of {A} is not a rational multiple of {\pi}, so the powers alone form an infinite state space and include arbitrarily tiny rotations. Please see Jin-Yi’s paper for other context and justifications on these points, plus related contentions by Mikhail Dyakonov.
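
As a numerical sanity check on this last point (using the convention {T = \mathrm{diag}(1, e^{i\pi/4})} from the {R_3} definition, and rescaling so that {A = HT} sits in SU(2)), the rotation angle of {A} divided by {\pi} has no good small-denominator rational approximation. Of course a finite computation can only illustrate the exact algebraic fact stated above.

```python
import numpy as np
from fractions import Fraction

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])        # T = R_3 in the post's notation

A = H @ T
A_su2 = A / np.sqrt(np.linalg.det(A))           # rescale so det = 1 (drop global phase)
theta = 2 * np.arccos(np.clip(np.real(np.trace(A_su2)) / 2, -1, 1))

x = float(theta / np.pi)
print("rotation angle / pi =", x)
for d in (8, 64, 512):                          # best rational approximations stay poor
    f = Fraction(x).limit_denominator(d)
    print(d, f, abs(x - f))
```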

The circuits in idea 2 cannot approximate any (feasible) quantum state metrically, but they can emulate Shor’s algorithm using only {0} and {\pi} as “angles.” They may, however, still involve quantum states with filigrees beyond physically realizable precision. In the coda to our own textbook, we speculate this already for the deterministic “functional superposition” component of Shor’s algorithm.

All this and more was discussed already twenty-plus years ago in the “Sure/Shor separator” debate. The difference now is having Jin-Yi’s new work as a linchpin for the skeptical side. Non-robustness to noise in the “Coppersmith range” may be a wider phenomenon than his current results show.

In his last paragraph, Jin-Yi argues that quantum computing makes a fundamental departure from Alan Turing’s condition that primitive steps are finite and fixed independent of the data size {n}. He mentions the free use of SU(2) but his point may apply as well to the step of placing a Toffoli gate anywhere in an {n}-qubit quantum circuit. This point is separate from issues of noise models, about which we have heard much from Gil Kalai including recently.

Open Problems

The issue is simple: Can quantum algorithms be made to work in the presence of gates that are making errors at Jin-Yi’s scaling? The obvious interesting open question is: As in classical computation, can we build circuits that can handle errors? See this and this on error-free computation.

This seems to be a wonderful question. Will the new results reshape debates on quantum computing and the polynomial Church-Turing thesis, or are they subsumed in matters already recently much discussed?


[added update about Gidney-Ekerå paper in third section]

15 Comments
  1. June 14, 2023 12:45 am

    Dick and I thank Jin-Yi for helpful comments.

    Here is a simple attempt to extend the scope of Jin-Yi’s noise operator. First apply the principle that the deterministic part {{\cal A}} of Shor’s algorithm can be “brought inside” to make a quantum circuit {C'} that computes the factoring function with high probability. Let {D} be any other circuit for that function. Then {D^* C'} is the identity except for “ancilla” qubit lines apart from those on which the output is defined. And {D^* C' D} is equivalent to {D}. Now, however, we can apply the noise operator to induce the distribution

    \displaystyle  \tilde{D} = D^* \tilde{C'} D.

    This is well-defined because {C'} includes instances of {C} with the QFT. If {\tilde{D}} could factor with non-negligible probability {\gamma}, this would contradict Jin-Yi’s theorems: First, {\tilde{D}D^*} would be the identity with probability {\gamma}. Then {D\tilde{D}D^*} would factor with probability {\gamma}. So {\tilde{C'}} would factor with probability {\gamma}. But {\tilde{C'}} is just the quantum realization of the period-inferencing part of Shor applied to sampling the original noisy circuits {\tilde{C}}. It follows that the period must be inferable from samples of {\tilde{C}}—but this contradicts the analysis in Jin-Yi’s proofs.

  2. June 14, 2023 1:49 am

    Interesting post! I think that concerns that noise would make Shor’s algorithm fail were raised in earlier critical papers from the 90s, which motivated the discovery of quantum error-correcting codes and later the threshold theorem of quantum fault tolerance.

    The paper refers to quantum fault tolerance and writes: “These are beautiful mathematical theorems. But they fundamentally assume that the group SU(2) exactly corresponds to operations on a qubit in reality, especially in its composition—that group composition (in its infinite precision defined over C) exactly corresponds to sequential application of realizable quantum operations”.

    This statement is a little vague. Certainly the “threshold theorem” and quantum fault tolerance do not assume that the gates that represent quantum operators on single qubits or pairs of qubits are perfect. Noise on gates (if it is sufficiently small) still allows quantum fault tolerance to work. In the threshold theorem, “noise” can refer to any sort of imprecision in the description of the quantum computer. In this respect it is not clear what Jin-Yi means by “infinite precision” requirements.

    If the reference is to the assumption that the (unknown) noise itself can (in principle) be precisely described in terms of quantum operations, this is indeed an assumption (and a very reasonable one). But the threshold theorem does not require precise knowledge or precise modeling of the noise; it relies on several assumptions regarding the noise, the central one being that the rate of noise is sufficiently small.

    • Jin-Yi Cai permalink
      June 14, 2023 11:01 am

      Thanks Gil! Yes, quantum fault tolerance allows noise on gates. By that “infinite precision” I was referring to the exactitude of correspondence between sequences of operations (corrected(?) fragments of quantum gates) and group compositions in SU(2). Sec. 4 (“Comments”) of the paper discusses this some more.

      • Xiuli permalink
        June 16, 2023 1:11 am

        So, you are not sure whether the infinite precision is the exactitude of correspondence between sequences of operations and group compositions in SU(2)?

  3. June 14, 2023 4:24 am

    Whether quantum or non-quantum, you already have an assortment of prime generators and prime factorizers. Please search Twitter.com for my collection of them by/under @koitiluv1842 + Pinned Tweet thread and + prime and + factorization.

    • javaid aslam permalink
      June 15, 2023 6:20 am

      How much motivation must one have to search for your paper on Twitter.com?

  4. Jin-Yi Cai permalink
    June 14, 2023 10:50 am

    Thanks Dick and Ken for your kind words.

  5. Scott Aaronson permalink
    June 15, 2023 10:00 am

    (reposting from elsewhere because several people asked me about this)

    Jin-Yi Cai is an excellent theorist who I respect greatly. This new paper on Shor’s algorithm, however, is BEGGING to be misinterpreted by those eager to misinterpret, and I unfortunately see that that’s already happening. Granted, I don’t know that anyone before thought to PROVE that Shor’s algorithm indeed fails if you insert a few random errors into it and don’t even try to error-correct them. So I’m glad that Jin-Yi has apparently now done that! But given the unforgiving nature of (say) the modular exponentiation function, this fact is hardly a surprise—our physics and engineering friends might even use terms like “eye-wateringly obvious.” It’s rigorously proving that a particular Swiss watch will probably cease to work if you smash it with a hammer. This is precisely why quantum error-correction has been understood to be crucial for decades!

    What rankles more is that the paper combines its (worthwhile even if 100% expected) technical result, with personal speculations that quantum computing might be impossible because quantum mechanics isn’t exactly true, as if those two things had anything to do with each other. To extend the analogy, this is meticulously proving that a Swiss watch stops working if you smash it with a hammer, AND THEN somehow using that to buttress your argument that maybe quantum gravity and cosmology will limit the accuracy even of Swiss watches that aren’t smashed.

    • Jin-Yi Cai permalink
      June 15, 2023 7:02 pm

      Believing something true is a very different thing than proving it mathematically. I ask people to read the paper and form their own judgement.

  6. June 15, 2023 4:17 pm

    It’s ironic to see the “we ignored noise in the QFT” line from my paper quoted as if this was something we were worried about and ignoring, instead of something so irrelevant to the estimate that we weren’t bothering.

    When factoring a number using Shor’s algorithm, the QFT happens at the end of the circuit and so is performed using a technique called qubit recycling. This reduces the QFT to, essentially, one precise rotation per qubit in the superposed exponent. For a 2048 bit factorization this will be around 4000 precise rotations. Because there are 4000 of these rotations, because quantum mechanics is linear (meaning errors accumulate linearly), and because we can recognize when an output is bad, it’s sufficient to do these rotations to a tolerance of around a 4000th of a degree.

    Using Selinger et al’s synthesis tool ( https://www.mathstat.dal.ca/~selinger/newsynth/ ), a rotation to that tolerance requires around 20 T gates and H gates. The T gates and H gates must in turn be done to a stricter level of tolerance, since we’re doing many of them to implement the desired rotation. The H gate can be performed to well beyond this tolerance using lattice surgery ( https://arxiv.org/abs/1808.02892 ), using a surface code distance of let’s just say 25. The T gate can be performed to well beyond this tolerance by using only one level of 15-to-1 T state distillation ( https://arxiv.org/abs/1808.06709 ) starting from an injection with a 1% error rate (e.g. using hook injection https://arxiv.org/abs/2302.12292 ) to produce a sufficiently good T magic state and then using lattice surgery to consume the state to perform gate teleportation of a T gate.

    For contrast, a 2048 bit factoring requires THREE BILLION Toffoli gates. A million times more Toffoli gates than the 4000 precise rotations! As a result, the tolerances on the Toffolis are far stricter; the Toffoli-related rotations need to be accurate to within a ten-billionth of a degree. This is why we ignore the noise in the QFT. Because achieving the necessary tolerances on the gates implementing the QFT is incredibly trivial compared to achieving the necessary tolerances on the Toffoli gates implementing the modular exponentiation. The Toffoli gates require two levels of distillation instead of one (e.g. as in https://arxiv.org/abs/1812.01238 ), and all of the qubits sitting around while those Toffoli gates are being performed need to be stored at high distance just to be sure they survive the wait. The QFT’s cost is totally negligible in comparison.

    That said, when Jin-Yi shows that you need to perform operations to a precision of epsilon ~= 8^-n, where n is the number of bits in the number to factor, he’s exactly right. Achieving that level of precision is why the estimate to factor a 2048 bit number was 20 million physical qubits instead of 4000 physical qubits. Because we are doing all operations on distance 25+ surface codes, which require 1000-2000 physical qubits per logical qubit, in order to reach the necessary tolerance of 10^-10.

    In short… Scott Aaronson’s assessment above is dead on.

    • June 19, 2023 9:09 pm

      Craig, first: thanks very much for this comment, which is highly informative at our level. I have been able to take time over the weekend to read into some of your papers and further into some of your references. These have raised a few further questions—ones I haven’t seen answered in the debate of 20-25 years ago, which arise when one focuses on small angles as opposed to small amplitudes. I intend to say more after resolving a few other questions, but there’s one I can ask here right now: What is the smallest rotation that has been verified as having been executed physically?

      • June 21, 2023 12:35 am

        The best single qubit gate error rates are around 0.01% per gate. Noting that sin(1°)^2 ~= 0.03%, you can see that this implies the best rotations are likely accurate to somewhere between 0.1° and 1°. There are other relevant error mechanisms besides over/under rotation, like decay to the ground state and leakage to the 2 state and cosmic rays and crosstalk and z-tails and so on, so it’s hard to say exactly what the accuracy of the rotations is.

        (In my last comment I might have misestimated the angle tolerances needed for operations due to forgetting about the need to square when going between small-angle-tolerance and small-chance-of-big-flip tolerance. The angle tolerances I gave might be too tight.)

        You mentioned amplitudes vs rotations. The quantum error correction field pays very little attention to amplitudes. Consider that, during Shor’s algorithm, there could be 2^2048 amplitudes in play. If you pick a random subset of 2^2000 of them, and apply any unitary you want to that subset… you basically can’t do anything relevant that will prevent the algorithm from succeeding. Whereas flipping over just one single qubit can trash the entire thing. Actually, you can prove that stopping the flips must correct all noise (like accidental measurements and small overrotations). So as a field we mostly focus on stopping flips, because this is sufficient and simple.

  7. Xiuli permalink
    June 16, 2023 1:03 am

    Excellent, please have a look at the url: https://cstheory.stackexchange.com/questions/51851/is-it-proved-that-error-rate-of-quantum-computation-is-bounded-by-constant-rathe

    Someone has been skeptical for several years about the possibility of implementation and has had a discussion with Prof. Peter Shor.

  8. June 18, 2023 2:40 am

    Here are some further comments:

    1) The computational power of NISQ computers.

    In the debate between Aram Harrow and me ten years ago there was a central question that I raised:

    “What can be achieved by (or more formally, what is the computational power of) quantum devices that do not apply quantum fault tolerance?”

    Since then I, and others, have studied various aspects of this question. This question is especially relevant to understand the power of NISQ devices. Here are three specific questions

    a) Does the security of quantum key distribution (QKD) rely on quantum fault tolerance?

    (I did not study this question myself, and I am curious what the answer is.)

    b) Can “quantum supremacy” be achieved without quantum fault-tolerance? Specifically, can it be achieved for NISQ systems?

    For NISQ computers, a 2014 paper by Guy Kindler and me offers a negative answer. (Our results refer to boson sampling, but our interpretation was that they applied to NISQ computers in general.)

    c) Can good-quality quantum error correcting codes be constructed by NISQ computers (and more generally by quantum devices without quantum fault-tolerance)?

    Of course, if good quality quantum error correcting codes cannot be constructed at all by NISQ computers (as I argue) then this will largely refute the possibility of building quantum computers.  

    2) The wall (if it exists) is near.

    A very reasonable challenge for quantum computer skeptics is that “they’re the ones who need to articulate where and why the progress will stop.” My theory predicts that the wall is quite near: convincing demonstrations of “quantum supremacy” are out of reach, and there would be inherent difficulties in demonstrating distance-3, distance-5 and distance-7 surface codes on NISQ systems. The claims of the Google 2019 “quantum supremacy” paper that drew a lot of attention were sufficient to refute my theory. However, these claims have largely been refuted since.

    3) Some physicists like Michel Dyakonov regard the impossibility of quantum computers as an “eye-wateringly obvious” insight of quantum physics. Robert Alicki, who conducted for many years research in the skeptical direction, offered several physical explanations (e.g., from thermodynamics) for why quantum fault tolerance is doomed to fail. Among skeptics there are some (and perhaps Cai is among them) who are excited by the possibility that quantum mechanics itself is not precisely correct.

    4) There is a nice 2017 blog post by Boaz Barak about various forms of quantum computers skepticism.

    https://windowsontheory.org/2017/10/30/the-different-forms-of-quantum-computing-skepticism/

    The thread also contains a distinction between quantum computing believers who regard quantum fault-tolerance as the basic reason for the ability to overcome noise and those believers who regard quantum fault tolerance as a “red herring” and think that, in principle, the noise could be reduced to the extent that we could run Shor’s algorithm on quantum computers with no need of quantum fault-tolerance.

