
The Amazing Zeta Code

December 4, 2012


But can it encode itself?

[Photo: Sergei M. Voronin]

Sergei Voronin was an expert in analytic number theory, especially the Riemann zeta function. He proved a sense in which the Riemann Hypothesis “ain’t necessarily so.” That is, he showed that certain complex analytic functions {\zeta'}, ones known to obey functional equations similar to the one for {\zeta}, have greater deviations in the frequency of zeroes on the critical line than {\zeta} can have if the hypothesis is true.

Today Ken and I wish to discuss an older result of his—proved in 1975—about the Riemann zeta function that seems to be amazing.

Voronin proved that the zeta function is universal in the precise sense that it encodes all possible other functions, to any desired degree of precision. This seems like a complexity-theory result more than a number-theory result to me, and also seems like it should have complexity consequences. We share the opinion of Matthew Watkins, who maintains a page on number theory and physics:

“This extraordinary 1975 result receives surprisingly little coverage…”

He goes on to say, “…it was difficult to find a clear and accurate statement anywhere on the WWW in March 2004.” Let’s turn now to discuss it in detail.

The Zeta Function

The zeta function is of course defined for complex numbers {s} with real part greater than {1} by the famous equation

\displaystyle  \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}.

It then can be extended to all complex numbers, except that at {s=1} the function has a simple pole. This means just that the extension near {1} behaves like {\frac{1}{s-1}}.
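As a quick numerical sanity check, here is a minimal Python sketch, assuming the mpmath library is installed; it compares a truncation of the series with the analytically continued function where the series converges.

```python
# Minimal sketch, assuming the mpmath library: compare a truncation of the
# series sum 1/n^s with mpmath's analytically continued zeta function.
from mpmath import mp, zeta, mpc, mpf, power

mp.dps = 25   # decimal digits of working precision

def zeta_partial_sum(s, N=100000):
    """Truncated Dirichlet series; only meaningful for Re(s) > 1."""
    return sum(mpf(1) / power(n, s) for n in range(1, N + 1))

s = mpc(2, 0)
print(zeta_partial_sum(s))   # approaches pi^2/6 = 1.6449340668...
print(zeta(s))               # the continuation agrees wherever the series converges
```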

It has been known for several centuries that the zeta function holds the key to our understanding of the distribution of the primes. The fact that it has no zeroes at {1 + it} where {t} is real is equivalent to the Prime Number Theorem (PNT). This is the statement that the number of primes less than {x}, denoted by {\pi(x)}, is approximately

\displaystyle  \mathsf{Li}(x) = \int_{2}^{x} \frac{1}{\ln t}dt \approx \frac{x}{\ln x}.
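For a concrete look at the approximation, here is a short sketch assuming the sympy and mpmath libraries; sympy's primepi counts primes exactly and mpmath's li with offset=True gives {\mathsf{Li}(x)}.

```python
# Sketch assuming sympy and mpmath: compare the exact prime count pi(x)
# with Li(x) = integral from 2 to x of dt/ln t.
from sympy import primepi
from mpmath import li

for x in (10**4, 10**5, 10**6, 10**7):
    exact = int(primepi(x))
    approx = li(x, offset=True)   # offset=True gives li(x) - li(2) = Li(x)
    print(x, exact, float(approx))
```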

The zeta function can be used, and was used, by Leonhard Euler to give a proof that the number of primes must be infinite. This is weaker than the PNT, but the argument is very simple. It was later modified and used by Gustav Dirichlet to prove that there are an infinite number of primes in any arithmetic progression

\displaystyle  a+b, a+2b, \dots, a+mb, \dots

provided {a} and {b} have no common factor.

The Amazing Property

Let {U} be any compact set of complex numbers whose real parts {x} satisfy {\frac{1}{2} < x < 1}. Because {U} is closed and bounded, there exists a fixed {r < \frac{1}{4}} such that {\frac{3}{4} - r \leq x \leq \frac{3}{4} + r} for all {z \in U} with real part {x}. The {\frac{3}{4} - r} part can be as close as desired to the critical line {x = \frac{1}{2}}, but it must stay some fixed distance away.

We also need {U} to have no “holes,” i.e., to be homeomorphic to a closed disk of radius {r}. Often {U} is simply taken to be the disk of radius {r} centered on the {x}-axis at {\frac{3}{4}}, but we’ll also think of a square grid. Then Voronin proved:

Theorem 1 Given any analytic function {f(z)} that is non-vanishing on {U}, and any {\epsilon>0}, we can find some real value {t} such that for all {z \in U},

\displaystyle  |\zeta(z+it)-f(z)| < \epsilon.

That is, by translating {U} upward by a distance of {t}, we find a region where {\zeta} produces the same values as {f} does on the original {U}, within a tolerance of {\epsilon}. Moreover, as {\epsilon \rightarrow 0} we can find more and more {t}'s for which the simulation is better and better. Watkins quotes this from Martin Gutzwiller's book Chaos in Classical and Quantum Mechanics (Springer-Verlag, 1990):

“Although the Riemann zeta-function is an analytic function with [a] deceptively simple definition, it keeps bouncing around almost randomly without settling down to some regular asymptotic pattern. The Riemann zeta-function displays the essence of chaos in quantum mechanics, analytically smooth, and yet seemingly unpredictable…

In a more intuitive language, the Riemann zeta-function is capable of fitting any arbitrary smooth function over a finite disk with arbitrary accuracy, and it does so with comparative ease, since it repeats the performance like a good actor infinitely many times on a designated set of stages.” [emphasis in original]

That is to say, even if we have a function {g} that is analytic and nonzero on some compact region {U'} well away from the critical strip, or even overlapping the critical line, we can map {U'} conformally to some {U} within the strip, and then {\zeta} can approximate the resulting mapped function {g'}.
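None of this gives an efficient way to find the shift {t}; the shifts the theorem guarantees for small {\epsilon} are astronomically large in general. Still, the statement is easy to play with numerically. Here is a brute-force sketch in Python, assuming the mpmath library; the target function, the disk, and the search window are arbitrary choices for illustration.

```python
# Brute-force sketch, assuming mpmath: scan shifts t and record how well
# zeta(z + it) tracks a target f(z) on sample points of a small disk around
# 3/4.  The shifts guaranteed by Voronin's theorem for small epsilon are far
# beyond any such scan; this only makes the quantifiers concrete.
from mpmath import mp, zeta, mpc, exp

mp.dps = 15

def disk_samples(center=0.75, r=0.05, k=8):
    """The center plus k points on a circle of radius r around `center`."""
    pts = [mpc(center, 0)]
    for j in range(k):
        theta = 2 * mp.pi * j / k
        pts.append(mpc(center + r * mp.cos(theta), r * mp.sin(theta)))
    return pts

def max_error(f, t, pts):
    """Sup-norm deviation of zeta(z + it) from f(z) over the sample points."""
    return max(abs(zeta(z + mpc(0, t)) - f(z)) for z in pts)

f = lambda z: exp(z)      # an arbitrary analytic, non-vanishing target
pts = disk_samples()

best_t, best_err = None, None
t = 10.0
while t < 200.0:          # a tiny, purely illustrative search window
    err = max_error(f, t, pts)
    if best_err is None or err < best_err:
        best_t, best_err = t, err
    t += 0.5
print(best_t, best_err)
```

For tolerances anywhere near the {\epsilon} of the theorem such a scan is hopeless, but it makes the roles of {U}, {f}, {t}, and {\epsilon} concrete.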

A “Concrete” View of the Proof

The reason we think this is relevant to complexity theory comes from a discrete explanation of how the theorem works. Given a {\delta > 0}, take {D = 2\lfloor\frac{r}{\delta}\rfloor}. Consider {U} to be the {D \times D} square grid centered on {\frac{3}{4}} on the {x}-axis. We will use the {D^2}-many centers {u} of each grid square. Note that in the leftmost grid boxes, the real part {x} of {u} is displaced from {\frac{1}{2}} by about {\frac{\delta}{2}} in addition to the fixed distance {\frac{1}{4} - r} which is independent of {\delta}. If we really fix {\delta}, then we can allow the grid to extend all the way to the critical line {x = \frac{1}{2}}.

We also make a discrete set {V} of allowed complex values {v = x + iy} such that {x} and {y} are integral multiples of the target approximation goal {\epsilon}. More precisely we set {E = \lceil\frac{2}{\epsilon}\rceil}. Then {x} and {y} may have values {j\frac{1}{E} + \frac{1}{2E}} for integers {j} such that {-E^2 \leq j \leq E^2 - 1}. Note that these values stay away from zero, this time by an amount that depends on {\epsilon}. To every grid center {u}, assign a value {v_u} from {V}.

[Figure: colored grid of values {v_u} over {U}]
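Here is a small Python sketch of this bookkeeping, using the parameters {r}, {\delta}, {\epsilon} as above; the particular numerical choices at the end are only for illustration.

```python
# Sketch of the discrete bookkeeping, with the parameters r, delta, epsilon
# of the text.  For small epsilon the lattice V is enormous, so in practice
# one would only compute nearest lattice values on demand.
import math

def grid_centers(r, delta, center=0.75):
    """Centers u of the D x D grid of delta-squares, D = 2*floor(r/delta),
    centered at `center` on the real axis."""
    D = 2 * math.floor(r / delta)
    half = D * delta / 2
    return [complex(center - half + (i + 0.5) * delta,
                    -half + (j + 0.5) * delta)
            for i in range(D) for j in range(D)]

def value_lattice(epsilon):
    """Allowed values v = x + iy with x, y = j/E + 1/(2E) and E = ceil(2/epsilon)."""
    E = math.ceil(2 / epsilon)
    coords = [j / E + 1 / (2 * E) for j in range(-E * E, E * E)]
    return [complex(x, y) for x in coords for y in coords]

us = grid_centers(r=0.2, delta=0.05)   # 8 x 8 = 64 grid centers
V = value_lattice(epsilon=0.5)         # (2*E^2)^2 = 1024 allowed values
print(len(us), len(V))
```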

Now we could use polynomial interpolation over these {D^2}-many values to define an analytic function {f} that has those values at those points of {U}, and apply Voronin’s theorem to {f}. The way it works, however, is really the reverse: Given any {f} that is analytic and non-zero on {U}, we observe two consequences of {U} being compact:

\displaystyle  \begin{array}{rcl}  &&(\exists \epsilon_0 > 0)(\forall z \in U): \epsilon_0 < |f(z)| < \frac{1}{\epsilon_0},\\ &&(\forall \epsilon > 0)(\exists \delta > 0)(\forall z,z' \in U): |z - z'| < \delta \implies |f(z) - f(z')| < \epsilon. \end{array}

Thus for sufficiently small {\epsilon} we can choose {\delta} to make the grid sufficiently fine to give an {\epsilon}-approximation of {f} everywhere on it. Namely, for each box center {u} take a value in {V} that is nearest to {f(u)}. There is always a suitable {v_u} whose real and imaginary parts have magnitude no more than {E}.
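Continuing the sketch, here is the rounding step: each {f(u)} is snapped, coordinate by coordinate, to the nearest allowed value. The hand-made list of centers and the example function are arbitrary stand-ins; in practice the centers would come from the grid sketch above.

```python
# Sketch of the rounding step: snap each f(u), coordinate by coordinate,
# to the nearest allowed value j/E + 1/(2E).
import cmath
import math

def nearest_lattice_value(w, E):
    """Nearest point of V to the complex value w (coordinatewise rounding)."""
    def snap(x):
        j = round(x * E - 0.5)               # nearest j for value j/E + 1/(2E)
        j = max(-E * E, min(E * E - 1, j))   # clip to the allowed range of j
        return j / E + 1 / (2 * E)
    return complex(snap(w.real), snap(w.imag))

def assign_values(f, centers, epsilon):
    E = math.ceil(2 / epsilon)
    return {u: nearest_lattice_value(f(u), E) for u in centers}

centers = [complex(0.75 + dx, dy) for dx in (-0.1, 0.0, 0.1)
                                  for dy in (-0.1, 0.0, 0.1)]
vals = assign_values(cmath.exp, centers, epsilon=0.25)
for u, v in vals.items():
    assert abs(v - cmath.exp(u)) < 0.25      # within the target tolerance
```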

The key idea is that Euler’s product formula for {\zeta(s)} enables one to do “Zeta Interpolation” on the resulting finite grid of values {v_u}. Well, this could be the idea. The proof sketch given by Wikipedia seems to do the approximation more directly on {f}, using selections of primes and complex phase angles. It might be interesting to work it out as a discrete interpolation.
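For reference, the Euler product in question is {\zeta(s) = \prod_p (1 - p^{-s})^{-1}} for real part greater than {1}. A short sketch, assuming sympy for the primes and mpmath for {\zeta}, shows the partial products converging:

```python
# Sketch assuming sympy and mpmath: the partial Euler products
# prod_{p <= P} (1 - p^(-s))^(-1) approach zeta(s) for Re(s) > 1.
from sympy import primerange
from mpmath import mp, zeta, mpc, mpf, power

mp.dps = 20
s = mpc(2, 1)

for P in (10, 100, 1000, 10000):
    prod = mpf(1)
    for p in primerange(2, P + 1):
        prod *= 1 / (1 - power(p, -s))
    print(P, prod)
print("zeta(s) =", zeta(s))
```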

A little intuition is conveyed by imagining the above colored grid against some extension of Wikipedia’s colored figure on zeta-function universality. Overall this shows how discrete structures emerge out of conditions in continuous mathematics such as compactness—a hallmark of what Ron Graham, Don Knuth, and Oren Patashnik promote as “Concrete Mathematics.”

The Zeta Code

Take any string {w} over some finite alphabet {\Sigma}. We can choose {\epsilon,\delta} in some minimal manner so that the corresponding {D,E,V} satisfy {D \geq |w|} and {|V| \geq |\Sigma|} with values spaced far enough to allow unique decoding from any {\epsilon}-approximation of those values. If {w} is a binary string, we can take {\Sigma = \{0,1\}^b} for some block length {b} that optimizes the relation between {D} and {E}. Then we can plug in the values encoding {w} as the values {v_u} for our grid—actually, for just one row of grid points {u}. It follows that we can find {t} such that the values {\zeta(u + it)} encode the string {w}. That is, rounding those values to the closest elements of {V} decodes {t} to yield {w}.
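Here is a toy Python sketch of the bookkeeping in this paragraph; the block length and tolerance are arbitrary choices, and the search for the shift {t}, the part Voronin's theorem guarantees but gives no feasible way to perform, is simply omitted.

```python
# Toy sketch of the Zeta Code bookkeeping: blocks of b bits are paired with
# lattice values v_u, and an approximate reading of those values is decoded
# by rounding back to the nearest codeword.  In the scheme of the text one
# would keep only codewords spaced more than 2*epsilon apart; here the toy
# "noise" is simply kept below half the codeword spacing.
import math

def make_codebook(b, epsilon):
    """Map each b-bit block to a distinct lattice value; needs |V| >= 2^b."""
    E = math.ceil(2 / epsilon)
    coords = [j / E + 1 / (2 * E) for j in range(-E * E, E * E)]
    values = [complex(x, y) for x in coords for y in coords]
    assert len(values) >= 2 ** b, "lattice too coarse for this block length"
    return {format(m, "0%db" % b): values[m] for m in range(2 ** b)}

def encode(w, b, epsilon):
    book = make_codebook(b, epsilon)
    blocks = [w[i:i + b].ljust(b, "0") for i in range(0, len(w), b)]
    return [book[blk] for blk in blocks]

def decode(readings, b, epsilon):
    book = make_codebook(b, epsilon)
    items = list(book.items())
    return "".join(min(items, key=lambda kv: abs(kv[1] - v))[0] for v in readings)

vals = encode("1011001110", b=2, epsilon=0.5)
readings = [v + complex(0.05, -0.05) for v in vals]   # stand-in for zeta(u + it)
print(decode(readings, b=2, epsilon=0.5))             # prints 1011001110
```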

It is amazing enough that we can do this for just one row, one-dimensionally. That we can do this in two dimensions—say encoding a larger {w} as a matrix and mapping blocks inside {w} to single values {v_u}—is even more amazing. All of this gets encoded by a single numerical value {t}. Let’s call this the Zeta Code.

Is the Zeta Code useful? We cannot expect {t} to be a simple value—it must have Kolmogorov complexity at least as great as the {w} it encodes. But this already suggests a connection to complexity theory. The code’s performance is so good that the working {t}'s have positive limit density. That is, for any {f} and {U}:

\displaystyle (\forall \epsilon > 0)(\exists \gamma > 0)(\forall^{\infty}T)\ \frac{1}{T}\mu(\{t \in [0,T]: (\forall z \in U)\ |\zeta(z+it) - f(z)| \leq \epsilon\}) \geq \gamma,

where {\mu} is Lebesgue measure. We can define {\gamma = \gamma_w} in terms of {w} alone by maximizing over {b,\epsilon,\delta} satisfying the above constraints. Then {\gamma_w} can be called the “zeta-density” of the string {w}. What might be its significance?
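One can at least get a feel for such densities empirically at very generous tolerances; the sketch below, again assuming mpmath, counts the fraction of sampled shifts that work for an arbitrary target on a small disk. It is only a crude stand-in for the Lebesgue-measure density in the display.

```python
# Crude empirical stand-in for the density above, assuming mpmath: the
# fraction of sampled shifts t <= T for which zeta(z + it) stays within a
# very generous tolerance of f(z) on a few points of a small disk at 3/4.
from mpmath import mp, zeta, mpc, exp

mp.dps = 15

def empirical_density(f, T, eps, step=0.5, r=0.05, k=8):
    pts = [mpc(0.75, 0)] + [mpc(0.75 + r * mp.cos(2 * mp.pi * j / k),
                                r * mp.sin(2 * mp.pi * j / k)) for j in range(k)]
    ts = [10 + i * step for i in range(int((T - 10) / step))]
    good = sum(1 for t in ts
               if max(abs(zeta(z + mpc(0, t)) - f(z)) for z in pts) <= eps)
    return good / len(ts)

print(empirical_density(lambda z: exp(z), T=200.0, eps=1.0))
```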

Can Zeta Simulate Itself?

Another way complex analysis could be like computational complexity is self-reference. The {\zeta} function is analytic away from the unique pole at {s = 1}. It is non-zero away from the trivial zeroes on the negative {x}-axis, and away from the non-trivial zeroes, which the Riemann Hypothesis asserts all have real part {\frac{1}{2}}.

Thus Voronin’s theorem says that the {\zeta}-function on the critical strip can approximate the {\zeta}-function on disk-like regions sufficiently away from the critical line, to any degree of approximation. In that sense it is “fractal.” But can it simulate itself via the theorem within the strip, getting arbitrarily close to the critical line?

The question is equivalent to the Riemann Hypothesis itself. For suppose the hypothesis were false. Then there would be a zero {\frac{1}{2} + e_0 + i t_0} for some fixed {e_0 > 0}. Now fix {r} such that {\frac{1}{4} - e_0 < r < \frac{1}{4}}, and take {U} of width {r} and height {t_0}. This includes the zero, so the condition of Voronin’s theorem does not apply. Thus if the condition applies for {r} arbitrarily close to {\frac{1}{4}}, then Riemann must be true. The converse also holds: if Riemann is true then {\zeta} is non-vanishing on such regions arbitrarily close to the critical line, and we can apply the theorem to produce infinitely many regions where {\zeta} replicates itself within any desired precision.

This picture has been sharpened just this past summer in a paper by Johan Andersson, titled “Non universality on the critical line.” He shows that {\zeta} definitely does not have analogous universal behavior on the line itself. Thus ideas of universal simulation, which we think of as Alan Turing’s province, matter directly to the Riemann Hypothesis, which Turing himself worked on.

Open Problems

Can more be derived from this nexus of complexity and complex analysis? Can it help attack the Riemann Hypothesis itself?

I (Dick) find Voronin’s universality theorem quite surprising, and it seems to be linked—in some way—to the complexity of computing the zeta function. I think we should be able to use it to prove a lower bound on the computational complexity of computing the zeta function. I cannot prove this yet, but the fact that “all” functions are encoded should in principle yield a lower bound on the cost of evaluating the zeta function. Does anyone have an idea of how to make this precise?


[Linked paper referred to in opening paragraph and clarified that it gives a proof but doesn’t define the functions involved.]

14 Comments
  1. December 4, 2012 5:34 pm

    What does this do for physicists – can their sums over all surfaces be reduced to integrations over functionals of the Riemann zeta-function?

  2. Andy D
    December 4, 2012 9:32 pm

    Amazing theorem.

  3. fnord
    December 5, 2012 8:28 am

    Tiny errata: Your equation for the Zeta function uses “k” for the summation, but “n” in what follows.

    • rjlipton
      December 5, 2012 10:32 am

      fnord,

      Thanks for catch. We try to avoid errors, misteaks do occur—i mean mistakes do occur.

      dick

    • December 5, 2012 9:32 pm

      Thanks. Some errores are invisible :-).

  4. December 6, 2012 12:23 am

    One interpretation of the theorem is that there simply aren’t that many analytic functions otherwise $C_0$ wouldn’t be enough to parametrize (ie. $t \in R$) arbitrarily good approximations to all analytic functions.

    It’s been a while since I spent any time doing complex analysis, but I seem to remember encountering other theorems which had the same flavor, i.e. analytic functions are “rare”.

  5. Bo Waggoner
    December 6, 2012 3:02 am

    Very interesting, thanks for the post!

    Here’s an initial reaction for why this feels tricky. zeta can approximate arbitrarily well the function g(x) = Chaitin’s constant (for instance). Since we know zeta is computable, it must be the case that zeta does so only on an (uncomputable) sequence of (uncomputable?) reals.

    Similarly, zeta can solve any NP-Complete problem by mapping instances to the constant that encodes its certificates; but if the $t$ required to do so grows exponentially with the size of the input, then we haven’t learned much about the internals of the zeta function — it’s just a passive pump.

    So it seems hard to understand zeta’s computational power or limits without understanding its language — the sizes of its inputs and outputs. I think it’d be very cool to follow up on zeta density and think about how much “space” zeta has to encode all of these functions. I wonder what happens if we try to make a counting argument with the number of computable analytic functions to show that they just can’t all fit — some functions will need to be encoded exponentially (that is, the associated sequence of $t$s will need to grow exponentially).

    But I think such a result would make finding complexity limits on zeta tougher, because it would show that the zeta function “offloads” difficult computation to its input (rather than showing that zeta itself solves hard problems).

  6. December 7, 2012 4:24 pm

    I am far from expert here, but I remember hearing a talk by Gauthier on universality results for various kinds of entire function. See some seemingly related details here
    http://www.mfo.de/document/0807a/OWR_2008_06.pdf which suggest that universality should somehow be generic. (But perhaps only in the sense that a “generic” continuous function should be nowhere differentiable.)

  7. tchow8
    February 26, 2021 11:07 pm

    If I understand the history correctly, your “ain’t necessarily so” remark refers to Voronin’s 1985 paper, “On the distribution of zeros of some Dirichlet series,” where he does prove a result of the form you describe. But the function in question was not one that he originally came up with; credit for that goes to Davenport and Heilbronn in 1936. For more information, one can search for “Davenport-Heilbronn function.”

Trackbacks

  1. A “The Files are in the Computer” moment « Pink Iguana
  2. Hunting Complexity in Zeta | Gödel's Lost Letter and P=NP
  3. Modeling Reality | Gödel's Lost Letter and P=NP
  4. Riemann Hypothesis—Why So Hard? | Gödel's Lost Letter and P=NP
