Could Euler Have Solved This?
A new problem about the unit circle
Leonhard Euler needs no introduction. But let’s give him one anyway. He can be called the first truly modern polymath, one man who made foundational contributions to what are today considered widely disparate areas: graph theory, number theory, power series, complex analysis, harmonic numbers, and more. He made the first key step toward the Riemann Hypothesis, proving that

$$\sum_{n=1}^{\infty} \frac{1}{n^{s}} = \prod_{p \text{ prime}} \frac{1}{1 - p^{-s}},$$

which converts the distribution of primes, a discrete problem, into a problem of analysis. He and Nicolaus Copernicus are the only mathematicians or scientists commemorated in the Lutheran Calendar of Saints.
And of course the number $2.71828\ldots$, represented by $e$, is named for him, although it was implicitly known to John Napier and used under different names by several others. Euler may have helped his own Matthew Effect by using the letter $e$ for it.
Instead of presenting some of Euler’s work, today we will do the opposite. We will pose an open problem that we feel could have been explained to him, and ask how he would have attacked it. Could he have solved it? We will try to find out, even though we currently cannot solve it ourselves. It is a simple problem about points on the unit circle, nothing more.
Why Euler? Well, he connected the unit circle and complex analysis to five fundamental constants of mathematics in his famous formula

$$e^{i\pi} + 1 = 0.$$
The Chordal Product
Suppose we have a set $S$ of $n$ points on the unit circle. There are $n$-choose-$2$ chords, one connecting each pair of points. Define the chordal product

$$P(S) = \prod_{\{u,v\} \subseteq S} |u - v|,$$

which is just the product of the lengths of the chords. How can you maximize this product? When there are no constraints on $S$, your intuition is correct: space the points equally as far apart as possible.
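For concreteness, here is a minimal Python sketch of the quantity being maximized. The code is ours, not from the post; the function name and the choice to represent points by their angles are purely for illustration.

    import cmath
    from itertools import combinations

    def chordal_product(angles):
        # Points on the unit circle given by their angles in radians.
        pts = [cmath.exp(1j * t) for t in angles]
        prod = 1.0
        for u, v in combinations(pts, 2):
            prod *= abs(u - v)   # multiply in the length of each chord
        return prod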
Theorem 1
The maximum value of $P(S)$ is $n^{n/2}$, and is achieved by locating the points at the complex $n$-th roots of unity. This optimum is unique up to rotations.
To prove this, we show that the product of the lengths of the $n-1$ chords emanating from any one point is $n$. Taking this product over all $n$ points then makes $n^{n}$, but since each chord is double-counted, the final answer is $\sqrt{n^{n}} = n^{n/2}$.
Now consider the chords coming from the point $1$ in the complex plane. Take $\omega = e^{2\pi i/n}$ to be the principal $n$-th root of unity, so that the other points in $S$ are $\omega^{k}$ for $k = 1$ to $n-1$. Those $n-1$ points are the roots of the complex polynomial

$$p(z) = \frac{z^{n} - 1}{z - 1} = 1 + z + z^{2} + \cdots + z^{n-1},$$

so by the fundamental theorem of algebra,

$$p(z) = \prod_{k=1}^{n-1} (z - \omega^{k}).$$

Thus

$$\prod_{k=1}^{n-1} |1 - \omega^{k}| = |p(1)| = n.$$

(One can also observe that $\prod_{k=1}^{n-1} (1 - \omega^{k})$ is the derivative of $z^{n} - 1$ evaluated at $z = 1$, namely $n$.)
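Here is a quick numerical check of this fact, again our own sketch rather than anything from the post:

    import cmath

    def chords_from_one(n):
        w = cmath.exp(2j * cmath.pi / n)   # principal n-th root of unity
        prod = 1.0
        for k in range(1, n):
            prod *= abs(1 - w**k)          # chord from 1 to w^k
        return prod

    for n in (3, 5, 8, 16):
        print(n, chords_from_one(n))       # each value is very close to n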
A second proof notes that the chordal product for the $n$-th roots of unity is the absolute value of the Vandermonde determinant

$$\det W = \prod_{0 \le j < k \le n-1} (\omega^{k} - \omega^{j}).$$

This is because the DFT matrix $W$ is a Vandermonde matrix whose $(j,k)$ entry, indexing from $0$ to $n-1$, is $\omega^{jk}$. The matrix $F = \frac{1}{\sqrt{n}} W$ is unitary, meaning $F F^{*} = I$. Thus $|\det F| = 1$, so

$$P(S) = |\det W| = (\sqrt{n})^{n}\, |\det F| = n^{n/2}.$$
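One can also verify this numerically; the following sketch (ours) computes $|\det W|$ with NumPy and compares it to $n^{n/2}$:

    import numpy as np

    def dft_det_abs(n):
        j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        W = np.exp(2j * np.pi * j * k / n)   # un-normalized DFT matrix
        return abs(np.linalg.det(W))

    for n in (4, 6, 8):
        print(n, dft_det_abs(n), n ** (n / 2))   # the last two columns agree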
The New Problem
Now suppose we bar any point in $S$ from the open interval $(0, \epsilon)$ in radians. When $\epsilon > 2\pi/n$, this constraint prevents the equal-spread solution. Let $P_{\epsilon}(n)$ denote the maximum of $P(S)$ over such constrained sets $S$ of $n$ points. Thus subject to this constraint, $P_{\epsilon}(n) < n^{n/2}$.
Problem. Find the maximum value $P_{\epsilon}(n)$ of $P(S)$ when $|S| = n$ and $S$ avoids the arc $(0, \epsilon)$, and find an $S$ that achieves it.
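Here is a hedged sketch of how one can at least explore the problem numerically for small $n$: a crude local search over the allowed angles $[\epsilon, 2\pi]$ using SciPy. The code, names, and parameters are our own choices, not the authors’ program, and the search yields candidate configurations, not certified maxima.

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_chordal(thetas):
        # Negative log of the chordal product of the points e^{i*theta}.
        pts = np.exp(1j * thetas)
        d = np.abs(pts[:, None] - pts[None, :])
        iu = np.triu_indices(len(thetas), k=1)
        return -np.sum(np.log(d[iu]))

    def search(n, eps, trials=20, seed=0):
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(trials):
            x0 = rng.uniform(eps, 2 * np.pi, size=n)
            res = minimize(neg_log_chordal, x0, bounds=[(eps, 2 * np.pi)] * n)
            if best is None or res.fun < best.fun:
                best = res
        return np.sort(best.x), np.exp(-best.fun)

    angles, P = search(n=6, eps=1.0)
    print(angles, P)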
We can also ask, what is the asymptotic growth of $P_{\epsilon}(n)$ for fixed $\epsilon > 0$? Although $P_{\epsilon}(n) < n^{n/2}$, it could still have exponential growth. What may be surprising is that it does not have any “growth” at all:
Theorem 2
For any fixed $\epsilon > 0$ and all sufficiently large $n$, $P_{\epsilon}(n) < n^{-n/2}$.
Note that this means the chordal products crash to zero even faster than the reciprocal of the unconstrained answer $n^{n/2}$. “Dude—what happened to my nice exponential growth?” We believe, but haven’t proved, that in fact they approach $0$ at the fastest possible rate:
Theorem 3 (?)
For any fixed $\epsilon > 0$ there is a constant $c_{\epsilon} > 1$ such that $P_{\epsilon}(n) \le c_{\epsilon}^{-n^{2}}$ as $n \to \infty$.
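For intuition on why an $e^{-\Theta(n^{2})}$ rate is the natural candidate, note that even a sub-optimal configuration gives a lower-bound witness. The following sketch (ours, with an arbitrarily chosen $\epsilon = \pi$ for the demo) spaces the $n$ points equally on the allowed arc and prints $-\log P / n^{2}$; if the conjectured rate is right, these values should level off near a positive constant as $n$ grows. Of course this says nothing about the harder upper-bound side of the conjecture.

    import numpy as np

    def log_chordal(thetas):
        pts = np.exp(1j * thetas)
        d = np.abs(pts[:, None] - pts[None, :])
        iu = np.triu_indices(len(thetas), k=1)
        return np.sum(np.log(d[iu]))

    eps = np.pi   # a wide excluded arc, chosen only for this demo
    for n in (40, 80, 160, 320, 640):
        thetas = np.linspace(eps, 2 * np.pi, n, endpoint=False)
        print(n, -log_chordal(thetas) / n**2)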
Complexity-Theory Excursion
Now let $\epsilon$ itself vary with $n$. That is, consider $P_{\epsilon(n)}(n)$ where $\epsilon(n) \to 0$. Maurice Jansen, who obtained his PhD under me in 2006 and has since enjoyed postdocs in Aarhus, Beijing, and currently Edinburgh with Rahul Santhanam, and I worked out that if we could keep $P_{\epsilon(n)}(n) \ge 1$ whenever $\epsilon(n)$ shrinks on the order of $1/n$, then we could improve some lower bounds by Ran Raz and by Peter Bürgisser and Martin Lotz. These bounds concern arithmetical circuit families whose numerical constants are bounded in size.
We had thirty pages of work all set to go, pending the needed estimates. Unfortunately we guessed wrong—or at least I did, while Maurice proved:
Theorem 4
When $\epsilon(n) = c/n$, $P_{\epsilon(n)}(n) \to 0$, for any constant $c > 0$. However, when $\epsilon(n) = c/n^{2}$, there exists a constant $c > 0$ such that for all $n$, $P_{\epsilon(n)}(n) \ge 1$.
Likewise unfortunately, having $P_{\epsilon(n)}(n) \ge 1$ only when $\epsilon(n)$ shrinks as fast as $1/n^{2}$ was no good for our complexity-bound machinery. We have not been able to close the gap between $1$ and $2$ in the exponent $a$ of $\epsilon(n) = n^{-a}$, that is, to pin down exactly where the chordal product starts to approach $0$ faster than $n^{-n/2}$. What is so special about $1/n$ versus $1/n^{2}$ here, anyway?
What Would Euler Do?
If we had a closed-form expression for $P_{\epsilon}(n)$ then all this asymptotic mystery would be lifted. Or it might be—it would depend on how nasty the expression came out to be.
How would Euler have attacked this problem? We can make facile guesses that he would have tried to express $P(S)$ as a sum of products, since it has that form as a (Vandermonde) determinant. Or he might have used power series or infinite sums like $\sum_{k=1}^{\infty} a_{k}$, where the terms $a_{k}$ might depend on $n$ and $\epsilon$ in some hopefully-simpler manner than $P_{\epsilon}(n)$ itself. He might have treated it as a problem in analysis, regarding the constraint as first breaking the circle and then deforming it.
I have been unable to locate anything like this simple problem in the literature, and have wondered why. The closest known problem that has been suggested to me is the Thomson problem about points on spheres. This also sounds simple and turns out to be complex, even though it does not have the “missing bite” feature. That feature may be the historical cause of the problem being overlooked. People have been motivated by problems that are natural, but Nature does not arbitrarily exclude matter from some region of space. Perhaps it is only computation theory that brings “unnatural” motivations.
We have tried one thing that Euler couldn’t do: unleash computers on the problem. One can try standard heuristics of starting with a given $S$ and perturbing its points, retaining sets $S'$ that increase the chordal product. However, a student who did so reported to me that numerical precision issues quickly prevented him from getting interesting results even for modest values of $n$. He tried writing in C with the GNU MP library for arbitrary (as-needed) precision, but computation quickly became too slow. This speaks to why the problem may ultimately be difficult.
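For readers who want to experiment, here is a hedged sketch of such a perturbation heuristic in Python with mpmath for higher precision, working throughout with $\log P$ to dodge underflow. This is our own illustration, not the student’s program; the precision, step size, and schedule are arbitrary choices.

    import random
    from mpmath import mp, mpf, mpc, exp, log, fabs, pi

    mp.dps = 100   # work with 100 decimal digits

    def log_chordal(thetas):
        pts = [exp(mpc(0, t)) for t in thetas]
        s = mpf(0)
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                s += log(fabs(pts[i] - pts[j]))
        return s

    def perturb_search(n, eps, iters=2000, step=0.05, seed=1):
        random.seed(seed)
        lo, hi = float(eps), float(2 * pi)
        thetas = [mpf(random.uniform(lo, hi)) for _ in range(n)]
        best = log_chordal(thetas)
        for _ in range(iters):
            i = random.randrange(n)
            old = thetas[i]
            # Nudge one angle, clipped to the allowed arc [eps, 2*pi].
            thetas[i] = mpf(min(hi, max(lo, float(old) + random.gauss(0, step))))
            cand = log_chordal(thetas)
            if cand > best:
                best = cand          # keep the improvement
            else:
                thetas[i] = old      # revert
        return thetas, best          # best = log of the chordal product

    thetas, logP = perturb_search(n=10, eps=1.0)
    print(logP)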
The DFT: An Incredible Balancing Act?
Let’s go back to the original unconstrained problem, and ask how we might estimate the maximum of $P(S)$, without using either of the above proofs of $n^{n/2}$ growth. Here is a rough “Fermi-style” estimate: The chords go from a given point to points spread all around the circle, and so an “average” chord has length about $\sqrt{2}$ (the root-mean-square distance to a uniformly random point on the circle). Hence estimate $P(S)$ as:

$$P(S) \approx (\sqrt{2})^{\binom{n}{2}} = 2^{n(n-1)/4}.$$

Of course this is a great over-estimate, and the flaw is easy to see: While long chords have length at most $2$, short chords can have lengths approaching $0$.
This does mean, however, that the tendency of long chords to produce growth like $c^{n^{2}}$ (for some $c > 1$) is balanced by a cabal of short chords that produce shrinkage of the same order. When a slice of the circle is excluded, almost however thin, the points in $S$ are driven closer together, creating enough additional short chords to tip the balance of power to the cabal, so that shrinkage dominates all. If the circle is inflated slightly, then we can expect the positive growth tendency to prevail. Thus the actual nearly-linear growth in the exponent is an exquisitely fine balance between the two tendencies.
The effect shows even better upon taking logarithms. For large $n$ and any configuration $S$, the sum $\log P(S) = \sum_{i<j} \log |x_{i} - x_{j}|$ can be grouped into sub-expressions that contribute $\pm\tilde{\Theta}(n)$ for each of the $n$ points. Here $\tilde{\Theta}$ means that the “constant” may contain factors of $\log n$. The upshot is that when $S$ consists of the $n$-th roots of unity, for all of these contributions other than a residual $\log n$ from each point, every term $+\tilde{\Theta}(n)$ is magically canceled by a matching term $-\tilde{\Theta}(n)$. All of the large magnitudes involving long and short chords cancel, leaving a residue of just $\frac{n}{2} \log n = \log n^{n/2}$.
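The cancellation is easy to see numerically. The sketch below (ours) splits $\log P(S)$ for the $n$-th roots of unity into its positive part (long chords) and negative part (short chords); both parts grow like $n^{2}$, yet their sum is exactly $\frac{n}{2} \log n$.

    import numpy as np

    def split_log_chordal(n):
        k = np.arange(n)
        pts = np.exp(2j * np.pi * k / n)         # n-th roots of unity
        d = np.abs(pts[:, None] - pts[None, :])
        iu = np.triu_indices(n, k=1)
        logs = np.log(d[iu])
        return logs[logs > 0].sum(), logs[logs < 0].sum(), logs.sum()

    for n in (32, 64, 128, 256):
        pos, neg, total = split_log_chordal(n)
        print(n, pos, neg, total, 0.5 * n * np.log(n))   # total matches (n/2) log n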
We would be interested to hear if—and how—other literature has presented the Discrete Fourier Transform as a study in huge cancellations, or has traced this as the “cause” of numerical-instability issues that are associated to it. Are there cases in Nature described by formulas that can be said to result from huge cancellations? We speculate on this in connection with renormalization in quantum field theory. Even more speculatively, we note that “rough” estimation from quantum-field theory gives a value for the energy content of our vacuum that is more than 120 orders of magnitude higher than the measured value reflected in our tiny-but-definitely-positive cosmological constant. Is there a chance that this value reflects the built-in nature of the Fourier Transform rather than mere multiversal chance?
Open Problems
Would Euler have solved the problem of computing $P_{\epsilon}(n)$? Can you solve it? Can you at least approximate it asymptotically for values $\epsilon = \epsilon(n)$ as $n \to \infty$?
[fixed a jagged line]





This is my first solo-everything post; Dick is ill at home this week and was only able to read an earlier draft of this and suggest a few edits. He is OK—we just spoke—but e-mail to him may take a while to answer; mine is [my-lowercased-surname] [at] buffalo [point] edu. I ran the Python scripts to translate LaTeX source into WordPress’ HTML-with-LaTeX-environment, cleaning up some things manually afterwards. The way symbols drop below the baseline pertains to the WordPress environment, and I see it is worsened by having superscripts and things like tildes stacked on top. I don’t know if that can be worked around.
Trivial comment: the chordal product itself is generically the determinant of a Vandermonde matrix, not just for roots of unity.
Highly speculative and probably useless comment: do you think there may be a way to crack this nut using random matrix theory?
Thanks—I could have mentioned the general Vandermonde fact. We have indeed tried some approaches related to work by Tao et al. involving random matrix theory, but maybe haven’t gone full-bore with it.
Also there’s the whole Coulomb gas avenue, which is related and which you’ve probably tried but I didn’t think of until the update to this post.
Dramatic cancellations in quantum field theory often involve supersymmetry.
You emulated Dick’s style quite well (whether intentionally or not)… I had no idea it was a second author until I got to the comments. Thanks for a nice post!
Thank you in return. There’s actually some empirical evidence: I ran this post thru Typealyzer, and it came out “J” like other posts by Dick, whereas I’m generally a strong “P” :-). I did try to keep my sentences shorter.
It seems to me that Euler would have tried easier problems first, like what if $\epsilon=\pi$? Is the answer to this known?
Didn’t you forget to prove the upper bound in Theorem 1, that is, that one cannot do better than equal spacing? This probably follows from Hadamard inequality.
Unless my maths is spectacularly terrible, the stochastic case is quite different. E.g. choose a distribution over the circle to maximise the expectation of the chord product when you draw n points from that distribution. From Jensen’s inequality you have that the expected chord product in the epsilon=0 case is bounded below by exp(2n(n-1)/pi) and in the positive epsilon case, even preserving a sub-optimal uniform distribution you have that it’s bounded below by something proportional to exp(2n(n-1)/pi) provided epsilon’s less than 2. (It’s also a lot easier to work out the optimal distribution here than in the deliberate case, you basically just use the convolution theorem on the first order condition for the density (assuming it exists).)
Is this result obvious for some reason? It seems a bit odd that just by randomising you can switch the expected chordal product from decreasing in n to increasing in n.
Interestingly, if you look at expected log chordal products then things swap round again. In the epsilon = 0 case you then get a zero expected log chordal product for all n (and presumably a negative and declining one for epsilon > 0, though I haven’t checked.)
Oops the claims in the first paragraph above are wrong. Late night maths really is beyond me. It doesn’t have the high rate of growth I claimed, as is obvious from the fact that via Jensen’s you are just comparing it to the expected log chordal product case…
So in the stochastic case from Jensen’s you just get a zero lower bound on the growth rate, which is rather less interesting. You could probably do better.
This can be analyzed using classical potential theory in the plane. Taking the negative of the logarithm of the chordal product gives the logarithmic energy for the point set (i.e., the energy with respect to the pair potential given by r -> – log |r|). On any reasonable curve, such as the circle minus an arc, there is a unique continuous unit mass distribution that minimizes the logarithmic energy. As n tends to infinity, the optimal points will converge to that distribution, and its energy will govern the asymptotics for the n-point energy. Specifically, if you divide the energy for n points by n^2, it will converge to the limiting energy. See Section 2 of http://www.math.vanderbilt.edu/~esaff/texts/196.pdf or other papers by Ed Saff and his collaborators.
This is enough to prove Theorem 3. The chordal product behaves like e^(-(C+o(1)) n^2), where C is the logarithmic energy for the optimal (continuous) distribution, so all we have to prove is that C>0. For the entire circle we have C=0, with the uniform distribution being the unique optimizer. For fixed epsilon>0, we must have C>0, since any distribution on the circle minus an arc can also be viewed as one on the entire circle (which must be strictly worse than the uniform distribution).
The case where epsilon varies as a function of n is trickier, and I don’t know offhand how to deduce it immediately from results in the literature (although I’m far from an expert in potential theory). However, I strongly suspect that known techniques could say a lot about it.
Thank you very much! This also supplies the proof asked for by Anon 3:04pm above, which I left out from thinking I’d known and then not. I have printed off the paper and will scrutinize the methods.
The review paper linked by John Sidles below will also be useful for something else we are doing…
I don’t think you can actually deduce Theorem 1 from the potential theory approach (which gives only asymptotics), but the upper bound follows from the complex Hadamard inequality, the way Anon thought it would. Incidentally, the earliest reference for this I know is a 1918 paper by Schur (http://www.digizeitschriften.de/dms/img/?PPN=GDZPPN002364190), which gives the Hadamard inequality proof. He thanks Polya for pointing the result out to him.
Oops, I meant to specify the page in that article: bottom of page 385.
Ken, I’m glad you found useful material in Charles van Loan’s article “The Ubiquitous Kronecker product“. Our QSE Group regards van Loan’s article as being like fine Hungarian paprika … this article’s “spicy” ideas are likely to improve pretty much any line of research! 🙂
A striking example of the “paprika power” that Kronecker products possess was presented at last week’s ENC, by a young researcher named Ilya Kuprov, who with his Oxford collaborators began their recent article “Spinach–a software library for simulation of spin dynamics in large spin systems” with the amusing assertion:
As with Jane Austen’s Pride and Prejudice, the intent of the Oxford group’s first sentence is openly ironic: pretty much no-one attending the ENC believes (any more) that large-dimension quantum spin systems are infeasible to simulate; moreover this “can do” attitude is becoming prevalent in fields as diverse as quantum chemistry, condensed matter physics, synthetic biology and (more broadly) compressive sampling and image processing.
Nowadays many lines of computational research seek to evade the “curse of dimensionality” by embracing representations that are sparse and/or factored—and thus efficient in terms of storage and computation—and yet faithfully respect the symmetries and conservation laws of the parent systems. According to our engineer’s reading of van Loan’s article, it provides us with a Rosetta Stone that illuminates the Kronecker-centric algebraic and geometric concepts (including even chordal sampling theorems?) that are providing broader and stronger foundations for many modern lines of research.
Charles van Loan discusses the intimate relation between the FFT, the Perfect Shuffle, and the Kronecker product in his enjoyably thought-provoking review article The ubiquitous Kronecker product.
One wonders whether the problem of computing chordal products can be associated to the Nyquist–Shannon sampling theorem? That is, suppose our sampling of a bandwidth-limited function almost satisfies the sampling theorem? Then is the chordal product hypothesis trying to tell us about the error bound on our reconstruction of the function?
Without knowing the answer myself, van Loan’s review provides a good entrée into the vast literature on sampling theorems … it would be mighty cool/fun/neat if a connection to sampling theory could be established.
By the way, Harry Cohn’s post (above) was terrific.
A typo: The first two products in the proof of Theorem 1 shall have n – 1 elements, otherwise they equal 0.
You are correct—it seems the upper bounds should be n-1 not n. Thank you very much. I’ll double-check that and then edit the post; I’ve been traveling and have a wonky connection.