
Butterflies And Complexity Theories

February 21, 2011


Meet the two types of complexity theory

Ray Bradbury is one of the great science fiction writers of all time. He is known for countless stories, short and long, many of which have become part of the fabric of our culture. Bradbury’s classic Fahrenheit 451 became a major motion picture starring Julie Christie and Oskar Werner; many of his other stories are also the basis of television shows and films.

Today I plan on talking about the other meaning of complexity theory, not computational complexity. It is related directly to one of Bradbury’s stories.

The story is his short story “A Sound of Thunder.” It was published in Collier’s magazine in 1952 and has been republished many times since. As far as I can tell it introduced the name, the butterfly effect, that has been used to describe chaotic behavior.

Collier’s magazine was published from 1888 to 1957: the last issue was dated January 4, 1957. It is gone now. With all the changes occurring in publishing today, I wonder if we will have to explain that there once was a magazine called “Time.” Or, even worse, whether we will have to explain what a “magazine” was.

Complexity

The word “complexity” means at least two things in mathematics. The one used here, in GLL, is the cost of a computation: a problem has high complexity if the amount of some resource needed to solve it is large. The {\mathsf{SAT}} problem is a classic example, as are the Traveling Salesman Problem and integer factoring—all are believed to be problems of high complexity.

The one used elsewhere concerns the behavior of a system: a system has high complexity if its behavior—its evolution over time—is extremely difficult to predict. The weather is a classic example, as are turbulent airflow and social systems—all are believed to be systems of high complexity.

In order to tell them apart, the first is usually called computational complexity and the latter just complexity. I wish we had the term all to ourselves, but unfortunately we have to share “complexity” with others. The term by itself is so neat and descriptive. Oh well.

Chaos—An Informal Definition

One of the fundamental concepts in complexity of behavior theory (CBT) is the notion of chaos. This notion is traceable back to Henri Poincaré’s famous prize-winning paper on the three-body problem. Informally, a system that evolves is chaotic if it is unpredictable: the behavior must be sensitive to the initial conditions—small changes should cause large changes in future behavior. Thus a mapping from a vector {v} to {Uv}, where {U} is a unitary matrix, will not be chaotic: if {v} is replaced by {v + \delta}, then

\displaystyle  v + \delta \rightarrow Uv + U\delta \rightarrow U^2 v + U^2 \delta \rightarrow \dots

As long as {\delta} is small, the future state will not be far from the unperturbed state, since

\displaystyle  U^t \delta

does not grow in size as {t \rightarrow \infty}. Note that the formal definition of chaotic behavior usually includes additional conditions.
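
To make the contrast concrete, here is a small numerical sketch in Python. The rotation matrix below is a real unitary (orthogonal) matrix, and the logistic map {x \mapsto 4x(1-x)} is my choice of a standard chaotic example; the constants are purely illustrative.

import numpy as np

# A rotation matrix U is unitary over the reals, so ||U^t delta|| = ||delta||:
# a tiny perturbation of the starting vector never grows under iteration.
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
v = np.array([1.0, 0.0])
w = v + np.array([1e-10, 0.0])            # perturbed start
for _ in range(1000):
    v, w = U @ v, U @ w
print("unitary map, gap after 1000 steps:", np.linalg.norm(w - v))  # still ~1e-10

# The logistic map x -> 4x(1-x) is chaotic: the gap roughly doubles each step.
x, y = 0.3, 0.3 + 1e-10
for _ in range(60):
    x, y = 4*x*(1-x), 4*y*(1-y)
print("logistic map, gap after 60 steps:", abs(y - x))  # typically order 1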

In “A Sound of Thunder” Bradbury introduced the notion of the butterfly effect. The short version of the story is simple: time travelers go back to the days of the dinosaurs, and when they return to the present, things are almost the same but not quite. They eventually discover that one of the travelers has a dead butterfly on his boot—the slight change to the past somehow rippled and changed the future. Since his story there have been several other stories and films based on exactly this simple notion. Clearly, in a dramatic way Bradbury is saying that the future is chaotic: even the removal of a single butterfly could somehow change the future.

Later Edward Lorenz, one of the founders of CBT, was invited to give a talk at the AAAS meeting in 1972. Since he did not supply a title in time, the session chair created one:

Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?

Indeed, does it?

Chaos—A Provable Example

The story of the term chaos as applied by CBT is itself a bit chaotic—perhaps that is to be expected. It was one of those cases where a beautiful paper proved a special case, then it was discovered that an earlier paper had proved the general case. Luckily there is plenty of credit to share.

Let’s look at it in the reverse of the order in which it happened, as seen by an observer in the West. Tien-Yien Li and James Yorke wrote an engaging account of a beautiful theorem in their 1975 paper in the American Mathematical Monthly titled “Period Three Implies Chaos.” They proved:

Theorem: Let {F:I \rightarrow I} where {I} is an interval and {F} is a continuous function. If {F} has a periodic point of period {3}, then it has periodic points of all periods.

Note that a periodic point of period {k} is a point {x_0} so that

\displaystyle  \underbrace{F(F(\dots (x_0) \dots ))}_{k \text{ copies of } F} = x_0.

I believe they were also the first to use “chaos” in a technical way—now there are much more formal notions. Keith Burns and Boris Hasselblatt said in a paper on chaos:

This theorem amazed the mathematical community.

Burns and Hasselblatt further tell the story of how Aleksandr Sharkovsky’s work became known in the West: some time after the publication of their paper, Yorke attended a conference in East Berlin, during which a Ukrainian participant approached him. Although they had no language in common, Sharkovsky (for it was he) managed to convey that, unbeknownst to Li and Yorke (and most of Western mathematics), he had proved his results about periodic points of interval mappings well before.

Indeed in 1964, Sharkovsky had introduced the following ordering on the positive integers:

\displaystyle  \begin{array}{c}  3 \prec 5 \prec 7 \prec 9 \prec 11 \prec \dots \prec 2\cdot 3 \prec 2\cdot 5 \prec 2\cdot 7 \prec \dots \prec \\ 2^2\cdot 3 \prec 2^2\cdot 5 \prec \dots \prec 2^3\cdot 3 \prec 2^3\cdot 5 \prec \dots \prec 2^3 \prec 2^2 \prec 2 \prec 1. \end{array}

He then proved:

Theorem: Let {F:I \rightarrow I} where {I} is an interval and {F} is a continuous function. If {F} has a periodic point of period {k}, then it also has a periodic point of period {l} for every {l} such that {k \prec l} in this ordering.

This includes as a special case the great result of Li and Yorke. Thus if {F} has a periodic point of period {5}, it has points of every period except possibly {3}. The importance of this result is that it shows that simple maps can have very complex behavior: a map with many different periodic points, of many different periods, behaves in a chaotic way.
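
Here is a minimal numerical sketch of the phenomenon, assuming the logistic map {F(x) = rx(1-x)} at {r = 3.835}, a parameter inside its period-3 window; the map and the parameter are my illustration, not part of either theorem. Fixed points of {F^k} are located by a sign-change scan of {F^k(x) - x}:

import numpy as np

r = 3.835                                # inside the period-3 window (assumption)

def F(x):
    return r * x * (1 - x)

def iterate(x, k):
    """Return F^k(x); works elementwise on numpy arrays."""
    for _ in range(k):
        x = F(x)
    return x

def fixed_points_of_Fk(k, n=20001):
    """Approximate the fixed points of F^k on [0,1] via a sign-change scan."""
    xs = np.linspace(0.0, 1.0, n)
    g = iterate(xs, k) - xs
    return xs[:-1][np.sign(g[:-1]) != np.sign(g[1:])]

for k in (1, 2, 3, 5):
    print(k, len(fixed_points_of_Fk(k)))
# F itself has 2 fixed points, while F^3 has 8: the extra 6 form two genuine
# 3-cycles, so Sharkovsky's theorem forces periodic points of every period.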

This happens in mathematics more than I would have guessed: a special case is proved after the general case has been proved.

What Is The Complexity of Complexity?

Mark Braverman worked on CBT for his Ph.D. thesis. Since then he has gone in new directions and solved some terrific problems. One of the questions that we have discussed is: what is really the complexity of predicting where a chaotic map will be? Let me explain. Suppose that {f(x)} is a map that exhibits chaotic behavior. Then it should be the case that predicting where {x_0} will be after {n} applications of {f} is hard. But I do not see why this is the case, and I know of no complexity theorem—in our sense—that proves this. For example, can we prove:

Theorem? Let {f} map {z} to {az^2 + bz + c} over the complex numbers, where {a,b,c} are constants. Start at some point {z_0}, and define

\displaystyle  z_k = f(z_{k-1}),

for {k \ge 1}. Any algorithm that can determine a {y} so that

\displaystyle  | y - z_n | < \epsilon

must take computational time {X(n,\epsilon)}.

What is the best we can do for {X}? Just because the mapping is chaotic does not imply that approximately locating the point is computationally hard. I asked Mark this exact question, and it appears to be conventional wisdom that it must be hard. But that is not a proof. Even if it is hard, what is the best upper bound? And what are the special cases that are easy?
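
On the upper-bound side, the naive algorithm is simply to iterate with enough working precision. Here is a sketch in Python with mpmath, using the special case {a = 1, b = 0, c = -2}: the map {f(z) = z^2 - 2} on {[-2,2]} is conjugate to the full shift and has Lyapunov exponent {\ln 2}, so each step can amplify an error by about {2}; the starting point {0.4} is arbitrary.

from mpmath import mp, mpf

def orbit_end(z0, n, bits):
    """Compute z_n for f(z) = z*z - 2, carrying `bits` of working precision."""
    mp.prec = bits
    z = mpf(z0)              # z0 is a Python float, so this conversion is exact
    for _ in range(n):
        z = z*z - 2
    return z

n = 200
ref  = orbit_end(0.4, n, 2*n + 64)   # high-precision reference value
low  = orbit_end(0.4, n, 53)         # double precision: about 150 bits short
high = orbit_end(0.4, n, n + 64)     # roughly one guard bit per step, plus slack
print("error at 53 bits:   ", abs(low - ref))    # order 1: all accuracy lost
print("error at n+64 bits: ", abs(high - ref))   # around 2^{-64}

So for this {f}, carrying about {n + \log_2(1/\epsilon)} bits suffices, and {X(n,\epsilon)} is at most polynomial in {n} and {\log(1/\epsilon)}; nothing here resembles a lower bound.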

As an aside, we can also mention a paper by Sam Buss, Christos Papadimitriou, and John Tsitsiklis, titled “On the predictability of coupled automata: an allegory about chaos.” It appeared in FOCS 1991 and was then published in the journal Complex Systems, thus getting the attention of both kinds of complexity people. The paper is about sets of {N} identical finite automata that are coupled in the sense that the next 0-or-1 input for all of them depends on a first-order expressible property of their current states. The problem is, given an {N}-tuple of starting states and a time {t} in binary, to predict the states after {t} steps. They show a sharp dichotomy: the problem is in {\mathsf{P}} or is {\mathsf{PSPACE}}-complete, according to whether the property is invariant under permutations of the (names of the) states. Thus again, the complexity of complexity is open.
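
To illustrate the easy side of the dichotomy, here is a toy sketch; the three-state automaton and its transition function are entirely my own instance, not the paper’s. The shared bit is the symmetric first-order property “some automaton is in state 0,” which depends only on the multiset of states. Tracking state counts gives at most {(N+1)^{|Q|}} configurations, so the evolution cycles within polynomially many steps and a binary-encoded {t} can be handled by cycle detection:

from collections import Counter

def delta(q, b):
    """Toy transition function of the identical automata (my assumption)."""
    return (q + 1) % 3 if b else (2 * q) % 3

def shared_bit(counts):
    """Symmetric first-order property: 'there exists an automaton in state 0'."""
    return int(counts.get(0, 0) > 0)

def step(counts):
    b = shared_bit(counts)
    new = Counter()
    for q, n in counts.items():
        new[delta(q, b)] += n
    return new

def predict(counts, t):
    """Configuration after t steps; t may be astronomical, since the count
    vector must repeat within at most (N+1)^3 steps here."""
    seen, trace = {}, []
    i = 0
    while i < t:
        key = tuple(sorted(counts.items()))
        if key in seen:                          # cycle found: fast-forward
            start = seen[key]
            period = i - start
            return trace[start + (t - start) % period]
        seen[key] = i
        trace.append(Counter(counts))
        counts = step(counts)
        i += 1
    return counts

print(predict(Counter({0: 5, 1: 3, 2: 2}), 10**30))   # N = 10 automata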

Open Problems

I have raised some open questions already. I just want to announce that this post is number {256}. We will not hit another power of {2} for some time. Wikipedia points out that {256} has many cool properties, which prompts us to mention:

  1. Ryan Williams’ hometown area code in Alabama.
  2. The number of NFL regular season football games.
  3. The number used by short track speed skating Olympian Apolo Ohno.

21 Comments
  1. February 21, 2011 10:54 am

    Very interesting post (and I’ll be coming back, I stumbled across your blog poking around for Watson-related stuff). I was not aware of Sharkovsky’s Theorem, and your description of the ordering of the positive integers left me wondering, “Why is that particular ordering useful, and where are all the powers of 2?” I did some digging, and found a paper that offers a “natural, direct” proof at http://math.arizona.edu/~dwang/BurnsHasselblattRevised-1.pdf. I’ll have to look this over tonight…

    Thanks for pointing me to cool ideas!

  2. Cristopher Moore
    February 21, 2011 11:27 am

    Consider this map on the unit interval: f(x) = 2x mod 1. It is chaotic, with a Lyapunov exponent of ln 2, unstable periodic orbits of all orders, etc. But when x is expressed in binary, it’s just the shift map, 0.x1 x2 x3… -> 0.x2 x3…, making it computationally trivial.
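
    (A small Python sketch of this observation, with a rational starting point so the arithmetic is exact; for f(x) = 2x mod 1 the t-th iterate is just (2^t x) mod 1, one modular exponentiation rather than t map steps:)

    from fractions import Fraction

    def doubling_iterate(x0: Fraction, t: int) -> Fraction:
        """t-th iterate of f(x) = 2x mod 1: shift the binary expansion left
        by t bits, computed as (2^t * p mod q)/q in O(log t) multiplications."""
        p, q = x0.numerator, x0.denominator
        return Fraction(pow(2, t, q) * p % q, q)

    print(doubling_iterate(Fraction(1, 7), 10**18))   # 10^18 steps, instantly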

    Examples like this have convinced me that chaos and computational complexity are not particularly related: chaos is mainly about the amount of information, i.e., the number of bits, that you need to predict the future t steps in advance (which here grows as t). Computation is what you do with this information, and some chaotic systems don’t do much with it.

    Heuristically, systems that are computationally complex tend to be less chaotic. My Ph.D. thesis was about simulating Turing machines with maps and flows in two and three dimensions, where the digit sequence shifts left or right depending on the motion of the TM’s head. (This is all pretty trivial, but I didn’t know much about CS at the time, so it made a splash in the physics community in 1990.) A chaotic system then corresponds to a TM with an overall drift in its motion, moving Omega(t) steps away from its initial position after t steps. But a TM that is doing a complicated computation typically moves away from its initial position much more slowly than that, since it revisits the same places on its tape many times. In dynamical systems terms, if it moves o(t) steps away after t steps, then its Lyapunov exponent is zero — it is computationally complex, but not chaotic.

    • Robert
      February 21, 2011 1:02 pm

      I missed your comment before I commented. But wow, you did what? Thanks for that inspiring idea!

  3. Anonymous
    February 21, 2011 12:11 pm

    It is great that Wikipedia has articles for all natural numbers up through 217. Then it skips to 220 and 223 (a prime). Anyone want to fill the gap?

  4. Robert
    February 21, 2011 12:57 pm

    I cannot follow your question. A simplistic example of a chaotic system is the tent map, which formalizes simple kneading. The problem of predicting z_i is *exactly* the problem of getting z_0 with high precision.
    If you flip the downward line of the tent to start at 0 and go up to 1, the map reduces exactly to the left bitshift operation (while cropping non-fractional digits).
    I cannot see what this would have to do with computational complexity. Would someone care to explain?

  5. anonymous
    February 21, 2011 1:23 pm

    Maybe not relevant for the core of the blog, but almost all orbit types actually imply period 3… see “Almost all orbit types imply period-3” (Erik Lundberg). I think he identifies some cases in which such a statement holds and then counts at least how many times that case happens among {1,…,n} orbits.

  6. February 21, 2011 2:57 pm

    2^8 = 256 is the number of values of a byte, so some basic types in Standard Pascal (and other programming languages) have 256 values ;))

  7. bza
    February 22, 2011 5:23 am

    Just a pedantic point: In giving the ordering of the integers used in Sharkovsky’s theorem, you left out the powers of 2. They should appear at the end, in descending order (so that under this ordering the integers are totally ordered, ending with 4, then 2, then 1).

    • rjlipton
      February 22, 2011 12:16 pm

      No, I forgot them…thanks

  8. February 22, 2011 7:26 am

    Hmmm … this post asks in essence: why is it tough to show that (classical) chaotic iterated maps are computationally hard?

    Well … an engineering intuition is that these maps are not computationally hard … in essence because their “smoothed” quantum generalization is not hard:

    Theorem?  Let f_\psi be an unraveled Lindbladian flow f_\psi\colon\psi_k\to\psi_{k+1} for vectors \psi on a Hilbert space H. Let f_\rho be the induced map on density matrices \rho_k. Then any algorithm that can determine an estimate \rho'_k so that |\rho_k-\rho'_k| < \epsilon must take computational time \dim H.

    Here the point is that (empirically at least) for a class of Lindbladian flows that is sufficiently general as to encompass all the quantum flows one meets in nature or in the laboratory, this theorem is not true: the intuition is that (generically noisy) Lindbladian flows generically compress quantum trajectories onto low-dimensional submanifolds, such that the computational complexity of computing trajectories as integral curves, to finite accuracy, is generically very much smaller than \dim H.

    Pulling this “Lindbladian compression” intuition back to the classical domain suggests that even if computing individual trajectories is computationally hard, the properties of trajectory ensembles may be computationally easy to estimate—which in practice is what one mainly wants to know.

    In other words, perhaps this is a class of questions in which, from a trajectory-unravelling point-of-view, upper-bound complexity theorems, and even concrete algorithms, are more natural and useful than lower-bound theorems? It seems to me that a great deal of practical work (and possibly fundamental work too) might be accomplished along these lines.

  9. proaonuiq
    February 22, 2011 11:15 am

    Comment prologue: 256 th ? 2^8 ? Digital ! I could not resist…

    Dissertation: Just to remind beginners that what you call CBT is just what is usually called Dynamical Systems (DS from here on). As is well known, dynamical systems can be studied as models of real systems, physical (that was the motivation of Poincaré; a discussion about complexity issues here: http://physics.stackexchange.com/questions/252/the-many-body-problem ), etc, etc and etc, or just as mathematical objects, of interest independently of their realization. An important divide in the world of dynamical systems is between continuous and discrete.

    Regarding continuous DS, a good introduction is http://www.ams.jhu.edu/~ers/book.pdf, which covers some algorithmic issues but not the related computational complexity ones. Those have been studied by Braverman (in fact some of his work could be titled “The complexity of complex complex systems”), and others.

    Regarding complexity issues of discrete DS, a good updated source (more recent than Papadimitriou’s) is this PhD thesis, which however focuses more on the counting complexity of discrete DSs: http://osl.cs.uiuc.edu/docs/PhD-thesis-TR/PhD-thesis-TR-nov.pdf.

    According to C. Moore’s comment above, chaos and computational hardness might be “orthogonal”. From the same author, a short and nice summary of the relations between real physical dynamical systems and abstract dynamical systems, bridged by computational complexity issues, is this paper: http://arxiv.org/PS_cache/cond-mat/pdf/0109/0109010v1.pdf.

    Comment epilogue: Formalizability, decidability, complexity and parallelizability, these are the questions !


  10. February 22, 2011 11:49 am

    Perhaps a physics topic which is more closely linked to computational complexity than chaotic behavior is: renormalization group (RG) and phase transitions. I’d love to see a Lipton/Regan blog post on the connection between complexity theory and RG.

    • February 22, 2011 12:06 pm

      Second the motion for an RG post. It was Richard Hamming, in his American Mathematical Monthly essay “The unreasonable effectiveness of mathematics” (1980), who was among the first to plainly assert a principle of orthodoxy that most undergraduate QM texts respect:

      In quantum mechanics the quantum states are absolutely additive; they are not just a convenient linear approximation.

      On the other hand, in recent decades a great deal of progress has come via computational methods (DDMRG for example) that flout Hamming’s principle pretty vigorously … and so it is good for mathematicians, scientists, and engineers to have at least some idea of the circumstances under which Hamming’s QM orthodoxy can profitably be abandoned, and of the practical uses of the computational methods that result.

