
Predictions 2024

February 2, 2024


Would an AI-powered meta-prediction be smarter?

Cropped from video source

Punxsutawney Phil is our populace’s perennially percipient prognosticator. Every February 2, Groundhog Day, at dawn, he is summoned from his burrow to look around. If he sees his shadow and recoils from it, he predicts six more weeks of winter. If not, an early spring is proclaimed to be on the way.

Today we discuss some predictions for 2024.

Phil has been doing this since 1887. It must be said up front that his track record is not great. In each of the previous three years, he saw his shadow but was judged wrong by the US National Oceanic and Atmospheric Administration (NOAA). NOAA scores him only 3 out of 10 for the past decade, and various sources put Phil under 40% over all time. Part of the problem is that his choosing “shadow” almost 85% of the time has bucked the trend of a warming planet.
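As a back-of-the-envelope check on these figures, here is a minimal Python sketch. The season-by-season record is made up: the post gives only aggregate numbers, so the toy data below is chosen just to match NOAA’s 3-of-10 decade score and Phil’s heavy tilt toward “shadow”:

```python
# Illustrative only: year-by-year outcomes are not in the post, so this toy
# decade record is fabricated to match the cited aggregates (3 of 10 correct,
# "shadow" called most of the time).
phil_decade = [
    ("shadow", False), ("shadow", False), ("shadow", True),
    ("shadow", False), ("shadow", True), ("no shadow", False),
    ("shadow", False), ("shadow", True), ("shadow", False),
    ("no shadow", False),
]

# Count correct calls and how often Phil predicted "shadow".
correct = sum(ok for _, ok in phil_decade)
shadow_rate = sum(pred == "shadow" for pred, _ in phil_decade) / len(phil_decade)

print(f"decade accuracy: {correct}/{len(phil_decade)}")  # 3/10, per NOAA
print(f"'shadow' rate:   {shadow_rate:.0%}")
```

Note that always predicting “early spring” on a warming planet would have beaten this record, which is the post’s point.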

This year, he not only wised up and chose no shadow, he grokked another lesson of the 21st Century: He crowdsourced his prediction. There are other furry friends with filmy eyes:

  • Four groundhogs in Lancaster PA split their predictions 2-2.

  • Edwina of Essex NL predicted an early spring. She also projected Taylor Swift’s team to win the Super Bowl.

  • Stonewall Jackson IV, Staten Island Chuck, Holtsville Hal, and Malverne Mell all agreed with Phil and Edwina.

  • A hedgehog named Elsa, however, took the opposite view and burrowed in deeper for more winter in Cape May, NJ.

This still left a healthy 7-3 agreement with Phil. We wonder, though, whether their handlers should go whole hog and hire an AI in place of Phil & Co. AI has already proved able to outperform whole teams of human weather modelers, without having to solve a single differential equation. It shouldn’t cost too much to replace a few groundhogs.
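The crowdsourcing here amounts to a plurality vote over the ten other animals. A minimal sketch of that aggregation (the vote split is taken from the tallies reported above; the helper name is our own invention):

```python
from collections import Counter

def crowd_forecast(predictions):
    """Return the plurality prediction and the full vote tally."""
    tally = Counter(predictions)
    winner, _ = tally.most_common(1)[0]
    return winner, tally

# The ten other animals' 2024 calls as reported above: Edwina, the four who
# agreed with Phil and Edwina, and two of Lancaster's four say early spring;
# Elsa and the other two Lancaster groundhogs dissent.
votes = ["early spring"] * 7 + ["more winter"] * 3
winner, tally = crowd_forecast(votes)
print(winner, dict(tally))  # early spring {'early spring': 7, 'more winter': 3}
```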

Predictions and Meta-Predictions

There has been a plethora of articles with predictions for AI in 2024. Here are some:

We may say more later about some of these, especially the last three. But first we ask whether the divergent ideas over all the above represent a failure of vision. Surely an AI should be best at predicting AI. A model not much above (Chat)GPT-4 should be able to sift all the predictions through its tons of other data and ferret out plausibility that we data-limited souls cannot. The results could be called meta-predictions—but this kind of thing may soon simply slide under the existing ho-hum heading of “business forecasting.”
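One standard way such a meta-prediction could combine divergent forecasts, short of a full LLM, is to pool the forecasters’ probabilities in log-odds space. This is a generic aggregation technique, not anything GPT-specific, and the input probabilities below are hypothetical:

```python
import math

def pool_forecasts(probs):
    """Aggregate several probability forecasts by averaging their log-odds,
    then mapping back through the logistic function."""
    logits = [math.log(p / (1 - p)) for p in probs]
    mean_logit = sum(logits) / len(logits)
    return 1 / (1 + math.exp(-mean_logit))

# Hypothetical probabilities three pundits assign to the same 2024 AI claim.
consensus = pool_forecasts([0.9, 0.6, 0.7])
print(round(consensus, 2))  # 0.76
```

Log-odds pooling rewards agreement and tempers a single overconfident forecaster more gracefully than a raw average of probabilities would.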

The other meta-prediction for AI in 2024 is one we have already hinted: surprise. In fact, we at GLL not only fear but sense a surprise that none of the above specifically mentions. It goes beyond the common predictions of deepfakes and other misinformation flooding our election year zones.

AI Hacking

Here is our main prediction for AI in 2024:

A non-governmental organization will use AI to formulate and execute a cyberattack on an unprecedented scale, not only disabling but taking control of over a billion dollars worth of assets of the targeted entities.

Well, we didn’t intend to wait until Groundhog Day to say this. Dick and I discussed this when I visited him in Manhattan in mid-January, but we were not satisfied with any details we tried to brainstorm.

So this post (originally with a different intro teasing whether a British company with the Portuguese name AI Caramba!—not Spanish and not Bart Simpson—would really exist) stayed hibernating. That is, until FBI Director Christopher Wray saw shadows everywhere in testimony before Congress on Wednesday. In particular, he warned:

“Obviously, AI will enhance some of the same information warfare that we’ve seen from our foreign adversaries for quite some time.”

The article goes on to say that Wray “also noted that AI can enhance foreign adversaries’ abilities to collect personal data and feed it into disinformation and influence operations.” We would still be interested to indulge some crowdsourcing of our own, to see if our readers can come up with a plurality prediction of how such an attack might unfold.

Open Questions

What predictions do you have for AI? Some even predict a return to “AI Winter.”



4 Comments
  1. Frank Vega permalink
    February 3, 2024 5:49 am

    We predict these two breakthrough results for this year.

    1. The inequality R(N_{n+1}) < R(N_{n}) holds for all primes q_{n} (greater than some threshold).

    This fact implies that the Riemann hypothesis is true as you could see it here:

    https://www.researchgate.net/publication/377443163_The_Magic_of_Prime_Numbers

    Besides, another draft shows that this also implies that Cramér's conjecture is false. This is a paper written by well-known number theorists:

    https://arxiv.org/abs/1012.3613

    2. The Monotone Weighted Xor 2-satisfiability problem (MWX2SAT) is NP-complete and in P at the same time.

    This second fact implies that P = NP. We can see the proof here:

    https://www.researchgate.net/publication/377656601_Note_for_the_P_versus_NP_Problem

    Moreover, we have implemented the polynomial time algorithm in Python:

    https://github.com/frankvegadelgado/alma

    Finally, we have joined both results in the following single paper:

    https://www.researchgate.net/publication/377808644_Note_for_the_Millennium_Prize_Problems

    These two groundbreaking results will change the way we see the world! 🙂

  2. Javaid Aslam permalink
    February 3, 2024 4:21 pm

    So, it looks like AI can be expected to have a lot more implications, good or bad, than achieving NP == P?

  3. February 3, 2024 11:28 pm

    I’ve been predicting an AI winter for a while now. Current AI is illogical (neural nets don’t look anything like real neurons, but since they’re called neural nets, there’s a ton of research claiming that you can tell something about animal or human cognition by what neural nets do (this stuff is seriously silly, but NNs are a particular SIMD computational model that does have uses)), badly done statistics (machine learning (stupid name, but some of the people doing this aren’t idiots. Some.)), or text pattern matching that can’t (in principle due to the underlying algorithms) do reasoning, inference, or tell truth from fiction. Really. LLM output is all, by definition of the algorithms, hallucination. All. LLM processing has no way of relating the text it generates to anything in the real world, or to anything whatsoever. Any sense, reason, logic a user sees in LLM output is in that user’s head, not in the LLM’s memory. LLMs don’t do reasoning, logic, understanding. They just randomly recombine and regurgitate patterns from their training data. It’s just as stupid as the Markov chain Shakespeare generators. But on steroids.

    But I think the thing that’s going to kill AI, and kill it this year, is copyright infringement. People who go to the effort of writing (or paying for someone to write) sensible text (similarly for artwork) are going to get tired of getting ripped off, and insist that the LLM companies pay for the use of the data. At which point, LLMs become economically infeasible. As well as being stupid. But stupid doesn’t seem to be a problem nowadays. Sigh.

  4. February 8, 2024 2:51 pm

    L = NL: Space complexity theory has its biggest result, which has no impact on time complexity theory.

    L ⊊ NL: the biggest result, for space complexity theory, that has no impact on time complexity theory.

