Things We Did Not Know How to Compute
Artificial Intelligence and P=NP
Enio Moraes is the Product and Engineering Director of Semantix in São Paulo, Brazil. Semantix provides AI platforms for businesses. He recently blogged on LinkedIn about what P=NP would say about AI.
Today we pose the converse question: how do this past year’s advances in AI influence our thinking about P versus NP?
Read more…
Harvard’s View of Plagiarizing
And other views and questions to ask
Dr. Claudine Gay, the 30th president of Harvard University, was accused of plagiarism by Dr. Carol Swain, who retired from Vanderbilt University in 2017. Swain said that Gay used sections of her 1993 book and a 1997 article without proper citation.
This followed allegations by others earlier this month, culminating in a column by Ruth Marcus in yesterday’s Washington Post calling on her to resign.
Marcus does not mention Swain, but Ken covers their main academic examples below. Marcus also references the Free Beacon article linked under “month,” which has more-recent examples than the others. Update 1/2/24: Gay resigned today after new plagiarism instances surfaced—see section of our new post here.
Have I Been Cheating at Wordle?
What natural streaks say about some recent chess cheating accusations
| Sundials source |
Frank King programmed an ancestor of Wordle at Cambridge University in the late 1960s. That game, called “Bulls and Cows” or “Moo,” was popular on timeshare computers—so popular that various kinds of cheating and tampering arose, as attested by this 1972 note. Cheating at Wordle has been in the news again this month—to say nothing of new headline accusations of cheating at chess.
Today we discuss whether I have been cheating at Wordle—and how this should shape our thinking about the new allegations over winning streaks in chess.
The leading one is by the former world champion Vladimir Kramnik over a 46-game streak last month by the many-time US champion Hikaru Nakamura in blitz chess on Chess.com. Some further allegations have been brought to my attention just this morning. Replying in appropriate detail to everything will take more time than I’ve had at the end of a busy term before holiday travel. Here, now, I will only address an important principle that is often overlooked and misunderstood: the Look–Elsewhere Effect (LEE). The impasse is evident in Kramnik’s response to Chess.com’s Nov. 29 statement about the accusations:
The real logical fallacy isn’t analyzing one streak versus other streaks, it’s failing to include the non-streaks. Nakamura just yesterday passed 50,000 games in the two blitz categories alone. Another facet is the need to account for various kinds of clustering in long-sequence data, where any of those kinds might have provoked an equivalent response from you.
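The look-elsewhere point can be made quantitative with a toy model. The sketch below is my own illustration, not an analysis of any player: the per-game win probability 0.8 and the streak length 46 are assumptions chosen for concreteness. It uses dynamic programming to compute the chance that a player who wins each game independently with probability p produces at least one run of a given length somewhere in n games:

```python
def prob_of_streak(n_games, streak_len, p_win):
    """Probability of at least one run of `streak_len` consecutive wins
    in `n_games` independent games, each won with probability `p_win`."""
    # state[j] = probability of currently sitting on a j-win streak
    # without ever having completed a full run of streak_len wins
    state = [0.0] * streak_len
    state[0] = 1.0
    p_hit = 0.0
    for _ in range(n_games):
        new = [0.0] * streak_len
        for j, pr in enumerate(state):
            if pr == 0.0:
                continue
            new[0] += pr * (1.0 - p_win)   # a loss or draw resets the streak
            if j + 1 == streak_len:
                p_hit += pr * p_win        # the run is completed here
            else:
                new[j + 1] += pr * p_win
        state = new
    return p_hit

# Even a rare per-game pattern gets many chances over 50,000 games:
print(prob_of_streak(50_000, 46, 0.80))
```

Under these illustrative numbers the chance of some 46-game streak appearing somewhere is a sizable fraction, not a freak event. That is exactly the look-elsewhere correction: the streak was singled out after the fact from tens of thousands of games, so the right question is how likely any such streak was, not how likely that particular one was.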
Numbers Too Big for Our Universe
Starting from games and models that look like child’s play
Ben Brubaker is a staff writer covering computer science for Quanta. He previously covered physics as a freelance journalist, and his writing has also appeared in Scientific American, Physics Today, and elsewhere. He earned a Ph.D. in physics from Yale University.
Brubaker wrote an article last week on the vector addition system (VAS) reachability problem. A VAS is just a finite set $V$ of integer vectors of some dimension $k$. The vectors can have negative entries, but the starting vector $s$ must be non-negative. At any point, you have a current vector $u$. The transition rule is:

You can step from $u$ to $u' = u + v$ if $v \in V$ and the vector $u + v$ is non-negative.

The non-negative stipulation makes the order of adding vectors matter, so as to frustrate simple analyses of reachability. In 1975–1976, I constructed cases where the shortest path from $s$ to a given goal vector $t$ is doubly exponential in $k$ and the size of $V$. Furthermore, I showed that any exponential-space computation could translate into this kind of example. In consequence I showed that there is no way to decide whether $t$ is reachable from $s$ via $V$ in less than exponential space. This goes for any decision procedure—not just ones that try tracing all possible legal paths.
This left open whether exponential space—or allowing doubly exponential time irrespective of space—is sufficient. That question was recently resolved in a way nobody I knew expected.
Read more…
Thanks to Will Shortz
For keeping us human—in crosswords at least for now
| Cropped from Guardian source |
Will Shortz has just celebrated 30 years as the Crossword Editor of the New York Times. He remains the only person in the world whose college diploma lists the major as enigmatology, meaning the study of puzzles. He has also hosted a wordplay segment on NPR’s Weekend Edition Sunday since 1987.
Today we express our thanks to Shortz and his coterie for the proliferation of puzzles—while musing on implications for the intellectual future of humanity and non-humanity.
Read more…
Kurt Gödel In The Movies
And a Limited TV Series — all in 2023
| Vanity Fair source |
Kurt Gödel—or rather the actor James Urbaniak portraying him—appears in one brief scene in the movie Oppenheimer. He has no lines. He is named only by Albert Einstein—played by Tom Conti—regarding Gödel’s fear of being poisoned by Nazi spies.
Today we discuss how Gödel has been portrayed on the silver screen and its streaming successors.
Read more…
A Conference At TTIC
Adam Tauman Kalai is one of the speakers at the 20th Annual Conference at the Toyota Technological Institute at Chicago this November 9-10. You are welcome to register for the conference at 20th Annual Conference at TTIC.
Adam is one of the top researchers in the world. He scored a hat-trick in my book already over twenty years ago:
- He found something original to say about factoring.
- He got the paper into SODA — SODA 2003, and
- He matched the great Leonid Levin’s feat of a top-conference paper being just one page.
Fairness and Sampling
A talk by Sruthi Gorantla while visiting Georgia Tech
Sruthi Gorantla is a fourth-year PhD candidate in computer science at the Indian Institute of Science in Bangalore. Her research on computational fairness is supported by a Google PhD Fellowship and advised by Anand Louis.
Just today, Santosh Vempala announced a special talk by her for this Tuesday at Tech. The title is “Ex-Post Group Fairness and Individual Fairness in Ranking,” and the abstract is:
Fair ranking tasks, which ask to rank a set of items to maximize utility subject to satisfying group-fairness constraints, have gained significant interest in algorithmic fairness, information retrieval, and machine learning literature. Recent works identify uncertainty in the utilities of items as a primary cause of unfairness and propose randomized rankings that achieve ex-ante fairer exposure and better robustness than deterministic rankings. However, this still may not guarantee representation fairness to the groups ex-post. In this talk, we will first discuss algorithms to sample a random group-fair ranking from the distribution that satisfies a set of natural axioms for randomized group-fair rankings. Our problem formulation works even when there is implicit bias, incomplete relevance information, or only an ordinal ranking is available instead of relevance scores or utility values. Next, we will look at its application to efficiently train stochastic learning-to-rank algorithms via in-processing for ex-post fairness. Finally, we will discuss an efficient algorithm that samples rankings from an individually fair distribution while ensuring ex-post group fairness.
The talk is at 11am tomorrow (Tuesday 10/17), but does not seem to be available online.
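To make the “ex-post” notion concrete, here is a toy sampler. It is entirely my own illustration for the single-protected-group case, not Gorantla et al.’s algorithm: it draws a random ranking in which every prefix of length i contains at least floor(alpha · i) protected items, placing a protected item whenever the constraint would otherwise fail at the next slot.

```python
import math
import random

def sample_fair_ranking(protected, others, alpha, rng=random):
    """Sample a ranking of protected + others items in which every
    prefix of length i holds at least floor(alpha * i) protected items."""
    protected, others = list(protected), list(others)
    rng.shuffle(protected)
    rng.shuffle(others)
    ranking, p_used = [], 0
    n = len(protected) + len(others)
    for i in range(1, n + 1):
        need = math.floor(alpha * i)   # minimum protected in prefix i
        forced = p_used < need         # the constraint binds at this slot
        take_protected = forced or not others
        if not take_protected and protected:
            # otherwise pick a group at random, weighted by what remains
            frac = len(protected) / (len(protected) + len(others))
            take_protected = rng.random() < frac
        if take_protected:
            if not protected:
                raise ValueError("infeasible: not enough protected items")
            ranking.append(protected.pop())
            p_used += 1
        else:
            ranking.append(others.pop())
    return ranking
```

Because floor(alpha · i) grows by at most one per position when alpha ≤ 1, placing a protected item exactly when the bound binds keeps every prefix feasible whenever the totals allow it, so every sampled ranking satisfies the group constraint ex post, not merely in expectation.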
Read more…
Possible Impossibilities and Impossible Possibilities
A livestreamed talk by Yejin Choi at TTIC on Monday 10/16, 11:30am CT
|
| MacArthur Foundation source |
Yejin Choi is a professor and a MacArthur Fellow at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She is also a Distinguished Research Fellow at Oxford’s new Institute for Ethics in AI.
She is about to give a talk in the TTIC 20 Years Distinguished Lecture Series on Monday, October 16th at 11:30 AM CT. The title is the title of this post. Here is her abstract:
Generative AI has led to an unprecedented amount of global attention—both excitements and concerns, in part due to our relatively limited understanding about intelligence—both artificial and natural. In this talk, I will question if there can be possible impossibilities of large language models (i.e., the fundamental limits of transformers, if any) and the impossible possibilities of language models (i.e., seemingly impossible alternative paths beyond scale, if at all). I will then discuss the Generative AI Paradox hypothesis: for AI, at least in its current form, generation capability may often exceed understanding capability, in stark contrast to human intelligence where generation (of e.g., novels, paintings) can be substantially harder than understanding.
The talk will be available next week online: via Panopto (Livestream). This requires what seems to be a one-click-and-done registration.
Update 10/16: the talk is available for viewing at the same above link.
Read more…