
Human Extinction?

June 8, 2023


And some counter-arguments

Hava Siegelmann is the Provost Professor in the Manning College of Information and Computer Sciences at U.Mass. Amherst. She returned in 2019 from serving as a DARPA Program Director. Her work at DARPA included leading two AI initiatives: L2M for “Lifelong Learning Machines” and GARD for “Guaranteeing AI Robustness against Deception.”

Today we discuss whether we need measures to guarantee human robustness against AI.

Siegelmann was awarded the Meritorious Public Service Medal, a rare high honor from the US Department of Defense. Her dean at U.Mass., Laura Haas, stated in a release, “I am extremely proud of Hava’s service to DARPA and the nation. Her work at DARPA has helped to advance AI for us all.”

One thing that catches our interest, in line with another recent post, is that her applied work jumped off from a mainstream topic in theory. Well, one perhaps seen as off the mainstream: that of “super-Turing” machines. Let’s discuss that first before coming to AI.

Super-Turing

We who work in polynomial-based complexity often feel that undecidable languages and other aspects of recursion theory are walled off in a different area of theory. Part of the shock of the {\mathsf{MIP^* = RE}} result was breaking down this wall. See this great post by coauthor Henry Yuen for more aspects.

The same feeling goes even more for hypercomputing models, defined as able to compute functions that are not Turing-computable. Our own {\mathsf{MIP^* = RE}} post includes a story of how David Deutsch in the mid-1980s originally believed that quantum computers could solve the Halting Problem in finite time.

Yet many of us have done real work with a hypercomputing model so broad that it can recognize uncountably many languages. The model’s subtle power arguably poses the most trenchant barrier to proving {\mathsf{P \neq NP}}. We refer, of course, to the model of nonuniform polynomial-size circuit families and its associated complexity class, {\mathsf{P/poly}}.
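To see how broad the model is, note that every unary language {L \subseteq \{1\}^*} belongs to {\mathsf{P/poly}}: the circuit for inputs of length {n} can simply hard-wire the single bit saying whether {1^n \in L}. Since there are uncountably many unary languages, including undecidable ones such as a unary encoding of the Halting Problem, the class contains languages that no Turing machine can decide.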

Indeed, poly-size circuits are the basis of Siegelmann’s celebrated 1995 paper in Science titled “Computation beyond the Turing Limit,” and of a full 1996 follow-up in Theoretical Computer Science titled “The Simple Dynamics of Super Turing Theories.” One point is that individual circuits are finite objects that can be manipulated, as likewise are finite neural networks. The analog recurrent neural networks (ARNNs) used by Siegelmann are allowed real-number coefficients. They in turn are related to a class of dynamical systems with simply-specified rules built around analog shift (AS) maps that obey a finite-dependence or finite-effect condition. These models define complexity classes {\mathsf{ARNN}[s(n)]} and {\mathsf{AS}[s(n)]} in the same manner as when {s(n)} means Boolean circuit size. The main theorem is:

Theorem 1 For any function {s(n) \geq n}, we have {\mathsf{ARNN}[s(n)] \subseteq \mathsf{AS}[s(n)^{O(1)}]} and {\mathsf{AS}[s(n)] \subseteq \mathsf{ARNN}[s(n)^{O(1)}]}. In particular, say restricted to languages over a binary alphabet,

\displaystyle \mathsf{ARNN}[n^{O(1)}] = \mathsf{AS}[n^{O(1)}] = \mathsf{P/poly}.
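As a toy illustration of where the extra power comes from, here is a minimal Python sketch, not Siegelmann’s actual construction, of the key mechanism: a single real-valued weight whose binary expansion serves as an infinite advice string, read out one bit per input length by the doubling map {x \mapsto 2x \bmod 1}, a simple analog of the shift dynamics that the AS maps formalize.

from fractions import Fraction

def advice_bit(w: Fraction, n: int) -> int:
    """Return the n-th bit (1-indexed) of the binary expansion of w in [0, 1)."""
    x = w
    bit = 0
    for _ in range(n):
        x *= 2              # doubling map: shift the binary expansion left by one
        bit = int(x >= 1)   # the bit that just crossed the binary point
        if bit:
            x -= 1          # ...and reduce mod 1
    return bit

# Example: w = 0.10110... in binary encodes the advice sequence 1, 0, 1, 1, 0, ...
w = Fraction(1, 2) + Fraction(1, 8) + Fraction(1, 16)
print([advice_bit(w, n) for n in range(1, 6)])   # prints [1, 0, 1, 1, 0]

A network carrying such a weight can let its acceptance behavior on inputs of length {n} depend on advice bit {n}. With a rational weight, as in this exactly-computable demo, the advice is eventually periodic and nothing super-Turing happens; with an arbitrary real weight the advice can be any sequence at all, which is precisely the nonuniformity behind {\mathsf{ARNN}[n^{O(1)}] = \mathsf{P/poly}}.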

 

A 2013 blog post by George Zarkadakis picks up the thread of how this correspondence fosters the notion of machines that learn by continual adaptation: the “lifelong learning machines.” What Siegelmann accomplished with her further work culminating at DARPA was to demonstrate that these “super-Turing” ideas can be rendered into real applications.

The View From Inside

The June 2020 PR Newswire item on Siegelmann’s award has some passages on applications that were at least inspired by her mode of approach (we’ve added bullets for clarity):

DARPA points out Siegelmann’s ‘exceptionally productive’ term included

  • developing a system that intelligently administers insulin and dextrose to maintain safe glucose levels for diabetics and critical care patients;

  • sensors to identify dangerous chemicals from a safe distance;

  • collaborative, secure learning platforms that allow unaffiliated groups to work synergistically without revealing sensitive data; and

  • reverse engineering methods to identify cyber-attacks, secure the system, and find the attacker.

And this about Machine Learning (ML):

Illustrating the difference between current AI and new L2M systems, Siegelmann stated, “Self-driving cars represent a pinnacle in state-of-the-art computation—demonstrating how far current technology can take us using increasingly clever programming. However, even these systems fail when encountering circumstances outside their training…” [Whereas], L2M systems represent “a fundamental change in ML,” she said, “L2M systems learn; they apply experience and adapt to new situations; instead of failing, they become better, the more they experience.”

And this:

“We made real progress, demonstrated actual learning – something never done before … L2M improvements are already being incorporated into real-world systems; in five years, AI systems will be mainly of the L2M variety or incorporate L2M components. But it is very hard,” she adds, “for a machine to learn actively and there is still much to be done.”

Note that “in five years” meant by 2025. We are over halfway there, and the headline-making ChatGPT, DALL-E, and other models happened last year.

The View From Other Insiders

It does not take much experience of dystopian fiction, in book or movie form, to imagine sinister plot twists on the above items:

  • The medical system tasked with inferring safe glucose levels discovers circumstances outside its training that enable it to plant chemical time bombs that can be used to control the patients, who include high government officials…

  • The collaborative platform that admits unaffiliated groups reverse-engineers methods to identify cyber-attackers into ones that admit them, then secures the system to find the original defenders and hunt them down…

  • Self-driving cars equipped with manual override learn that the manual operators are idiots (which we are) and … we get a remake of Alfred Hitchcock’s The Birds titled The Cars.

Are we being unfair and far-fetched? Perhaps so in these cases. But here are two “real-life AI risks” postulated in a statement by the AI analytics company Tableau:

If companies rely too much on AI predictions for when maintenance will be done without other checks, it could lead to machinery malfunctions that injure workers. Models used in healthcare could cause misdiagnoses.

And a “hypothetical risk”:

[A]n AI system tasked with … helping to rebuild an endangered marine creature’s ecosystem [could] decide that other parts of the ecosystem are unimportant and destroy their habitats. And it could also view human intervention to fix or prevent this as a threat to its goal.

Last week, an open letter signed by numerous AI luminaries made a simple statement that went all the way to the risk of human extinction, not just bungling a coral reef:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Dan Hendrycks, director of the Center for AI Safety, stated further in his May 30 Twitter thread releasing the letter:

“[T]here are many ‘important and urgent risks from AI,’ not just the risk of extinction; for example, systemic bias, misinformation, malicious use, cyberattacks, and weaponization.”

An accompanying NPR story also quotes Geoffrey Hinton, first on the list of the letter’s signatories, to the effect that AI programs are on track to outperform their creators sooner than anyone anticipated:

“I thought for a long time that we were, like, 30 to 50 years away from that. … Now, I think we may be much closer, maybe only five years away from that.”

That five-year horizon means 2028. The second signer is Yoshua Bengio, making two of the three researchers who won the 2018 Turing Award for their research on neural networks. The third, Yann LeCun, who leads Meta’s AI research efforts, has not signed yet.

[Image cropped from AI Builders source]

That third slot is appropriately filled by Google DeepMind CEO Demis Hassabis, who along with his university friend and the letter’s 26th signer, David Silver, gained prominence for developing AlphaGo and AlphaZero. Scott Aaronson—of course a famed complexity and quantum computing leader who is now working within OpenAI on a watermarking scheme for detecting ChatGPT usage—is a signer. Others whom Ken and I have met include Bill McKibben, Peter Norvig, David Chalmers, Bart Selman, Roman Yampolskiy, and Steve Petersen of Niagara University near Ken.

Shock und Dürrenmatt?

CNN’s story on the letter is subtitled, “Are we taking it seriously enough?” It ends by quoting Duke’s Cynthia Rudin, a star student of Ingrid Daubechies whom we recently profiled:

“Do we really need more evidence that AI’s negative impact could be as big as nuclear war?”

This calls to mind the upcoming movie about J. Robert Oppenheimer and also Friedrich Dürrenmatt’s play The Physicists, whose last scenes clash two tag lines:

“We must take back our science…” —but— “something once thought cannot be unthought.”

There is also the old book Future Shock by Alvin Toffler, which warns of “information overload” but maybe not AI peril per se.

Let us nudge “Shock” to the German word Schach meaning “chess.” The person who might feel he was most viscerally slapped down by AI is Garry Kasparov, the former world chess champion who famously lost to IBM’s Deep Blue computer in 1997. However, he had this to say in 2017:

“Machines that replace physical labour have allowed us to focus more on what makes us human: our minds. Intelligent machines will continue that process, taking over the more menial aspects of cognition and elevating our mental lives toward creativity, curiosity, beauty, and joy. These are what truly make us human, not any particular activity or skill, like swinging a hammer – or even playing chess.”

Marc Andreessen, of early Mosaic and Netscape fame, posted on Tuesday a long response to the open letter, titled “Why AI Will Save the World.” It rebuts four of the stated AI risks:

  1. Will AI Kill Us All?

  2. Will AI Ruin Our Society?

  3. Will AI Take All Our Jobs?

  4. Will AI Lead To Crippling Inequality?

It concedes as a fifth point that AI will make it easier and quicker for bad actors to do bad things. But it ends with a point that both of us have also heard at DARPA: the motive of not being surprised and subjugated by something that an adversary develops first. On that basis he advocates “Pursuing AI With Maximum Force And Speed.”

I (Ken writing this part) agree with Kasparov and Andreessen, with one further caveat that reflects the “guardrails” concern of a March open letter from the Future of Life Institute, but without the six-month “pause” it advocates. This is that communications should prompt their recipients to exercise scientific skepticism, as we’ve tried to do in our own jocular posts on (Chat)GPT.

Perhaps the most evocative word will come from the Oscar-nominated film director Bennett Miller. He has evidently revived a documentary project begun in 2016 about the debate over AI. He also opened a Manhattan exhibit of his own AI-assisted images. The New York Times included his work in a roundup of AI used to create art that, curiously, is of itself ‘borne back ceaselessly into the past.’

Open Problems

I knew some of the early greats in AI. One was Roger Schank, who was at Yale University when I arrived with my then-fresh Ph.D. We have talked about Roger previously here.

I wonder a bit what Roger would say today about the potential of AI to cause human extinction. I think he loved AI and was a great leader in all aspects of it, but perhaps he never saw it as having the potential to render humans extinct? What do you all think?

Ken adds that it might be fruitful to seek more understanding of what exactly circuit complexity had to do with all this. He notes a long article in Quanta on Tuesday that highlights the emerging role of circuit complexity in resolving issues of information and black holes. There is a hint of similarity to Siegelmann’s machine-learning mechanism in how quantum systems are said to evolve to embody greater circuit complexity.



6 Comments
  1. June 8, 2023 5:00 pm

    Apart from natural disasters the main existential threat to humanity is what it’s always been —

    #COUPITHOI
    Concentrations Of Unchecked Power In The Hands Of Idiots

  2. William Gasarch
    June 8, 2023 6:47 pm

    1) Might AI also do GOOD things for society? Clearly yes.
    2) Where will AI get the idea of getting rid of Humans? From Science Fiction Novels.

  3. David in Tokyo
    June 11, 2023 9:05 am

    They told us AI would take our jobs.
    Then a lawyer used ChatGPT – and lost his job.
    It wasn’t supposed to be like that.

    Somewhat seriously, the LLM thing is kind of odd. LLMs don’t do logic, they don’t do reasoning. They do statistical recombination of undefined tokens. That is exactly and only what they do. Sure, it’s pretty seriously amazing what comes out of them. But what comes out has no relationship to truth or reality other than a coincidental one.

    (Since they’ve got a lot of argumentative text in their training sets, they’re really good at saying things like “The case law examples I listed really are actual cases.” When they weren’t.)

    As a former student of Roger’s, I’m pretty sure Roger would agree with me that LLMs don’t rise above the level of parlor trick.

Trackbacks

  1. Independence Day 2046? | Gödel's Lost Letter and P=NP
  2. Is P=NP a Grave Matter? | Gödel's Lost Letter and P=NP
  3. Leprechauns Sue Over AI | Gödel's Lost Letter and P=NP
