One of the foremost AI innovators of the modern age has quit his life’s work, saying he’s concerned about the threats the technology poses to humanity.
The question is, will the world listen?
Dr. Geoffrey Hinton, a longtime computer scientist credited with co-authoring highly influential neural net research in the 1980s that is foundational to technology used in OpenAI’s ChatGPT and other systems, has quit Google, as reported by The New York Times.
The 75-year-old says he feels it’s necessary for him to speak out about what he sees as the dangers of unconstrained AI growth, including widespread job market disruption, disinformation, and other threats.
In a Times interview, Hinton said: “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”
He added: “Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.”
Hinton now joins others, including Elon Musk, who have recently spoken out publicly, calling for a halt to the development and proliferation of more sophisticated AI so that the consequences of this rapidly advancing, unprecedented technology can be examined and considered.
Over fifty thousand scientists, industry professionals, and thought leaders have signed “Pause Giant AI Experiments: An Open Letter,” issued by the advocacy group Future of Life Institute. (See “AN AI PAUSE WON’T CUT IT: REJECT SCIENTISM OR BRACE FOR A POST HUMAN FUTURE,” 4 Apr 2023.)
Google responded to Hinton’s departure by downplaying the renowned researcher’s concerns. Jeff Dean, Senior VP of Google Research and AI, commented, according to NBC News:
“Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions at Google. I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well! As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”
(“Artificial intelligence pioneer leaves Google and warns about technology’s future,” 1 May 2023.)
As AI Explodes, Concerns Also Gaining Momentum
A growing number of advocates and insiders are clearly worried that neither the large corporations salivating over new profit paradigms nor the governments bent on an AI arms race, weaponizing the technology to control both foreign and domestic “threats,” has any incentive to change the trajectory of rapidly ascendant AI.
In March 2023, Tristan Harris of the Center for Humane Technology told NBC Nightly News:
“We can have AI and research labs that’s applied to specific applications that does advance those areas. But when we’re in an arms race to deploy AI to every human being on the planet as fast as possible with as little testing as possible, that’s not an equation that’s going to end well.”
Hinton himself has previously expressed concerns about AI, even as he accepted the Turing Award in 2019, along with fellow AI pioneers Yoshua Bengio and Yann LeCun, for his considerable contributions to the field.
Almost every day new uses and innovations—which can be interpreted as promising or disturbing—are making news.
One of the latest? How about AI that can “non-invasively” interpret brain activity to produce streams of text that convey what a person is thinking?
Researchers at the University of Texas at Austin employed an fMRI scanner in conjunction with generative AI similar to OpenAI’s ChatGPT and Google’s Bard to develop the system.
Reporting on the research, which was published in the journal Nature Neuroscience, CNBC noted that the system, dubbed a semantic decoder, could help patients who have lost the ability to communicate due to certain maladies. (“Scientists develop A.I. system focused on turning peoples’ thoughts into text,” 1 May 2023.)
But privacy advocates may see troubling possible uses of such technology to extract information from political dissidents, or incriminating evidence from suspected “criminals.”
University researchers provided privacy reassurances about their particular technology:
“As brain–computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder.”
But the technical barriers that now prevent such systems from extracting thoughts from unwilling persons will almost certainly be broken.
The Trends Journal has long been sounding a clarion call regarding the existential threats to humanity posed by an unbound quest for “strong AI,” or AI singularity, terms used to describe artificial intelligence that can outperform humans in every respect.
Among the many articles we have published over the past several years:
- “THE FUTURE: MORE TECH NIGHTMARE THAN NIRVANA?” (20 Apr 2021)
- “SINGULARITY UNIVERSITY: FUELING AI ASCENDANCE” (3 Aug 2021)
- “METAVERSE: THE NEW COLLECTIVE” (14 Dec 2021)
- “MICROSOFT ANNOUNCES GLOBAL AI ‘SINGULARITY’” (1 Mar 2022)
- “AI IS LEARNING YOUR JOB” (24 May 2022)
- “DARPA WANTS TO LEAD ‘THIRD WAVE’ OF WEAPONIZED AI” (7 Jun 2022)
- “THE AI LEGISLATOR YOU DIDN’T VOTE FOR” (23 Aug 2022)
- “EVOLUTION 2031: FROM HUMANS DESIGNING MACHINES, TO MACHINES DESIGNING HUMANS” (4 Oct 2022)
- “AN AI PAUSE WON’T CUT IT: REJECT SCIENTISM OR BRACE FOR A POST HUMAN FUTURE” (4 Apr 2023)
TRENDPOST: The decisive defection of Geoffrey Hinton from the AI quest should give pause to anyone who supposes that AI will be substantially contained and limited to serving humankind.
The world currently stands closer to Armageddon than at any time in the modern era, thanks to another novel technology, unleashed 77 years ago at Hiroshima and Nagasaki.
As devastating as nuclear technology is, it does not represent something that can outthink and outperform humans, or potentially soon act autonomously in its own interests. That is the specter of “strong AI,” and there are currently no hard limits or practical bounds barring the creation of this technology, any more than there is effective regulation preventing the unbound introduction of genetically edited EVERYTHING into the world.
Hinton should be welcomed by those standing on the side of circumspection, humility, and limits to technocratic pursuits of power beyond the naturally human, in what is a quickly escalating, real-world transhuman war.
This includes calling him to testify before the people’s representatives, and asking him to serve on whatever commissions are deemed prudent to fully lay out the dangers of AI and find consensus on how to avoid a future no human should want—the obsolescence, “progress beyond,” or even active destruction of humanity.