The current headlong rush to use AI across industries and applications is “unhealthy” and could pose a “danger to political systems, to democracy, to the very nature of truth,” Yoshua Bengio told Canada’s C2 International business conference on 24 May.
Bengio, a computer science professor at the University of Montreal, is the founder of the Quebec Artificial Intelligence Institute and is widely considered a godfather of modern AI. In 2018, he shared the Turing Award, the computing world's equivalent of a Nobel Prize.
“We have reached a level of intelligence of these systems, with ChatGPT last November, which corresponds to essentially being able to pass for human,” he told Canada’s National Post newspaper.
Today’s AIs are powerful enough to be put to a range of odious uses, including destabilizing national political institutions, he noted.
“There’s already information suggesting that countries have been trying to influence our [elections],” he pointed out. AI gives nefarious actors tools vastly more powerful than any they have had until now, he said.
“Countries have already been using trolls to try to influence people, but behind each troll account, there’s a human. Now, if we can do the same kind of thing with a machine, then your 100 trolls can control millions of accounts,” Bengio warned.
The gears of regulation turn slowly and “the next U.S. election is just around the corner,” he emphasized.
Bengio urged regulations requiring AI developers to disclose the data used to train their systems and to monitor those systems’ output.
He has also joined other AI luminaries in proposing an international effort to fund AI’s use in solving global problems such as food access, healthcare, and the climate crisis.
“Like investments in space programs, that’s the scale where AI investment should be today to bring the benefits of AI to everyone, and not just to make a lot of money,” he told the National Post.
Bengio is among almost 32,000 computer experts, politicians, bioethicists, philosophers, and others who have signed an open letter drafted in March calling for a six-month pause in AI’s development while thinkers grapple with questions about its regulation and control.
He is also one of more than 350 AI and computer science luminaries who signed a one-sentence statement declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Critics say it’s too soon to think in apocalyptic terms.
“I think it’s a way of controlling the narrative,” Sasha Luccioni, a researcher at AI startup Hugging Face, told Bloomberg.
“We should be talking about legislation, things like the risks of bias, the ways AI is being used that is not good for society, but instead we’re focusing on hypothetical risks. It’s a bit of a hat trick.”
TRENDPOST: While well-intended, a ban on AI research, even a temporary one, would be unenforceable. Malicious actors, perhaps China or North Korea, would likely ignore it and use the interim to advance their own AI development, perhaps gaining an advantage the rest of the world would be hard-pressed to overcome.
The European Union has drafted initial AI regulations; last week, the G7 nations met in Japan to launch the “Hiroshima AI process,” a forum for debating issues related to AI’s proliferation and implications.
Regulation will evolve slowly and take years to mature into a globally coherent network of oversight.
Until that happens, the ethical use of AI will remain in the hands of companies far more skilled in technology than in loftier human concerns.
At the same time, claims that it’s too soon to think about the possibility of AI running amok are short-sighted. We need to grapple with that question now, not wait until AI has the power to wreak massive destruction before deciding how to keep it from exercising that power.
The question of when to address AI’s short-term impacts versus its long-term possibilities is not a question of either-or but of both-and.