The Allen Institute for AI is building its own AI as an alternative to those of OpenAI and Google.

Why bother? Because Allen’s AI is “open source”—anyone who wants to look under the hood and customize their own version can do so. 

The institute calls its approach "radical openness": "democratizing" access to source code so that the tools to build a custom AI are available to all.

This approach stands in stark contrast to that of OpenAI, Google, Microsoft, and most other developers, who keep proprietary information under wraps.

Instead of the usual “black box” systems, “we’re pushing for a glass box,” Allen CEO Ali Farhadi told The New York Times.

“Open up the whole thing and then we can talk about behavior and explain…what’s happening inside,” he said. 

Many have said that Allen is making an enormous blunder: open-source software is wide open to bad actors who might want to pervert a system's instructions or recode it to override built-in safeguards against undesirable or illegal uses. (See "AI-Made Images of Child Pornography Could Flood the Internet, Foundation Warns" in this issue.)

Meanwhile, Allen’s AI has been downloaded more than 500,000 times. The institute is using it to build an even more powerful AI that the institute expects will be available late this year or early next.

The institute's view has few other adherents so far. Meta's LLaMA AI is open source, as is Falcon, an AI built by a venture funded by the government of Abu Dhabi.

Mozilla, the nonprofit that operates the Firefox browser and Thunderbird email program, has sunk $30 million into creating its own open-source AI.

Mozilla was founded 20 years ago with a mission to keep the Internet open and usable by everyone.

“A tiny set of players, all on the West Coast of the U.S., is trying to lock down the generative AI space before it gets out the gate,” Mozilla CEO Mark Surman said in comments quoted by the NYT.

Hiding proprietary information is an invitation to hackers and other malefactors to find and exploit weaknesses and loopholes, Farhadi argues. An open-source system is the best and fastest way to find and fix problems, he believes.

“Decisions about the openness of AI systems are irreversible and will likely be among the most consequential of our time,” researcher Aviv Ovadya at the Berkman Klein Center for Internet & Society told The New York Times. He has urged that international agreements be forged to govern what aspects of AI should remain closed.

TRENDPOST: Due to issues of legal liability, most developers will keep their systems closed. 

Although open-source systems may have some guardrails built in, the simple fact of being open source makes it easier for knowledgeable hackers to knock them down.

A democratized realm for AI is a noble vision but isn’t realistic until engineers figure out how to make AI’s internal controls far stronger than they are now.
