Experimenting with a new artificial intelligence (AI), Microsoft engineers told it to figure out how to stack a book, a notebook computer, nine eggs, a nail, and a bottle into a stable arrangement.
The computer said it would set the book down first, arrange the eggs in three rows of three on the book, set the notebook computer (a very light one) on the eggs, and then place the nail and stand the bottle on top of the computer.
Because solving the problem requires a kind of intuition that, until now, only humans had demonstrated, the engineers thought they had created an “artificial general intelligence” (AGI).
AGI is an artificial intelligence that “can do anything a human brain can do,” The Wall Street Journal noted.
The paper the engineers published about their work lit up the AI world. Had Microsoft evolved AI to a human level, or had its researchers attributed more to their creation than it deserved?
If Microsoft is right, its researchers have created something that could shortly become like a human brain, but faster and smarter: capable of unimaginable discoveries and inventions, but also capable of speeding quickly and easily beyond humans’ ability to guide it.
Attributing human-like qualities to AI is like attributing an event to God’s hand: while some will believe, others will find more mundane explanations.
Claiming to have invented a human-like AGI can also be dangerous to a career. In 2022, Google fired an engineer who claimed one of the company’s AIs had become “sentient”: not only human-like in its processes but able to sense what was happening in the world around it.
However, some AI developers sense that the field is glimpsing “something that can’t be explained away,” the WSJ said: “a new AI system that is coming up with human-like ideas that weren’t programmed into it.”
Recent AI buzz has focused on “generative” AI, which is fed vast stores of text and data from Internet pages, Wikipedia, scientific journals, and other sources. It “learns the language,” then can write or speak text it creates itself.
Testing their new AI, Microsoft’s team asked it to write a proof that there are infinitely many prime numbers, and to write it in rhyme.
The proof was so masterful, mathematically and linguistically, that “I was like, what is going on?” Sebastien Bubeck, a lead Microsoft researcher, told the WSJ.
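For readers unfamiliar with the problem the team posed, the standard (unrhymed) argument is Euclid’s classic proof; the sketch below is that textbook version, not Microsoft’s AI-generated output:

```latex
% Euclid's proof that there are infinitely many primes
% (the classic textbook argument, not the AI's rhymed version)
\begin{proof}
Suppose, for contradiction, that there are only finitely many primes
$p_1, p_2, \ldots, p_n$. Let
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
Each $p_i$ divides the product $p_1 p_2 \cdots p_n$, so dividing $N$ by any
$p_i$ leaves remainder $1$; hence no $p_i$ divides $N$. But $N > 1$, so $N$
has at least one prime factor, which must therefore lie outside the list,
contradicting the assumption that the list contained every prime.
\end{proof}
```

The AI’s feat was producing an argument of this kind while simultaneously satisfying the rhyming constraint.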
As the work continued, the team documented the AI’s “deep and flexible understanding” of human-like mental capacities: creating a letter of support for an electron as a candidate for public office and reading it in the voice of Mahatma Gandhi; reviewing a fictional person’s vital statistics and calculating their odds of becoming diabetic; and creating a Socratic dialog about the dangers of the large language models that AIs use.
By combining and integrating knowledge as disparate as history, medicine, physics, and philosophy, the AI “was certainly able to do many, if not most” of the things asked of it, Bubeck said.
Many in the field aren’t persuaded.
Maarten Sap, a computer scientist at Carnegie Mellon University, called Microsoft’s claim “an example of some of these big companies co-opting the research paper format into P.R. pitches.”
“When we see a complicated system or machine, we anthropomorphize it,” Alison Gopnik, a psychologist in the AI research group at the University of California, Berkeley, told the WSJ. “Everybody does that.”
“Thinking about this as a constant comparison between AI and humans, like a game show competition, is just not the right way to think about it.”
TRENDPOST: Many of the complaints about Microsoft’s results center on their lack of scientific rigor. However, Microsoft acknowledged in its paper that the AI’s results were not always consistent and that the claims it was making were subjective and informal.
The lack of rigor doesn’t mean Microsoft hasn’t created the next step forward in AI. Watch this space.