GPT-3 HAS THE REASONING ABILITY OF A COLLEGE STUDENT, TEST SHOWS

On standard tests of logical reasoning, including the SAT, OpenAI’s GPT-3 language model performed about as well as a typical college student, according to psychologists at the University of California, Los Angeles.

That raised a question for them: is the chatbot mimicking human reasoning simply by ransacking all of the data it’s been trained on, or is it using a fundamentally new cognitive process?

Humans solve problems they haven’t encountered before through a process known as analogical reasoning: this problem I’ve never seen is like that other problem I know how to fix, so I’ll try the same method here.

That kind of reasoning has been thought to be uniquely human. Maybe it isn’t anymore.

The researchers gave GPT-3 a test based on a set of problems called Raven’s Progressive Matrices, which requires subjects to infer the image that completes a complex arrangement of shapes. The shapes were converted to text so the chatbot could “see” them, a step that guaranteed the bot had never encountered this version of the test.
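To make that conversion concrete, here is one plausible sketch, in Python, of how a matrix-reasoning problem could be rendered as plain text for a language model. The encoding, the digits, and the prompt wording below are illustrative assumptions, not the UCLA team’s actual materials.

    # Hypothetical sketch: rendering a matrix-reasoning problem as text.
    # The cell encoding and prompt wording are illustrative assumptions.

    def encode_matrix(rows):
        """Render a matrix of digit-filled cells as text, one row per line."""
        return "\n".join(
            " ".join("[" + " ".join(str(d) for d in cell) + "]" for cell in row)
            for row in rows
        )

    # Each cell holds digits; the rows follow a simple progression, and
    # the final cell is omitted so the model must predict it ([9]).
    problem = [
        [[1], [2], [3]],
        [[4], [5], [6]],
        [[7], [8]],
    ]

    prompt = "Complete the pattern:\n" + encode_matrix(problem) + " ["
    print(prompt)
    # Complete the pattern:
    # [1] [2] [3]
    # [4] [5] [6]
    # [7] [8] [

Because the pattern arrives as nothing but text, any success at completing it cannot come from having memorized the original picture-based test.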

The chatbot got about 80 percent of the problems correct—well above the average of just under 60 percent that human college students scored, but within the range of the highest-scoring people.

GPT-3 also bested the average human score on a test of word analogies, such as “big is to small as tall is to what?”
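For a sense of how such a four-term analogy can be posed to a GPT-3-class model, here is a minimal Python sketch using the legacy openai client (v0.x). The model name, prompt wording, and single-word-answer format are assumptions for illustration, not details taken from the study.

    # Minimal sketch: posing a verbal analogy to a GPT-3-class completion
    # model. Assumes the legacy openai Python client (v0.x) and an API key
    # in the OPENAI_API_KEY environment variable; the prompt format is an
    # illustrative guess, not the researchers' actual materials.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    prompt = "Complete the analogy with a single word.\nbig : small :: tall :"

    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3-era model, chosen as an assumption
        prompt=prompt,
        max_tokens=3,
        temperature=0,  # greedy decoding for a deterministic answer
    )

    print(response.choices[0].text.strip())  # expected completion: "short"

Setting temperature to 0 makes the model return its single most likely completion, which is the natural choice when checking answers against a fixed key.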

The researchers emphasized that the bot is far from perfect: in some instances, it provided spectacularly wrong answers.

It is also a bust at problems involving physical space. For example, when presented with a cardboard tube, tape, and scissors and asked how to use the items to move gumballs from one bowl to another, the bot was stumped.

Still, “language learning models are just trying to do word prediction so we’re surprised they can do reasoning,” one of the researchers told Science magazine. 

The psychologists hope to determine whether generative AI is beginning to develop a human-like thought process or is processing information in an entirely new way.

To answer that question, they would need to open up GPT-3’s code and see how it works.

“GPT-3 might be kind of thinking like a human,” another researcher said to Science, “but on the other hand, people did not learn by ingesting the entire Internet, so the training method is completely different. We’d like to know if it’s really doing it the way people do, or if it’s something brand new — a real artificial intelligence — which would be amazing in its own right.”

TRENDPOST: Not long ago, engineers at IBM were shocked by the “mental” powers of an AI they were testing.

It seems unlikely that any existing AI has taught itself to actually think. However, until engineers and psychologists collaborate to dissect and trace how AIs process information, the question won’t be definitively answered.
