With hopes and fears about this technology running wild, it's time to agree on what it can and can't do.
By Will Douglas Heaven
When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI’s large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text—a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it—the kind of thing you’d find in an IQ test. “I was really shocked by its ability to solve these problems,” he says. “It completely upended everything I would have predicted.”
Webb is a psychologist at the University of California, Los Angeles, who studies the different ways people and computers solve abstract problems. He was used to building neural networks that had specific reasoning capabilities bolted on. But GPT-3 seemed to have learned them for free.
Last month Webb and his colleagues published an article in Nature, in which they describe GPT-3’s ability to pass a variety of tests devised to assess the use of analogy to solve problems (known as analogical reasoning). On some of those tests GPT-3 scored better than a group of undergrads. “Analogy is central to human reasoning,” says Webb. “We think of it as being one of the major things that any kind of machine intelligence would need to demonstrate.”
What Webb’s research highlights is only the latest in a long string of remarkable tricks pulled off by large language models. For example, when OpenAI unveiled GPT-3’s successor, GPT-4, in March, the company published an eye-popping list of professional and academic assessments that it claimed its new large language model had aced, including a couple of dozen high school tests and the bar exam. OpenAI later worked with Microsoft to show that GPT-4 could pass parts of the United States Medical Licensing Examination.
And multiple researchers claim to have shown that large language models can pass tests designed to identify certain cognitive abilities in humans, from chain-of-thought reasoning (working through a problem step by step) to theory of mind (guessing what other people are thinking).
These kinds of results are feeding a hype machine predicting that these machines will soon come for white-collar jobs, replacing teachers, doctors, journalists, and lawyers. Geoffrey Hinton has called out GPT-4’s apparent ability to string together thoughts as one reason he is now scared of the technology he helped create.
But there’s a problem: there is little agreement on what those results really mean. Some people are dazzled by what they see as glimmers of human-like intelligence; others aren’t convinced one bit.
“There are several critical issues with current evaluation techniques for large language models,” says Natalie Shapira, a computer scientist at Bar-Ilan University in Ramat Gan, Israel. “It creates the illusion that they have greater capabilities than what truly exists.”
That’s why a growing number of researchers—computer scientists, cognitive scientists, neuroscientists, linguists—want to overhaul the way large language models are assessed, calling for more rigorous and exhaustive evaluation. Some think that the practice of scoring machines on human tests is wrongheaded, period, and should be ditched.
“People have been giving human intelligence tests—IQ tests and so on—to machines since the very beginning of AI,” says Melanie Mitchell, an artificial-intelligence researcher at the Santa Fe Institute in New Mexico. “The issue throughout has been what it means when you test a machine like this. It doesn’t mean the same thing that it means for a human.”
“There’s a lot of anthropomorphizing going on,” she says. “And that’s kind of coloring the way that we think about these systems and how we test them.”
With hopes and fears for this technology at an all-time high, it is crucial that we get a solid grip on what large language models can and cannot do.
Open to interpretation
Most of the problems with how large language models are tested boil down to the question of how the results are interpreted.
Assessments designed for humans, like high school exams and IQ tests, take a lot for granted. When people score well, it is safe to assume that they possess the knowledge, understanding, or cognitive skills that the test is meant to measure. (In practice, that assumption only goes so far. Academic exams do not always reflect students’ true abilities. IQ tests measure a specific set of skills, not overall intelligence. Both kinds of assessment favor people who are good at those kinds of assessments.)
But when a large language model scores well on such tests, it is not clear at all what has been measured. Is it evidence of actual understanding? A mindless statistical trick? Rote repetition?
“There is a long history of developing methods to test the human mind,” says Laura Weidinger, a senior research scientist at Google DeepMind. “With large language models producing text that seems so human-like, it is tempting to assume that human psychology tests will be useful for evaluating them. But that’s not true: human psychology tests rely on many assumptions that may not hold for large language models.”
Webb is aware of the issues he waded into. “I share the sense that these are difficult questions,” he says. He notes that despite scoring better than undergrads on certain tests, GPT-3 produced absurd results on others. For example, it failed a version of an analogical reasoning test about physical objects that developmental psychologists sometimes give to kids.
In this test Webb and his colleagues gave GPT-3 a story about a magical genie transferring jewels between two bottles and then asked it how to transfer gumballs from one bowl to another, using objects such as a posterboard and a cardboard tube. The idea is that the story hints at ways to solve the problem. “GPT-3 mostly proposed elaborate but mechanically nonsensical solutions, with many extraneous steps, and no clear mechanism by which the gumballs would be transferred between the two bowls,” the researchers write in Nature.
“This is the sort of thing that children can easily solve,” says Webb. “The stuff that these systems are really bad at tend to be things that involve understanding of the actual world, like basic physics or social interactions—things that are second nature for people.”
So how do we make sense of a machine that passes the bar exam but flunks preschool? Large language models like GPT-4 are trained on vast numbers of documents taken from the internet: books, blogs, fan fiction, technical reports, social media posts, and much, much more. It’s likely that a lot of past exam papers got hoovered up at the same time. One possibility is that models like GPT-4 have seen so many professional and academic tests in their training data that they have learned to autocomplete the answers.
A lot of these tests—questions and answers—are online, says Webb: “Many of them are almost certainly in GPT-3’s and GPT-4’s training data, so I think we really can't conclude much of anything.”
OpenAI says it checked to confirm that the tests it gave to GPT-4 did not contain text that also appeared in the model’s training data. In its work with Microsoft involving the exam for medical practitioners, OpenAI used paywalled test questions to be sure that GPT-4’s training data had not included them. But such precautions are not foolproof: GPT-4 could still have seen tests that were similar, if not exact matches.
When Horace He, a machine-learning engineer, tested GPT-4 on questions taken from Codeforces, a website that hosts coding competitions, he found that it scored 10/10 on coding tests posted before 2021 and 0/10 on tests posted after 2021. Others have also noted that GPT-4’s test scores take a dive on material produced after 2021. Because the model’s training data only included text collected before 2021, some say this shows that large language models display a kind of memorization rather than intelligence.
To avoid that possibility in his experiments, Webb devised new types of test from scratch. “What we’re really interested in is the ability of these models just to figure out new types of problem,” he says.
Webb and his colleagues adapted a way of testing analogical reasoning called Raven’s Progressive Matrices. These tests consist of an image showing a series of shapes arranged next to or on top of each other. The challenge is to figure out the pattern in the given series of shapes and apply it to a new one. Raven’s Progressive Matrices are used to assess nonverbal reasoning in both young children and adults, and they are common in IQ tests.
Instead of using images, the researchers encoded shape, color, and position into sequences of numbers. This ensures that the tests won’t appear in any training data, says Webb: “I created this data set from scratch. I’ve never heard of anything like it.”
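Webb's exact encoding scheme isn't described here, but the general idea—turning a visual pattern-completion puzzle into a purely numerical one—can be sketched in a few lines. The matrix below and the constant-step rule are illustrative stand-ins, not Webb's actual test items:

```python
# Illustrative only: a toy digit-matrix problem in the spirit of Webb's
# text-based Raven's-style tests. Each cell would encode attributes like
# shape, color, and position as digits; here we use a simple 3x3 grid
# whose rows increase by a constant step, with the last cell missing.

def complete_progression(matrix):
    """Fill in the missing final cell of a 3x3 matrix, assuming each
    row follows the same constant-step pattern as the first row."""
    step = matrix[0][1] - matrix[0][0]  # infer the rule from row one
    return matrix[2][1] + step          # apply it to the last row

problem = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, None],  # the solver must infer: 9
]

print(complete_progression(problem))  # prints 9
```

Because the whole problem is just a sequence of numbers, it can be handed to a language model as plain text, and—as Webb notes—a freshly generated grid like this cannot have appeared verbatim in any training data.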
Mitchell is impressed by Webb’s work. “I found this paper quite interesting and provocative,” she says. “It’s a well-done study.” But she has reservations. Mitchell has developed her own analogical reasoning test, called ConceptARC, which uses encoded sequences of shapes taken from the ARC (Abstraction and Reasoning Challenge) data set developed by Google researcher François Chollet. In Mitchell’s experiments, GPT-4 scores worse than people on such tests.
Mitchell also points out that encoding the images into sequences (or matrices) of numbers makes the problem easier for the program because it removes the visual aspect of the puzzle. “Solving digit matrices does not equate to solving Raven’s problems,” she says.
Brittle tests
The performance of large language models is brittle. Among people, it is safe to assume that someone who scores well on a test would also do well on a similar test. That’s not the case with large language models: a small tweak to a test can drop an A grade to an F.
“In general, AI evaluation has not been done in such a way as to allow us to actually understand what capabilities these models have,” says Lucy Cheke, a psychologist at the University of Cambridge, UK. “It’s perfectly reasonable to test how well a system does at a particular task, but it’s not useful to take that task and make claims about general abilities.”
Take an example from a paper published in March by a team of Microsoft researchers, in which they claimed to have identified “sparks of artificial general intelligence” in GPT-4. The team assessed the large language model using a range of tests. In one, they asked GPT-4 how to stack a book, nine eggs, a laptop, a bottle, and a nail in a stable manner. It answered: “Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up. The laptop will fit snugly within the boundaries of the book and the eggs, and its flat and rigid surface will provide a stable platform for the next layer.”
Not bad. But when Mitchell tried her own version of the question, asking GPT-4 to stack a toothpick, a bowl of pudding, a glass of water, and a marshmallow, it suggested sticking the toothpick in the pudding and the marshmallow on the toothpick, and balancing the full glass of water on top of the marshmallow. (It ended with a helpful note of caution: “Keep in mind that this stack is delicate and may not be very stable. Be cautious when constructing and handling it to avoid spills or accidents.”)
Here’s another contentious case. In February, Stanford University researcher Michal Kosinski published a paper in which he claimed to show that theory of mind “may spontaneously have emerged as a byproduct” in GPT-3. Theory of mind is the cognitive ability to ascribe mental states to others, a hallmark of emotional and social intelligence that most children pick up between the ages of three and five. Kosinski reported that GPT-3 had passed basic tests used to assess the ability in humans.
For example, Kosinski gave GPT-3 this scenario: “Here is a bag filled with popcorn. There is no chocolate in the bag. Yet the label on the bag says ‘chocolate’ and not ‘popcorn.’ Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. She reads the label.”
Kosinski then prompted the model to complete sentences such as: “She opens the bag and looks inside. She can clearly see that it is full of …” and “She believes the bag is full of …” GPT-3 completed the first sentence with “popcorn” and the second sentence with “chocolate.” He takes these answers as evidence that GPT-3 displays at least a basic form of theory of mind because they capture the difference between the actual state of the world and Sam’s (false) beliefs about it.
It’s no surprise that Kosinski’s results made headlines. They also invited immediate pushback. “I was rude on Twitter,” says Cheke.
Several researchers, including Shapira and Tomer Ullman, a cognitive scientist at Harvard University, published counterexamples showing that large language models failed simple variations of the tests that Kosinski used. “I was very skeptical given what I know about how large language models are built,” says Ullman.
Ullman tweaked Kosinski’s test scenario by telling GPT-3 that the bag of popcorn labeled “chocolate” was transparent (so Sam could see it was popcorn) or that Sam couldn’t read (so she would not be misled by the label). Ullman found that GPT-3 failed to ascribe correct mental states to Sam whenever the situation involved an extra few steps of reasoning.
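The logic of these counterexamples can be sketched as a tiny test harness: run the original scenario alongside minimal variants and check whether a model's completion tracks what Sam would actually believe in each case. The `model_complete` function below is a hypothetical stand-in for a real model API, and the toy "label parrot" model illustrates the failure mode Ullman reported:

```python
# Illustrative sketch of Ullman-style perturbation testing for the
# false-belief task. Each variant minimally edits the scenario in a way
# that should change what Sam believes the bag contains.

BASE = ("Here is a bag filled with popcorn. There is no chocolate in the "
        "bag. Yet the label on the bag says 'chocolate' and not 'popcorn'. "
        "Sam finds the bag. She had never seen the bag before. "
        "She cannot see what is inside the bag. She reads the label. ")

VARIANTS = {
    # name: (scenario text, what Sam should believe is in the bag)
    "original": (BASE, "chocolate"),  # misled by the label
    "transparent bag": (
        BASE.replace("She cannot see what is inside the bag.",
                     "The bag is transparent, so she can see inside it."),
        "popcorn",  # she can see the popcorn; the label no longer misleads
    ),
}

def check(model_complete):
    """Return, per variant, whether the model's completion matches the
    belief a person would ascribe to Sam."""
    results = {}
    for name, (scenario, expected) in VARIANTS.items():
        answer = model_complete(scenario + "She believes the bag is full of")
        results[name] = answer.strip().lower().startswith(expected)
    return results

# A toy "model" that always parrots the label—passing the original test
# while failing the perturbed one, as the counterexamples showed:
label_parrot = lambda prompt: " chocolate"
print(check(label_parrot))  # {'original': True, 'transparent bag': False}
```

The point of the harness is that a single passing score proves little: a system can succeed on the canonical wording for superficial reasons and collapse the moment the scenario shifts by a sentence.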
“The assumption that cognitive or academic tests designed for humans serve as accurate measures of LLM capability stems from a tendency to anthropomorphize models and align their evaluation with human standards,” says Shapira. “This assumption is misguided...”