Language Mission
Date Posted:
December 2, 2022
Date Recorded:
November 4, 2022
Speaker(s):
Ev Fedorenko, Jacob Andreas
Advances in the Quest to Understand Intelligence
Description:
Ev Fedorenko - MIT BCS, MIT Quest
Jacob Andreas - MIT CSAIL, MIT Quest
Jim DiCarlo: Welcome back. Next, we're going to hear from the Language Mission, which is a newly launched Mission that grew out of a seed-funded project. And you're going to hear from Professor Ev Fedorenko and Professor Jacob Andreas. They're going to tell us about their recent research into the workings of human language, computational models of that core ability, and its connection to human thought.
Ev is traveling today, but Jacob will present. And Ev has generously recorded her presentation for us. So I'll turn it over to you guys here.
EV FEDORENKO: Thanks very much, Jim, for the introduction. It's my pleasure to tell you about our Mission.
So I'll start by saying that language is an incredibly powerful capacity. I can use language to tell you how to solve a differential equation or why giraffes have long necks or what I think about my neighbors. And this ability of language to reflect the inner contents of our minds led Alan Turing to propose his famous test, whereby, through linguistic interaction with another agent, you can probe their knowledge and reasoning capacities to try to figure out whether it's another conspecific with a brain just like ours or, in fact, a machine.
The fact that language can and often does reflect thoughts has led many over the years to conflate language and thought. In a recent position piece, we referred to this as the "good at language means good at thought" fallacy. And the rise of large language models like GPT-2 and BERT has really brought this fallacy to the forefront.
These models can produce text that is often indistinguishable from that produced by humans. And that has led some to think that these models have abilities that go way beyond language. Let me illustrate some of the linguistic successes of these models.
So consider the following example. This is a prompt, a three-sentence sequence that we gave to GPT-3, a relatively recent language model. And it read as follows. "State-of-the-art language models now show proficiency on a number of tasks traditionally thought to require explicit symbolic representation of sophisticated, hierarchical linguistic structure. Below, we will use GPT-3-- a state-of-the-art model in 2021-- as an example system. GPT-3 can produce text that obeys most of the standardly accepted grammatical rules of English."
And then we had the model continue. And it said the following. "It can do so even though it is trained purely on the statistics of the English language as it is actually used, and with no knowledge of syntax, semantics, or even writing." Now these models have been around for a few years and so this may seem kind of business as usual to you. But I want to highlight that this is actually really impressive.
These models not only use words in appropriate ways, given the preceding context, but they use pronouns correctly. They put the words in just the right order. They can use ellipsis, as in "do so." They use complex constructions like the passive voice. The model also maintains coherence with the preceding text and uses the right connectives between clauses to express logical relationships between them, and so on.
Only a few years ago, many linguists argued that some of these aspects of language cannot be learned from just being exposed to the regularities of language, leading them to propose that some aspects of language may be innate. Aside from their ability to generate coherent texts, when probed with more careful tests, these models have been shown to exhibit some of the core properties of language, including hierarchy and abstraction. So for example, in a sentence like, "the keys to the old wooden kitchen cabinet are on the table," the verb has to agree with the noun-- the subject-- the keys, even though they are not locally adjacent.
And in fact, the noun that is local to the verb is singular and so should take a singular verb. The only way a model can solve this is by representing the structure of the sentence. So that's hierarchy for you. And some of these abilities have been shown to generalize to sentences with totally novel words, effectively non-words, suggesting that some of the patterns these models learn generalize beyond the particular sequences of words they're exposed to.
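To make the kind of agreement probe described above concrete, here is a minimal sketch, assuming an off-the-shelf GPT-2 model from the Hugging Face transformers library rather than the exact models and protocols of the studies discussed here; the sentence pair and the simple sum-of-log-probabilities scoring are illustrative only.

```python
# Minimal, illustrative agreement probe (not the exact published protocol):
# check whether the model prefers the plural verb after a long-distance
# plural subject ("The keys to the ... cabinet").
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(sentence: str) -> float:
    """Sum of log-probabilities the model assigns to each token in the sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # The logits at position t predict token t+1.
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
    token_logprobs = logprobs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_logprobs.sum().item()

grammatical = "The keys to the old wooden kitchen cabinet are on the table."
ungrammatical = "The keys to the old wooden kitchen cabinet is on the table."
print(sentence_logprob(grammatical) > sentence_logprob(ungrammatical))  # expect True
```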
Moreover, we and a few other groups have recently found that the representations that we can extract from these models can be used to capture something about human language processing. So we took over 40 language models and tried to align them with neural and behavioral responses to language in humans-- data we have previously collected.
So we basically fed the same stimuli, the same sentences to models for which we had responses from humans. And then we used a regression to relate the model unit weights to human neural and behavioral responses. And then we tested how well the model could predict brain responses to new stimuli. And it worked remarkably well.
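As a rough illustration of that regression-based alignment step, here is a minimal sketch that uses made-up arrays in place of the actual model activations and neural/behavioral data, and a simple cross-validated ridge regression in place of the specific analysis pipeline from those studies.

```python
# Illustrative encoding-model analysis with synthetic data shapes:
# predict per-sentence brain responses from per-sentence model activations.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

n_sentences, n_model_units, n_voxels = 200, 768, 50
rng = np.random.default_rng(0)
X = rng.standard_normal((n_sentences, n_model_units))  # model activations (placeholder)
Y = rng.standard_normal((n_sentences, n_voxels))       # brain responses (placeholder)

fold_scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    reg = Ridge(alpha=1.0).fit(X[train_idx], Y[train_idx])
    Y_pred = reg.predict(X[test_idx])
    # Correlate predicted and observed responses per voxel on held-out sentences.
    fold_scores.append(np.mean([pearsonr(Y_pred[:, v], Y[test_idx, v])[0]
                                for v in range(n_voxels)]))
print("mean cross-validated brain predictivity:", np.mean(fold_scores))
```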
Here's what we found. So on the y-axis, I will plot the degree to which model representations are similar to human neural responses. The bars are shown in different colors representing the different models. And higher values mean better alignment with the human brain.
This is how the 43 models performed. As you can see, models vary in how well they align with humans. But impressively, some models in the class of unidirectional-attention transformers, like the GPT-2 class of models shown on the right, already hit the estimated ceiling determined by how noisy the data are. So not only are these models good at language, in that they can generate beautifully well-formed sentences.
There is also apparently something similar in how they process language so that they can capture human responses to the same stimuli. And we think that the similarity between models and human brains with respect to language may have to do with optimizing for the ability to predict upcoming linguistic input-- so figuring out what comes next. This is, of course, how these models are trained. And we think that this is a core aspect of human sentence processing as well.
As a result of these and other impressive feats, claims have emerged that language models represent not only a major advance in language processing but, more broadly, in Artificial General Intelligence or AGI-- a step toward thinking machines. And here are just some quotes from recent papers to this effect. And as some of you probably heard, some have even suggested that these models may have become sentient. But we think these claims are premature.
Not that these models will never get there, but they're not quite there yet. In spite of their linguistic successes, these models still struggle with quite a few things. For example, they seem to lack a model of the world. They do not possess stable knowledge about objects, properties, and events that can happen-- much like the kind of knowledge Leslie Kaelbling and Nancy Kanwisher talked to you about.
They also seem to not be very good at formal reasoning. So if we present to them a problem, a math problem, in the form of a sentence, they will often give you the wrong answer. They can't reason logically.
And last but not least, they're not socially aware. They have no communicative intent. And of course, they do not model mental states of their communication partners.
So in spite of their truly impressive mastery of linguistic patterns, current language models' ability to think remains limited. And we believe that all of these overblown claims about language models being a step towards thinking machines result from the tendency I mentioned to conflate language and thought-- to infer an ability to think from the ability to use language fluently. So I keep making this distinction. But does the distinction even make sense? Are language and thought actually separable?
To ask this question, let us turn to one system that we know implements general intelligence: the human brain. So how are all these different capacities organized in our brains? Well, we have learned in the last couple of decades or so that the mechanisms that humans have for processing language are sharply distinct from various other cognitive mechanisms, including those that store our abstract knowledge about the world, support formal reasoning, and build representations of others' minds.
There are two kinds of evidence we have. First, using tools like functional MRI, we can find regions in the brain that support language-- shown here schematically in red. And we can ask, do these brain areas that work really hard when we produce and understand language, do they respond when we engage in different kinds of thought, when we solve math problems, when we reason about the world, and so on? And it turns out, the answer is no.
So language brain areas respond during language, but not during many different non-linguistic tasks we have tested before, including math, solving logical puzzles, making decisions, thinking about other people's emotional or mental states, and so on. Instead, all of these abilities engage distinct brain mechanisms.
Another approach is to examine thought in individuals who don't have a functioning language system. Some individuals sustain severe damage that effectively wipes out the entire language system, including its frontal and temporal components. So we can then ask, these individuals who have lost language, can they still do math? Can they reason about logical connections between events? Can they infer something about other people's thoughts?
And the answer here is yes. In spite of these severe linguistic difficulties, these individuals seem to be perfectly fine in all sorts of thinking. So the brain imaging and the evidence from patients with severe damage to the language system seem to converge to suggest that language and thought are, in fact, distinct in the human brain.
And so now I'm going to pass the mic to Jacob who will tell you about how we might use some of these insights from the human brain to build better models of language and thought and then, subsequently, use those models to gain novel insights into how general intelligence might be implemented in biological and artificial systems.
JACOB ANDREAS: Thanks to Ev-- or I guess the virtual avatar of Ev here. So what I want to talk about for the rest of this session about the Language Mission at the Quest is how we get from this kind of starting point of scientific understanding to the rest of the research that we're going to do under this project. And to kind of set things up, here's a schematic representation of the story that Ev just told.
We know that in humans there's hardware that's specialized for doing language processing and really just language processing. But using language in the real world and producing the kinds of sentences that we actually encounter in practice also requires thinking, which means doing things like reasoning about the environment, reasoning about our plans and goals, about the mental states of our interlocutors, and various other kinds of things that live outside of language. All right, and so when we encounter language day to day that's being produced by other human beings, what we're seeing are thoughts that originated in more general cognitive processes and have been translated into language by specialized hardware.
And as I was saying just now, the computational language models that we have right now, which you can think of as basically fancy autocomplete models that have been trained on, more or less, all the text on the internet, work quite differently. In particular, they use a single artificial neural network to do both the thinking and the talking, or the thinking and the writing. And they have to learn how to do this from text alone. And maybe unsurprisingly, they're not very good at this.
And like we saw before, they're great at producing fluent text but less good at producing text that's true or even text that's globally coherent. And I really want to emphasize that the ultimate goal of this whole Quest for Intelligence is to build general purpose agents that can do both the thinking and the talking. And so if we're going to get this right and if we're going to get the language part of this right, then we really need to figure out how to build language models that are not just models of language but that can interface with models that are properly models of the world.
And there has been a little work within the natural language processing community in the last couple of years showing that you can take these big language models and offload specific, non-linguistic tasks-- like querying databases or knowledge bases, or doing things like physics simulation-- to specialized computational processes. And as a result, you wind up with language models that do a better job of actually doing their own job, which is the job of understanding language.
But right now, when we build models that work in this way, we really have to start from scratch in every problem domain we care about. It's a huge amount of work. And it doesn't get us to systems that exhibit human-like generality in their ability to generate and understand language and actually use it to accomplish other kinds of concrete goals in the real world.
And so at the very highest level, what we're trying to do in this Mission, which I guess Ev and I are maybe going to unilaterally re-brand the Language and Thought Mission rather than the Language Mission, is to really figure out what this interface needs to look like in computational models. And in particular, to take what we know about what it looks like in human cognition and human brains and use it to build better computational models for language understanding and generation. And, as Ev was saying at the very end of her talk, to then use these computational models as tools for gaining even deeper scientific insight into language use itself.
In other words, what we want to do here is build language models that, in some sense, are just models of language but that know how to talk to the models that the other teams in the Quest are going to be developing-- the models that know how to do the thinking. And this is going to involve a bunch of major challenges. On the computational side, there's a lot of core work to do just in designing machine learning models and new learning algorithms that can acquire and represent language in human-like ways.
And we need to figure out how to build, at the implementation level, interfaces between these models and models of the kind that Leslie, Nancy, and Mehrdad were talking to you about earlier this morning. And of course, we need to answer the analogous scientific questions and really, in a much deeper way than we understand right now, figure out how this interface works in the brain: how our world knowledge and our reasoning skills shape the kinds of language that we produce, and in turn, how the things that we learn about the world from language-- not from direct experience, but by reading about things in books, by being told things by other people-- affect the other kinds of representations that we build of the world and the other kinds of skills that we possess, like doing formal reasoning.
This is all very abstract. And so what I really wanted to spend most of my chunk of this talk doing is outline some of the research that we've started to do in this direction, or in this family of directions: on understanding the relationship between language processing-- the language part of this model-- in humans and machines, and on understanding the relationship between language and thought more generally, kind of across the board.
So I want to start with the top of this diagram here. And you'll recall from the beginning of the talk that Ev was just saying a minute ago that the neuroscience community and the neuroscience of language community has gotten really excited about these big computational language models like GPT-3 because they appear to represent language, or at least represent pieces of language, in a way that looks a bit like representations of language in the brain. And the better these models get, the better they become at accomplishing the task that we train them to do of predicting next words, the better aligned they are with brain activity as well.
And so one of our major research thrusts in this Language and Thought Mission has been to really try to dig in and understand why this is happening, what it is that we're actually measuring when we measure alignment between language model representations and brain representations, and what it is about the language model training process that drives this alignment. For the first of these questions-- and this is work by grad students at BCS, Corinne and Greta, who I think are in the room somewhere, and you should ask them about this-- if we have artificial neural networks that can process sentences, we can compare the representations that those networks build to patterns of neural activation that we get from humans using imaging technologies like fMRI. And we can measure how well aligned these various things are.
And if we get a good fit, this tells us that there's some kind of similarity between the ways these two systems are representing sentences. But it doesn't tell us what aspects of those representations actually drive the similarity. Do we see this alignment because both our fMRI recordings and our language model representations are really capturing fine-grained semantic content of our sentences? Or do we see it because both of these things are encoding the fact that the sentences are six words long and nothing else?
And so what we are finding as we start to do this is that it's not quite that bad. The current language-model-to-brain alignments really are driven by the coarse semantic content of sentences-- by something about meaning-- but not necessarily by their syntax or their really detailed propositional content. So what is driving these results is something closer to meaning than to low-level acoustics, but also not all the way there.
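One way to get a feel for how such questions can be asked-- a hedged sketch with synthetic data, not the specific analyses Corinne and Greta ran-- is to regress a trivial surface property like sentence length out of both the model representations and the brain responses and then check how much of the alignment survives.

```python
# Illustrative control analysis with synthetic data: does brain predictivity
# survive once a surface feature (sentence length) is regressed out?
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(1)
n_sentences = 200
lengths = rng.integers(4, 25, size=n_sentences).reshape(-1, 1).astype(float)
X = rng.standard_normal((n_sentences, 768)) + 0.1 * lengths  # model features (placeholder)
Y = rng.standard_normal((n_sentences, 50)) + 0.1 * lengths   # brain responses (placeholder)

def residualize(A, confound):
    """Remove the part of A that is linearly predictable from the confound."""
    return A - LinearRegression().fit(confound, A).predict(confound)

def predictivity(X, Y):
    """Train on the first half of sentences, report mean voxel-wise correlation on the rest."""
    half = len(X) // 2
    pred = Ridge(alpha=1.0).fit(X[:half], Y[:half]).predict(X[half:])
    return np.mean([np.corrcoef(pred[:, v], Y[half:, v])[0, 1] for v in range(Y.shape[1])])

print("raw alignment:      ", predictivity(X, Y))
print("length-controlled:  ", predictivity(residualize(X, lengths), residualize(Y, lengths)))
```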
And what I think is really generally exciting about these kinds of results is the fact that we can use these computational models to do a bunch of experimental manipulations that would be very costly, or time consuming, or difficult to do in humans. And so we can use these computational language models as efficient tools for probing the information content of the existing brain recordings that we already have, and thereby really understand what it is that we're actually recording.
And importantly, we're not restricted to just asking questions about similarities between off-the-shelf language models and human learners after these things have already been trained. We can actually use these computational models to probe the learning process itself. And so this is work by Chung Shu, who is somewhere there. Hi.
So all of these computational language models that we've been talking about are trained, as Josh Tenenbaum said before, on enormous amounts of language data-- billions and billions of tokens from a bunch of internet sources that include a lot of formal and technical language, like Wikipedia or the kind of stuff that you find in the New York Times. Needless to say, this is not the kind of data that humans use to learn language. Instead, we acquire our language abilities from orders of magnitude less data-- mostly, and in some cultures entirely, in spoken form rather than written form.
Some of it is adults talking to other adults that we overhear. Some of it is adults talking directly to us as child language learners. And maybe a little bit is in the form of written text.
But as a result, this data has very different statistical properties from the kinds of data that our current computational models use. And so there's a deep scientific question about how the nature of this training data influences the ways in which language itself is learned.
And one way you can think about this is: suppose we could train a language model on a more human-like distribution-- for example, one with simpler sentences, or one with more explicit teaching about the content of language itself. Would this actually allow us to train models more efficiently, in more human-like ways, with 1,000 times less data than we're using right now, or in a way that produced more human-like representations? This project is still in very, very early stages, but we're already starting to see interesting differences in the trajectory of language learning in these computational models depending on what kind of text you train them on.
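As a toy sketch of what such a comparison could look like-- the corpora, model size, and training loop below are placeholders, not the project's actual setup-- one can train the same tiny transformer from scratch on a child-directed-style corpus versus a web-style corpus and compare how the training loss evolves.

```python
# Toy sketch: train the same tiny GPT-2-style model on two small corpora and
# compare training losses. Both corpora are stand-ins for child-directed vs.
# web-style text; real experiments use much larger, curated datasets.
import torch
from transformers import GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

child_directed = ["Look at the doggy!", "Do you want more juice?"] * 16
web_style = ["The committee's ruling was appealed on procedural grounds."] * 32

def train_tiny_lm(corpus, steps=20):
    config = GPT2Config(n_layer=2, n_head=2, n_embd=128)  # deliberately tiny model
    model = GPT2LMHeadModel(config).train()
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
    batch = tokenizer(corpus, return_tensors="pt", padding=True)
    for _ in range(steps):
        out = model(input_ids=batch.input_ids,
                    attention_mask=batch.attention_mask,
                    labels=batch.input_ids)  # next-word prediction objective
        out.loss.backward()
        opt.step()
        opt.zero_grad()
    return out.loss.item()

print("child-directed-style final loss:", train_tiny_lm(child_directed))
print("web-style final loss:           ", train_tiny_lm(web_style))
```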
So to zoom out again, if we really want to succeed at these previous tasks, at building computational models that align with human language learning trajectories, that build human-like representations of language, we're also going to need these computational models to be able to think or to talk to other kinds of models that can think. And so the next two projects that I'm going to talk about are aimed at exactly that, representing two very different philosophies about how we might go about building language models that think or that can talk to other models that think, and in particular, that explicitly reason about the world that language describes.
So what does it mean to really understand a sentence like, "there is at least one red mug in this picture"? Well, one answer is that if you know what this means, you should be able to recognize pictures containing red mugs. And you should maybe even be able to generate, or to imagine, hypothetical images in which this sentence is true. And both of these things-- recognizing real images or generating possible images like this-- require you to build explicit models of the state of affairs described by the sentence.
And so one way you might build such a model-- in work led by my students Gabe Grand and Cathy Wong, who also works with Josh-- is to draw on long traditions in both linguistic formal semantics and cognitive science and implement these world models as symbolic computer programs. And so what we're doing here is developing new language models that work by mapping from sentences to programs. Here, we're using these programs to do visual imagination of the scenes described by the text.
And here, we can draw on a huge body of work, which Vikash Mansinghka is going to talk about a lot more later today, on building specific programming languages and inference engines that let us do probabilistic reasoning from these symbolic representations. And I want to emphasize what's new here, in contrast to a lot of what's gone on in NLP on the language understanding side: most of this world model is being written not by our graduate students, but by the language model, or by some computational process itself.
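Here is a minimal sketch of the "language understanding as writing code, inference as running code" idea, using a hand-written scene sampler and condition instead of the project's actual program synthesis and probabilistic programming machinery; in the real pipeline, the program that encodes the sentence's meaning is produced by a language model, not by hand.

```python
# Minimal sketch of "language -> program -> inference" for the sentence
# "There is at least one red mug in this picture."
# The condition below stands in for a program a language model would generate.
import random

def sample_scene():
    """Toy generative world model: a scene is a small set of colored objects."""
    colors = ["red", "blue", "green"]
    kinds = ["mug", "plate", "bowl"]
    return [{"kind": random.choice(kinds), "color": random.choice(colors)}
            for _ in range(random.randint(1, 5))]

def at_least_one_red_mug(scene):
    """The 'meaning' of the sentence, as an executable condition on scenes."""
    return any(obj["kind"] == "mug" and obj["color"] == "red" for obj in scene)

# Inference = running the program: rejection-sample imagined scenes
# that are consistent with the sentence.
consistent = [s for s in (sample_scene() for _ in range(10000))
              if at_least_one_red_mug(s)]
print(f"{len(consistent)} imagined scenes satisfy the sentence; one example:")
print(consistent[0])
```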
And so what we wind up with at the end of the day is a model in which there is a clear separation between language understanding, which you can think of as writing the code, and inference, which you can think of as running the code, but where all of this is being learned. We also don't have to go through explicit symbolic representations. Another route that looks really promising here, led by my student Belinda Li, is to have this simulation process itself be learned-- so, not to write code, but to have a separate neural model, distinct from the language model, that's actually modeling transitions between world states.
And so in this picture, what it looks like to understand language is to map from sentences to these kinds of learned representations of states of the world and use neural network-based simulation engines to do inference over those representations. And we're starting to see already that even without any explicit symbolic representations in the loop-- even just learning these things end to end-- you can get substantial accuracy gains over standard language modeling approaches.
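A minimal sketch of that fully learned alternative-- with made-up dimensions and an untrained network, just to show the shape of the architecture-- is a neural transition function that takes the current latent world state and an encoded sentence and predicts the next latent state.

```python
# Minimal sketch of a learned world-state simulator: each sentence is treated
# as an update to a latent state vector. Dimensions, the sentence encoder, and
# training are placeholders; real models are trained on text paired with states.
import torch
import torch.nn as nn

STATE_DIM, SENT_DIM = 64, 128

class TransitionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.update = nn.Sequential(
            nn.Linear(STATE_DIM + SENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, STATE_DIM),
        )

    def forward(self, state, sentence_embedding):
        """Predict the next latent world state after 'reading' one sentence."""
        return self.update(torch.cat([state, sentence_embedding], dim=-1))

model = TransitionModel()
state = torch.zeros(1, STATE_DIM)             # initial latent world state
for sent_emb in torch.randn(3, 1, SENT_DIM):  # stand-ins for three encoded sentences
    state = model(state, sent_emb)
print("latent state shape after three sentences:", tuple(state.shape))
```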
OK, I'm running out of time here. What about all the other relationships on this diagram? This one on the left turns out to be tricky, right? While we've done a good job of localizing where language understanding happens in the brain, it's still a major open question what the computation at the interface actually looks like.
And so as this project moves forward, and in particular as we make progress on the right side of this diagram, our expectation is that we're going to get computational models that are specific enough to make predictions-- at the level of behavior or even at the level of representations-- about what the corresponding implementation might look like in the brain. And so what we'll have are useful engineering artifacts for doing language understanding, but also tools for scientific hypothesis generation.
And as we've been saying this whole talk, the last arrow on this diagram is the one that you're going to be hearing about for the rest of the day because it corresponds, in some sense, to all the other work that's going on in the Quest. Models of embodied intelligence, like you heard about today, are really going to be necessary for figuring out how to generate and understand language that talks about movement through space and that does things like generate navigational instructions. Models of collective intelligence, like Tom Malone is going to be talking about later, will help us understand language about mental states and social cognition. And pretty much every other piece of the Quest fits into the story in some way.
I'm going to talk very briefly about evaluation because we're running low on time. Unlike in a lot of these other problem domains, the natural language processing community has no shortage of established benchmarks.
If I have a new language model, I can go online, and there are a million data sets that I can download that will give me some score for how good my new language model is. These range from very simple sentence completion tasks that test knowledge of syntax and semantics, like Ev was talking about before, through things like factual trivia, all the way up to these kinds of world modeling tasks that models are still quite bad at.
But the existing benchmarks are missing a couple of things. First, and maybe most importantly, what they're missing is information about humans, either in the form of ground-truth information about human responses or maybe even information about human brain activity, if we really care about representational similarity. The second thing is that a lot of the existing language evaluation benchmarks are not really hypothesis-driven. They're mostly aimed at understanding just how good models are at some specific engineering task, but not at answering questions about how these models work or how it is that they make specific predictions.
And so a big part of this project-- led by our postdoc, Anna, who-- I don't know if she's here-- and building on work that was started by our summer visiting researcher, Nafisa, is to build new data sets that really have this kind of ground-truth information and that are hypothesis-driven, to let us answer the questions that we were looking at. So not a clear self-contained task like "Get the Grape." What we're really aiming for is broad coverage in the space of axes of language understanding.
There's some cool new engineering tooling that we're going to hear about in the next talk, so I won't talk about it now. To wrap up here, the ultimate goal of this Mission is-- simply stated-- to get to human-level performance on these kinds of human-like benchmarks, using models that build human-like representations of language that they've learned in human-like ways from human-like and human-scale data. And we think we'll be able to do this by really understanding not just language, but how language interacts with the rest of thought.
And if we can do this, it's going to pay off in a bunch of different ways, right? First, these new language models will themselves be incredibly useful as engineering artifacts: we won't have the frustrating Siri experiences that Jim was talking about at the beginning, but will really have the building blocks for much more capable personal digital assistants, search engines, information retrieval systems-- basically any piece of software you want to build where language is an interface. We anticipate specifically that these better language models will be useful in assistive settings, everywhere from building household robots that can follow instructions to do things for you, to better brain-machine interfaces that allow you to write text directly using your mind.
And finally, we expect numerous scientific and medical applications to come out of this work. Good computational models of language are ultimately good computational models of how we process language-- or at least that's the working hypothesis here. And as these become mature, we can use them as in silico models of the brain in a way that may help us diagnose diseases, treat language processing disorders, and things like that.
And I will wrap up there. So thank you very much. And I'll take questions later.
[APPLAUSE]