Collective Intelligence
Date Posted:
December 5, 2022
Date Recorded:
November 4, 2022
Speaker(s):
Thomas Malone, MIT Sloan, MIT Quest
Advances in the Quest to Understand Intelligence
Jim DiCarlo: So you've heard-- we were talking at the very low hardware level, building up through some neural circuits, trying to put systems together. And Robert nicely painted that picture from those low levels up to these higher-level systems, and how we might think about integrating that from a neuroscience perspective.
We're going to shift gears. Remember, this session is Looking Down, Looking Up. We're now going to look up. We're going to hear from Tom Malone, who's the Patrick J. McGovern Professor of Management at the MIT Sloan School of Management.
He's also the founding director of the MIT Center for Collective Intelligence. And so Tom focuses on how new organizations can be designed to improve our collective abilities-- so collective intelligence emerging from groups, rather than just what we do as individuals. With that, Tom.
THOMAS MALONE: OK. Thank you, Jim. So Jim made it sound like this is going to be at the top level and unrelated to the other things. But I think--
Jim DiCarlo: I'm sorry. I didn't mean to imply.
THOMAS MALONE: Well, what I'm going-- I'm not going to talk about it much, but I think you'll see in what I'm about to say a bunch of connections to other things we've heard about this morning and in the talks just now.
So I'm here to talk about our incubating initiative on our Mission on Collective Intelligence. Since this is an incubating Mission, we're still building out our team. But the two people pictured at the top of this slide, David Rand and Abdullah Almaatouq, were both involved in preparing this presentation and will play a key role in the Mission as we go forward. The other people shown here are either involved in one of the projects I'm about to tell you about, or have indicated their interest in being involved in the future.
Now, I suspect that most of you would probably think that the most intelligent entities on our planet are humans. And you'd think probably that these humans are smarter than plants, smarter than animals, in most ways smarter than computers. But what if I told you that there are intelligent entities on our planet that are more intelligent than people? They're all around us all the time. They are groups of people.
These groups of people are responsible for almost everything we humans have ever done, from inventing writing, to landing humans on the moon, to making the turkey sandwiches I have for lunch almost every day. So these groups of individuals do all of these things in interesting ways. The intelligence that arises from them involves multiple people working together, often over time and space. We call this form of intelligence "collective intelligence", which we define as groups of individuals acting together in ways that seem intelligent.
Now, by this definition, lots of different kinds of collective intelligence exist in the world, not just groups of people, but groups of bees and beehives, ant colonies, schools of fish, and maybe groups of neurons or brain regions.
Now, I want you to imagine a world in which we really understood in a deep scientific way how collective intelligence worked in all these different kinds of groups, how intelligent actions could arise from the combination of less intelligent entities, and how to design these groups for maximum effectiveness. That's the world we'd like to help create.
Now, one thing you need to do to build a science like that is to be able to measure collective intelligence. Here, we'll build on work we did some years ago on developing a collective intelligence test for groups that's analogous to IQ tests for individuals.
We'll also build on work we've done very recently on a test for human-computer groups that's analogous to the Turing test, except unlike the Turing test, which measures how close computers can come to human performance, this test measures how well humans and computers together can perform-- how much better they can perform relative to humans alone or computers alone. But to develop this science, we need more than just measurement. We need to develop and use different kinds of theories.
Now, fortunately, to do that, we don't have to start from scratch. We can build on lots of relevant knowledge from many different disciplines-- biology, economics, computer science, psychology, and on and on. But what do we need to do to integrate these different fields?
These different fields often have very different languages, very different approaches. To build an integrated theory here, I think we need to have a systematic map of the space of possible designs for collectively intelligent systems. To create such a design space, one thing we need is a way of characterizing the different kinds of tasks that collectively intelligent systems can do. And to illustrate that, here's a simplified example of a family tree for two different kinds of decision-making tasks, selecting among alternatives and specifying numbers.
Now, one value of family trees like this is that many of the properties of these tasks, like the processes needed to do them, or the functions for predicting their performance, many of those properties are inherited down these trees. For instance, the processes that are used in selecting among different alternatives are often very similar from the generic level at which those things occur to the specific examples below them on this tree. So in that way, these trees already suggest simple, testable theories about the tasks.
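As a minimal sketch of this inheritance idea in Python, here is one way a property like a performance-prediction function could be defined once at the generic level of the task family tree and inherited unchanged by the more specific task types below it. All class names and the prediction formula are illustrative assumptions, not from the talk:

```python
# Hypothetical sketch: a property defined at the generic level of the
# task family tree (here, a rough performance predictor) is inherited
# by the more specific task types below it.

class DecisionTask:
    """Generic level of the family tree of decision-making tasks."""
    def predict_performance(self, group_size: int) -> float:
        # Illustrative rough model: performance improves with group size
        return 1 - 1 / (group_size + 1)

class SelectAmongAlternatives(DecisionTask):
    pass  # inherits the generic predictor unchanged

class ChooseJobCandidate(SelectAmongAlternatives):
    pass  # a specific task, still using the generic-level property

task = ChooseJobCandidate()
print(round(task.predict_performance(4), 2))  # prints 0.8
```

The design choice mirrors the claim in the talk: a testable theory stated once at a generic node applies automatically to every task below it, unless a subclass overrides it.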
Here's another example of a highly simplified family tree that illustrates a different dimension for designing collectively intelligent systems. In this case, the dimension is what processes are the groups using to perform these decision-making tasks? Here, the family tree includes two forms of decision-making used often in human groups-- democracies and markets.
And now with these two dimensions, we can see another important use of this simple kind of theory, and that is to consider possible combinations of the two dimensions, and use that to come up with things that we can test scientifically or consider using in practice.
For instance, if you combine all the types of tasks and processes on my previous two slides, you get a matrix that looks like this. Each of the cells in this matrix suggests a possibility for combining a type of task with a type of process. Some of these combinations are obvious, like using a representative democracy, such as the US Electoral College, to choose US presidents.
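The matrix described above is just the cross product of the two dimensions. A tiny sketch, with hypothetical labels standing in for the examples on the slides:

```python
from itertools import product

# Hypothetical labels for the two design-space dimensions from the talk
tasks = ["select_among_alternatives", "specify_numbers"]
processes = ["direct_democracy", "representative_democracy", "prediction_market"]

# Each (task, process) pair is one cell of the design-space matrix
design_space = list(product(tasks, processes))
print(len(design_space))  # prints 6
```

Every cell is a candidate to test scientifically or to consider using in practice, which is why the space grows so quickly as more tasks and processes are added.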
Some of the possibilities aren't so obvious, like using a prediction market to decide who to hire by predicting the likelihood of success of the different candidates if they were hired. So one way to do this is what my colleague Abdullah Almaatouq has called systematic integrative experimental design, where we iterate between top-down development of theories and bottom-up testing of those theories.
For instance, in the simple example we've been talking about so far, you could do experiments to see how the different decision-making processes worked on different types of tasks. For example, if you did experiments in the two cells we just highlighted, you might get the hypothetical results shown here. The darker cells here represent better performance. So this summary table suggests that when you're choosing among job candidates, representative democracy works better than a prediction market.
And then if you also have rough models, even rough models, to predict the results in all the cells, you can use those models, along with techniques from what's called active learning, to efficiently choose which cells to test next out of what could in practice be a very large number of possibilities. And, of course, you can then modify your theories based on these new results, and select more cells to test next.
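One simple way to operationalize that active-learning step is to run the next experiment in the untested cell where the rough model is least certain. This is only a sketch of one common active-learning heuristic (uncertainty sampling); the cell names and all numbers are illustrative, not real results:

```python
# Hypothetical sketch of active learning over the design-space matrix:
# each untested cell has a rough model prediction and an uncertainty,
# and we test next wherever the model is least certain.

predictions = {
    # (task, process): (predicted_performance, model_uncertainty)
    ("select_among_alternatives", "prediction_market"): (0.55, 0.30),
    ("specify_numbers", "representative_democracy"): (0.60, 0.10),
    ("specify_numbers", "prediction_market"): (0.70, 0.25),
}

def next_cell_to_test(preds):
    """Return the untested cell with the highest model uncertainty."""
    return max(preds, key=lambda cell: preds[cell][1])

print(next_cell_to_test(predictions))
# prints ('select_among_alternatives', 'prediction_market')
```

After running the chosen experiment, its result updates the rough model, the uncertainties change, and the loop repeats, which is the iterate-then-retest cycle described in the talk.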
For instance, if purely hypothetically, it turned out that markets were better at specifying numbers, and democracies were better at selecting among alternatives, then you might get a result that looked something like that.
It's important to notice that this gives you not only a body of empirically tested theories, but a sharper indication of the range of validity for these theories. My colleague, Abdullah Almaatouq, is using this approach to explore the conditions under which groups of people perform better than their best individuals. And here are some of the illustrative hypothetical results that may come from this in-progress work.
Now, we're also using this growing body of knowledge to generate design ideas for collectively intelligent systems in practice. For instance, we've already done some preliminary work with GPT-3, a state-of-the-art AI system, to generate ideas for problems like how to create reward systems for employees, or how to improve primary education.
And, in general, we hope to use this approach to guide our theory development and experimentation in a number of areas, including how to identify misinformation online, and how to create superintelligent human-computer groups.
But I'd like to leave you thinking about what could happen if we and others around the world are successful at developing the kind of scientific and engineering knowledge I've just been describing. Maybe we could help companies combine people and computers to build higher-performing, more innovative teams.
Maybe we could help deal with climate change better, by identifying new institutional structures to help governments, businesses, and individuals work together more effectively. Maybe we could even help design new forms of democracy that were better adapted to today's world.
In short, maybe a new science of collective intelligence could not only help unlock more of the mysteries of how intelligence really works, but maybe the design and engineering principles based on this science could also help solve some of our most important human problems. Thank you.
[APPLAUSE]