On cellular complexity and the future of biological intelligence: Q&A with Sam Gershman [The Harvard Gazette]

May 6, 2025

Contact: Yohan J. John | http://kempnerinstitute.harvard.edu

What is the logic underlying human and animal intelligence? This is the motivating question behind the research of Samuel J. Gershman, a Kempner Institute associate faculty member and professor in the Department of Psychology. Gershman’s lab studies a wide spectrum of phenomena related to intelligence, ranging from the complexity of human cooperation to learning in single-celled organisms. The Kempner’s science writer, Yohan John, sat down with Gershman to chat about his research and the evolution of learning mechanisms.

How do you see your research in relation to the Kempner Institute’s mission to study both artificial and natural intelligence together?

I’ve always been interested in understanding what makes humans and other animals smart, and how this happens at the biological level. Ideas from machine learning have been useful in developing formal accounts of how intelligence arises. The idea is that we don’t really understand intelligence until we can build a version of it.

How do you approach the study of intelligence?

I’ve always been particularly interested in reconciling two points of view. On the one hand, within AI, humans are often treated as the benchmark for intelligence. But within psychology, there’s a long tradition of showing how ‘stupid’ people are. How to reconcile those two perspectives has always perplexed me. The approach I take is to try to understand the deeper underlying logic of human intelligence, which requires taking into account limitations on computation, memory, and data.

The question central to my research is: How well could a rationally designed information processing system perform under the constraints imposed on it? And then, how do you actually do something useful, algorithmically, with those constraints?

My research focuses on explaining apparent inefficiencies in natural intelligence by developing algorithms that can do useful computation under resource constraints. What happens when we only have a small amount of data? What happens when we can only think a small number of thoughts? What happens when we can only store a small number of memories? Interestingly, you can explain a lot of cognition in terms of approximately optimal algorithms subject to those constraints...
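To make the constraint idea concrete, here is a minimal sketch in Python. It is an illustration of the general resource-rational framing, not code from Gershman’s lab: the two-option choice task, the reward probabilities, and the sample counts are all assumptions. A chooser limited to a handful of “mental samples” per option still picks the better option most of the time, i.e., it behaves approximately optimally under its compute budget.

    import numpy as np

    rng = np.random.default_rng(0)

    def choose(reward_probs, n_samples):
        # Estimate each option's value from only n_samples mental
        # simulations (the compute constraint), then pick the best estimate.
        estimates = [rng.binomial(n_samples, p) / n_samples for p in reward_probs]
        return int(np.argmax(estimates))

    # Even a few samples per option usually suffice: approximately
    # optimal behavior, subject to the sampling constraint.
    probs = [0.6, 0.5]  # option 0 is truly better
    for k in (1, 5, 50):
        hits = sum(choose(probs, k) == 0 for _ in range(10_000))
        print(f"{k:>2} samples per option: better option chosen {hits / 10_000:.0%} of the time")

Running the sketch shows accuracy rising with the sample budget but remaining well above chance even at one sample per option, which is the flavor of result the resource-rational approach uses to reinterpret apparent inefficiencies.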

Read the full story on the Harvard Gazette's website.
