Learning Hub - Museum of Science
Museum of Science, Boston
Learn about the work of the Center for Brains, Minds, and Machines through guest researcher talks on the Museum's Current Science and Technology stage, and annual events that feature hands-on demonstrations created by the CBMM community.
Harvard Professor Gabriel Kreiman joins MOS host Meg Rosenburg to answer questions about the history of machines playing chess: why it is challenging to build an AI system that plays chess, what factors enabled machines to outperform humans, how AI systems differ from human chess players, and what we can learn from the human cognitive ability to play games like chess.
MIT Professor Rebecca Saxe and graduate students Heather Kosakowski and Brandon Davis talk with MOS host Meg Rosenburg and answer questions about the human brain and mind: how the brain grows from infancy and enables us to learn to recognize people and their emotions, make decisions, and learn language; and how scientists study the brain and mind.
Graduate students Kelsey Allen and Catherine Wong, and researcher Evan Shelhamer, talk with MOS host Megan Litwhiler about how understanding human intelligence can help scientists create intelligent machines that interact with the world through language, vision, and actions, and perform useful tasks in areas like medicine, education, and environmental monitoring.
Graduate students Maddie Pelz and Junyi Chu, from the Early Childhood Cognition Lab led by Laura Schulz, talk with MOS host Meg Rosenburg about how observing young children at play can help us understand how people learn, how researchers design experiments to probe children's thinking and problem solving, and the challenges and rewards of studying how children learn.
Graduate students Dana Boebinger, Andrew Francl, and Malinda McPherson, from the Laboratory for Computational Audition led by Josh McDermott, talk with MOS host Meg Rosenburg about how our sense of hearing is shaped by our auditory experience, how the brain interprets sounds, and what we can learn from building computer models whose behavior resembles human audition.
Watch graduate student Jarrod Hicks and postdoc Mengmi Zhang, along with members of other NSF Science and Technology Centers (STCs), display their science communication skills in the semi-finals of the Reach Out Science Slam competition. Jarrod reveals some special challenges that arise from ambiguity in sensory perception, and Mengmi, dressed as Waldo, talks about how humans engage in visual search.
Our brains are wired with specific regions for face recognition, color perception, language, music, and even for thinking about how other people think. MIT neuroscientist Nancy Kanwisher reveals the techniques used to localize brain activity and to track its development from infancy in this talk at the Museum of Science, Boston.
MOS educator Meg Rosenburg describes a new citizen science website that connects families with researchers looking for participants in scientific studies to help advance our understanding of child cognition. CBMM researchers Laura Schulz and Julian Jara-Ettinger helped to create the Children Helping Science website.
What we think we see is not always what we see, and this has implications for relationships, public safety, crime reporting, witness testimony, and even parenting. How does the brain prioritize among multiple visual stimuli? Cognitive scientist Farahnaz Wick reveals new insight into human recall of visually dynamic scenes in this talk at the Museum of Science, Boston.
MOS educator Meg Rosenburg discusses a new study from the laboratory of Nancy Kanwisher showing that the face-specific region of the brain lights up in response to touching 3D-printed faces in both sighted and congenitally blind volunteers.
What is AI? Can we simulate human intelligence with a computer? What are useful applications of AI and questions for society to consider as AI systems become more pervasive? MOS educators Megan Litwhiler and Meg Rosenburg explore these questions and how artificial neural networks and deep learning have helped researchers to build intelligent machines for applications such as face recognition, agile robots, and self-driving cars.
Scientists are using artificial intelligence and our understanding of the human brain to teach computers how to see. MOS educators Meg Rosenburg and Megan Litwhiler talk about how artificial neural networks that resemble neural circuits in the brain's ventral visual pathway can be trained to perform visual tasks like recognizing objects and controlling self-driving cars, and how they can also solve problems in areas like web search, speech recognition, and navigation.
Explore the CBMM playlist on the Museum of Science, Boston YouTube channel to learn about human and artificial intelligence, the human mind and brain, and exciting applications of AI and neural networks to solve important problems in fields like medicine and astronomy.