AI Agents Help Worms Find Food by Integrating with Their Nervous Systems [Harvard University]

June 14, 2024

* This article is about recent work from the Kreiman lab outside of its CBMM projects.

There is a class of AI called reinforcement learning (RL), which works by taking actions in an environment and then learning from the outcome. RL agents have become extraordinarily powerful in recent years, beating human experts at chess and Go, learning to play video games, and learning to control robotic systems. In every case, RL has achieved these feats by maximizing a reward signal from the environment: points in a game, for instance, or proximity to a goal.
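To make the reward-maximization idea concrete, here is a minimal tabular Q-learning sketch on a toy one-dimensional world (purely illustrative, not the setup used in the paper): the agent takes actions, receives reward only at a goal state, and gradually learns a policy that walks toward the reward.

```python
import random

N_STATES = 6          # positions 0..5; reward only at the rightmost state
ACTIONS = [-1, +1]    # step left or step right
GOAL = N_STATES - 1

def step(state, action):
    """Environment: move, clip to the arena bounds, reward 1.0 only at the goal."""
    nxt = max(0, min(GOAL, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the current estimate, sometimes explore
            a = rng.randrange(2) if rng.random() < eps else max(range(2), key=lambda i: q[s][i])
            nxt, r, done = step(s, ACTIONS[a])
            # Q-learning update: nudge Q toward reward plus discounted best future value
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
    return q

q = train()
# Greedy policy after training: from every non-goal state, "step right" wins
policy = [max(range(2), key=lambda i: q[s][i]) for s in range(N_STATES)]
```

The same loop structure — observe, act, receive reward, update — underlies the much larger agents that master games and robots; only the state representation and the learning machinery scale up.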

We wanted to see if RL could play a different kind of game. Could an artificial agent improve behaviors in an animal by learning to control a living nervous system?

In a recent paper, we used the nematode worm Caenorhabditis elegans to find out. C. elegans grows to about a millimeter long and has only 302 neurons. Because scientists have been studying this animal for a long time, we had a lot of tools at our disposal, including optogenetics, a way to modulate C. elegans neurons with light. We could genetically modify animals so that certain sets of neurons were activated by blue or green LEDs, which provided a set of controls that an RL agent could learn to use.

We wanted to see whether agents could use these controls to navigate animals to targets. In each trial, we put a single C. elegans in a 4 cm arena and chose a random target location. RL agents were fed images of the worm from a camera while continuously deciding whether to turn a light on or off, which allowed them to interact with animals’ nervous systems in real time. We found that with a few hours of training data, agents could not only direct animals toward the target, but they could also learn to do so for different sets of neurons, even though some of these neuronal sets had opposite effects on nematode behavior. For instance, some sets made animals go forward faster, and other sets made them reverse. In bigger nervous systems, neuron identities and their roles in behavior can vary a lot between individuals, so we were excited to see that the agents could tailor themselves to whatever neurons they were connected to.
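The image-in, light-out cycle described above can be caricatured in a few lines. Everything here is a hypothetical stand-in: we assume a tracker that reduces each camera frame to the worm's position and heading, and a trivial hand-written rule in place of a trained agent, just to show the shape of the closed loop.

```python
import math
from dataclasses import dataclass

@dataclass
class WormState:
    x: float        # position in the arena (cm)
    y: float
    heading: float  # direction of travel, radians

def extract_state(frame):
    """Placeholder for the vision step: the real system would locate the
    worm in a camera image. Here 'frame' already is the state."""
    return frame

def light_decision(state: WormState, target=(2.0, 2.0)) -> bool:
    """Toy stand-in for the RL policy: switch the light on when the worm
    is heading away from the target (assuming the stimulated neurons
    trigger a reversal), off otherwise."""
    to_target = math.atan2(target[1] - state.y, target[0] - state.x)
    # Wrap the heading error into [-pi, pi)
    err = (state.heading - to_target + math.pi) % (2 * math.pi) - math.pi
    return abs(err) > math.pi / 2   # True -> LED on

# One pass of the control loop on two mock "frames"
toward = WormState(1.0, 1.0, math.atan2(1.0, 1.0))    # pointing at (2, 2)
away   = WormState(1.0, 1.0, math.atan2(-1.0, -1.0))  # pointing the other way
on_off = [light_decision(extract_state(f)) for f in (toward, away)]  # [False, True]
```

A trained agent replaces the hand-written rule with a learned mapping from observations to light decisions, which is what lets it adapt to neuron sets whose effects it was never told in advance.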

Could we then discover what our RL agents had learned? We gave agents simulated animal states to see the decisions they would make in each case, and mapped out the agents’ policies for each set of neurons they had learned to use. These policy maps showed, in a causal way, how different neurons influenced directed movement in the worm, making our system a potentially useful tool for neuroscience.
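The probing procedure can be sketched as follows (all specifics hypothetical, with a simple threshold rule standing in for a trained agent): sweep over a grid of simulated worm states, query the policy at each one, and collect the on/off decisions into a map.

```python
import math

def policy(heading_err: float) -> bool:
    """Stand-in agent: light on when the worm points away from the target."""
    return abs(heading_err) > math.pi / 2

def policy_map(n_bins=8):
    """Query the policy over a grid of simulated heading errors and return
    the (state, decision) pairs that make up the map."""
    errs = [-math.pi + (2 * math.pi) * i / n_bins for i in range(n_bins)]
    return [(e, policy(e)) for e in errs]

pmap = policy_map()
# Reading the map off recovers the rule: the light is on only for large
# heading errors, i.e. when the worm is pointed away from the target.
```

With a real trained agent, the same sweep over simulated states exposes which neuron sets the agent activates in which situations, turning the black-box policy into an interpretable picture of neuron function.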

Finally, we wanted to know what kind of animal-AI system we had built. Was the animal just a robot that the AI could control? Or was there cooperativity between the living and artificial neural networks?

Read the full article, with a link to the paper, on the Harvard University website using the link below.
