Tools for mapping and repairing the brain [part 2] (11:11)
Date Posted:
June 2, 2016
Date Recorded:
July 8, 2015
CBMM Speaker(s):
Ed Boyden
Description:
Ed Boyden, Professor of Biological Engineering and Brain and Cognitive Sciences at MIT, leads the Synthetic Neurobiology Group, which develops tools for analyzing and repairing complex biological systems such as the brain, and applies them systematically to reveal basic principles of biological function and to repair these systems. In this three-part lecture, he discusses tools for mapping and repairing neural circuitry using expansion microscopy (part 1), whole-brain imaging with light-field microscopy (part 2), and optogenetics (part 3).
Resources:
Ed Boyden’s Lab website: The Synthetic Neurobiology Group
Karagiannis, E. D. & Boyden, E. S. (2018) Expansion microscopy: Development and neuroscience applications, Current Opinion in Neurobiology 50:56-63.
Klapoetke, N. C., et al. (2014) Independent optical excitation of distinct neural populations, Nature Methods 11:338-346.
Prevedel, R., et al. (2014) Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy, Nature Methods 11:727-730.
Tye, K. M. & Deisseroth, K. (2012) Optogenetic investigation of neural circuits underlying brain disease in animal models, Nature Reviews Neuroscience 13:251-266.
ED BOYDEN: The expansion method is great at looking at preserved tissues, but obviously a living brain will not be very happy if it's expanded a hundredfold, so we have to have another method for looking at brain dynamics.
So the way we do this is to try to image the brain dynamics. And the basic concept is fairly straightforward. In fact, the idea was published by Ted Adelson's group here at MIT about 30 years ago. And the basic concept is to try to build microscopes that work the way that our visual system works.
So this is a regular old fluorescence microscope with one difference. There's an array of lenses inserted into the middle. Each lens captures a different angle of the object. And so kind of like a tomography machine, like a CT machine, you can actually reconstruct in three dimensions.
Now that's actually how our visual system works. We have two eyes. Each eye sees the world from a slightly different angle, and our brain can reconstruct in 3D what we're seeing. And it comes at a cost. The cost is resolution. Because you're trying to use a 2D imaging surface, like a retina or a camera, to interpret a 3D world.
Now for us and the world, it's not too big a cost, because the world is kind of 2D, right? Along any single line that we look at, we see one thing. And so it's kind of like a 2D projection. But the brain is 3D. So this kind of microscopy has a harder job than our eyes do.
And again, we always put all of our protocols on the web, so you can go to lightfieldscope.org. This is known as light-field or plenoptic imaging.
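To make the geometry concrete, here is a minimal numpy sketch of the simplest light-field computation: synthetic refocusing by shift-and-sum. The array layout and the `refocus` function are illustrative assumptions, not the lab's code; the actual reconstruction in the Prevedel et al. paper uses 3D deconvolution of the light field rather than this simple sum.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-sum synthetic refocusing of a 4D light field.

    lightfield: array of shape (n_u, n_v, height, width), one
                sub-image per viewing angle (u, v).
    alpha: relative depth of the synthetic focal plane
           (alpha = 1 is the native focal plane).
    """
    n_u, n_v, h, w = lightfield.shape
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    out = np.zeros((h, w))
    for u in range(n_u):
        for v in range(n_v):
            # Each view is shifted in proportion to its angular offset
            # from the optical axis, then all views are averaged.
            du = int(round((u - cu) * (1.0 - 1.0 / alpha)))
            dv = int(round((v - cv) * (1.0 - 1.0 / alpha)))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (n_u * n_v)

# Sweeping alpha yields a synthetic focal stack -- "slices" through the
# sample -- all computed from a single camera frame.
lf = np.random.rand(5, 5, 64, 64)  # stand-in for real light-field data
stack = [refocus(lf, a) for a in (0.8, 1.0, 1.25)]
```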
So here, we can actually take a worm-- this is the worm C. elegans, which is a really useful organism in biology. It has only 302 neurons. And this is a worm that's been engineered so that every cell has a fluorescent calcium indicator in it: when a neuron is active, it'll blink.
Here you can see the head of the worm. And these are slices through the head of the worm going from top to bottom. If you stack these slices on top of each other, it would look like a 3D worm head. But all these slices were made without actually physically slicing. This is all done just by interpreting the data from one camera frame using tomographic reconstruction.
And so now you can record neural activity of many neurons-- here are 75 neurons that were being imaged across time. And since there are no moving parts, you can go as fast as you want.
Here's a movie of the worm. And if you look carefully, each neuron is blinking at you. So this is, to my knowledge, the first imaging of a whole organism's nervous system. A small one, but you've got to start somewhere. And so there, you can see the worm thinking or whatever it does.
This is a slightly larger organism, the larval zebrafish. Its brain has 100,000 neurons. And we can analyze the neural activity throughout the whole brain while a sensory stimulus is being delivered.
Now what about very large brains? What about the human brain? We've also been working on electrodes, and in particular, on arrays with very densely packed electrode pads. So here's a traditional electrode array made out of silicon with gold pads. And you can see four gold pads here and four there.
Here's what we're making: needle-like electrode arrays that you can put into the brain, with hundreds or thousands of electrodes-- a vast increase in the number of possible electrodes. So how do we do it? We use nanolithography, basically using electron beams to define the paths for the metal traces.
Now why bother? Well, to get more neurons, but it's not just that. If you have many neurons and too few electrode pads, it's very hard to take that complex signal and break it down into individual neural signals. Obviously, if all of us started talking and we had only two microphones, it would be very hard to unravel our voices and assign them back to us, right?
But if there were five of us in the room and 100 microphones, it would be very easy, right? It's an over-constrained linear algebra problem. And there are pretty good algorithms, known as blind source separation or independent component analysis, which have been applied to this over-constrained inversion problem where you have more recording devices than you have speakers.
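As a rough illustration of that over-constrained setup (a toy simulation, not the lab's actual pipeline), here is a Python sketch using scikit-learn's FastICA: five non-Gaussian sources are mixed onto 100 channels, and ICA recovers them up to order and scale.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Toy setup: 5 "speakers" (neurons) and 100 "microphones" (pads).
n_sources, n_channels, n_samples = 5, 100, 10_000
sources = rng.laplace(size=(n_samples, n_sources))    # spiky, non-Gaussian
mixing = rng.normal(size=(n_sources, n_channels))     # each pad hears a mix
recordings = sources @ mixing
recordings += 0.05 * rng.normal(size=recordings.shape)

# With far more channels than sources, the inversion is over-constrained
# and ICA can recover the underlying signals (up to order and scale).
ica = FastICA(n_components=n_sources, random_state=0)
recovered = ica.fit_transform(recordings)             # (n_samples, 5)
```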
So our goal is to have more electrodes than neurons, at least locally, right? And to see if we can then automatically unravel the neurons. And it works. I can show some data here. Here's one of those dense electrode arrays. We're going to zoom in on a few electrode pads here.
And color-coded here are the different neurons extracted by independent component analysis. The blue hue means that those electrodes picked up more of that neuron's activity. So this neuron is toward the middle, this neuron is on the upper part of the electrode, this neuron is on the lower part, and so forth.
And what you can see is that the spatial over-sampling of the neural activity allows us to even assign each neuron to a distinct point in space, right? We're kind of using electrodes to image. I guess that's kind of a theme of our work now. We're trying to use electrodes to act more like cameras and image. And then, as I showed you earlier, to use cameras more like electrodes-- not to obtain images, but to obtain temporal activity, right? Blurring the lines between electrical and optical recording.
So anyway, once you've sorted the neurons out, we can then try to estimate the probability of an error. Given a spike with this particular distribution across the pads, what's the probability that it actually came from a different neuron? And that probability turns out to be very, very small.
So we're very excited now that we can make potentially the analysis of neural activity an automatable thing by spatially over-sampling the neural activity.
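Here is a toy version of that error calculation (an illustrative assumption, not the paper's actual estimator): if each unit has a characteristic amplitude footprint across the pads and the noise is Gaussian, the posterior probability that a spike came from each unit follows from the likelihoods, and with many pads the wrong units are almost completely ruled out.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: each unit has a characteristic amplitude "footprint"
# across the pads; pad noise is i.i.d. Gaussian.
n_units, n_pads, sigma = 3, 50, 0.1
templates = rng.normal(size=(n_units, n_pads))

# Observe one spike actually generated by unit 0.
spike = templates[0] + sigma * rng.normal(size=n_pads)

# Posterior over units under a flat prior: softmax of the Gaussian
# log-likelihoods. With many pads, the wrong units get crushed.
log_lik = -np.sum((spike - templates) ** 2, axis=1) / (2 * sigma ** 2)
posterior = np.exp(log_lik - log_lik.max())
posterior /= posterior.sum()
print(posterior)  # ~[1, 0, 0]; the error probability is 1 - posterior[0]
```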
Going beyond this, can we build 3D electrode arrays? Can you record throughout the entire brain to follow information as it comes in through the senses, gets processed into decisions and emotions, and finally results in actions? And we're working now on ways to assemble 3D devices that can record neural activity throughout entire brains.
Now one thing that's kind of an interesting story is, how do you actually get all the data? How do you store all the data? And we started realizing that storing data on computers is expensive. So we decided, what if we built minimal computers? A processor-- actually, an FPGA-- that stores data directly to disk, with nothing in between? And then those FPGAs talk to each other over ethernet, and that's it?
So we did a high-level design here at MIT, and we found a local company, LeafLabs, which was started by a bunch of former undergrads here, and they were looking for new projects to do. So we said, hey, why don't you build some of these? And they did the low-level design and wrote the actual code. We helped them get an NIH small business grant to get going, and now they're selling these to customers.
And it's a very scalable system for data acquisition. So now people are thinking about using it for all sorts of scalable data acquisition-- from imaging systems, from medical devices, all sorts of things eventually.
And one possibility is that these could also be useful for data analysis, right? You could use these to do a new kind of cluster computing, as it were.
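To give a feel for that architecture (purely a conceptual sketch in software-- the real system is an FPGA writing to disk, and the port, file name, and framing here are invented for illustration), here is a minimal Python receiver that streams incoming bytes straight to a file with nothing in between:

```python
import socket

# A listener that writes incoming packets straight to a raw file, with
# no database or processing layer in between. Port and file name are
# hypothetical, chosen only for this sketch.
PORT, CHUNK = 9000, 65536

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, _addr = srv.accept()             # one acquisition stream
    with conn, open("acquisition.raw", "wb") as f:
        while True:
            data = conn.recv(CHUNK)
            if not data:                   # sender closed the stream
                break
            f.write(data)                  # raw bytes straight to disk
```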
So before I move on to the third story about control, I just want to acknowledge all the people who led these projects. The light-field imaging was driven primarily by Youngjin Yoon in our group and Robert Prevedel in [INAUDIBLE] group. The 3D probes were driven by Jorg Scholvin, a postdoc in our group. And the high-speed data acquisition and data analysis has been driven by Caroline Moore-Kochlacs, Jake Bernstein, Justin Kinney, and the LeafLabs company.
Any questions on imaging or recording before we move on to control? Yes?
AUDIENCE: So when you were talking about [INAUDIBLE], what do the different readings represent [INAUDIBLE]?
ED BOYDEN: When you record neural activity, each pad picks up just a single voltage trace, right? And it has big spikes and little spikes and all sorts of other spikes. We then run an algorithm called independent component analysis that tries to break this down into independent, uncorrelated spike shapes.
And then what is plotted here is essentially each component that comes out of that algorithm. So unit 1 you could call component 1. And this component has a high amount of signal on these electrode pads and these shapes on those electrode pads, and very little signal on these electrode pads. Unit 2 is the second component, and that appears on these upper pads and so on and so forth.
So in some ways, we're using words like unit because we really want to say neuron 1 and neuron 2. But to claim it's really a neuron is tricky, right? How do you know for sure that this is just one neuron? Suppose that two neurons next to each other always fire at the exact same time. How can you tell them apart?
So unit is neuroscience lingo for, we really want to say neuron, but we're afraid somebody's going to criticize us.