Baylabs: Impacting billions of people by increasing quality, value, and access to medical imaging (22:59)
Date Posted:
July 5, 2017
Date Recorded:
August 22, 2016
Speaker(s):
Charles Cadieu, Baylabs
Brains, Minds and Machines Summer Course 2016
Description:
Baylabs seeks to expand global access to medical imaging with low-cost and safe ultrasound technology for the diagnosis of health conditions such as rheumatic heart disease (RHD). The success of Baylabs' medical diagnosis system is enabled by the application of deep learning methods used in the DiCarlo Lab to study object recognition in humans and primates. This technology can distinguish the severity of RHD and has the potential to identify individuals at risk for RHD before they exhibit recurring symptoms.
Cadieu, C. F., Hong, H., Yamins, D. L. K., Pinto, N., Ardila, D., Solomon, E. A., Majaj, N. J., & DiCarlo, J. J. (2014). Deep neural networks rival the representation of primate IT cortex for core visual object recognition. PLoS Computational Biology, 10(12), e1003963.
CHARLES CADIEU: I'll give a talk about my company, Baylabs, and share some of the thinking we've had behind the creation of this company and what we want to try to accomplish with it. And so in the previous talk, we saw technical details and a little bit of the ecosystem of deep learning, where these things are going.
With Baylabs, we really wanted to try to create one of those vertical companies and go after a very specific type of market. And a lot of that thesis has come from lessons about what deep learning is good at now and where it can go next, and also from lessons we learned through previous entrepreneurial efforts of trying to do horizontal plays, doing SDKs and APIs and things like that.
And so in this case, we really wanted to go off and impact a certain type of area and have a really big positive impact out there in the world. And overall, obviously, medical has a great trajectory for providing positive impacts to the world, but we thought about it in maybe a unique way, and I think that's one of our key insights. Instead of just trying to make radiologists better, which is, I think, a great thing to do and deep learning can definitely impact that direction, we thought about how you can actually bring medical imaging to a broader scope of people, how you bring it further into different parts of the first-world medical system and also the developing-world medical system.
And we think that this has impacts here in the first world for reducing health care costs and moving medical imaging more towards preventative medicine. We think that's a great direction that we can impact, and it also has a great impact on the economics of a startup. And then, also, as these technologies get developed, you can bring them to completely new settings, and I'll describe a little bit about that today.
So at Baylabs, following that direction, we're trying to give a doctor anywhere, anytime the ability to see inside your body and to see disease more accurately and earlier in the disease trajectory. And the big question is, how do you do that, of course. Today, I'll give you a little bit of the thesis behind why we picked this direction, some of those early deep learning results as to why we think this type of problem is now tractable with some additional technical work, and then some of the early stages of success that we're trying to push out there into the world.
So in terms of that impact, here I'll just bring up this woman, Angelique, who I'll introduce you to a little bit later in the talk. She's an individual who lives in Rwanda that we think we can have a direct, immediate impact on right now. And she was very fortunate that she was interacting with a group of first-world medical practitioners that found her disease and were able to treat her in this situation. But there are scores of other individuals in her same shoes where the necessary tools just don't exist, and we're really dependent, essentially, on first-world medicine somehow getting to these locations.
So we think that bringing those experts along through devices, through deep learning, has huge potential for positive impact. I'll describe that a little bit further in the talk. But, of course, Rwanda seems like a really faraway place, and the access there is really far out on the spectrum. Here in the first world, there are all these situations where it would seem that even slight tweaks can have big impacts on the cost of providing health care.
So, for example, whether it's the radiologist in that dark room reading imagery, where we may be increasing their accuracy; whether it's in the back of an ambulance as it's racing to the emergency room, getting earlier information to the emergency room where the doctor is sitting there waiting for that ambulance to arrive; or whether it's on a floor of the hospital that doesn't have access to medical imaging and you're waiting for a professional to come down to get you a diagnosis.
And so even these simple little changes, we think, can have impacts on first-world medicine, and further down the line, even in your primary care checkup. The big question is, why are we still today using the stethoscope as kind of the primary way that your doctor looks inside your body in that first interaction? Why can't we bring them technology to do much better?
So that's one of the major questions that we asked when we created the company: how can we bring medical imaging to more people and into more situations where we'll make an impact, all while increasing quality and increasing value, so really thinking about efficiencies in this whole space? Of course, we think this is a big problem within the American health care system, where we're spending about $8,000 per person per year. And within that $2.5 trillion ecosystem, there's about $100 billion spent on medical imaging per year.
And we think that the key to reducing these costs and providing earlier access to imaging is the application of deep learning to ultrasound. We think the medical imaging modality of ultrasound just has a very unique ability to impact this. And, of course, today, part of that $100 billion is ultrasound in the American system. But we think that, as a proportion, ultrasound should be used more and more, and it has a lot of high-level characteristics that we think are really beneficial.
So a lot of the problems we look at, we analyze kind of from first principles. My co-founder has a physics background and got his PhD in string theory, so we often look at a situation and think about, OK, how should it be from first principles, and then what is the missing gap that exists today? Of the ways we can look inside the body, of course, there are a myriad today. But ultrasound just has all the right characteristics, at a high level, for going anywhere, for being safe, and for being effective. So it's safe: ultrasound emits no ionizing radiation, and many people, of course, are familiar with it from fetal monitoring. It can be safely used and there are no known side effects, compared to, for example, CT, where from a population-health level you could essentially be increasing the rates of cancer by doing those types of scans in more cases.
It's also very effective: in expert hands it has comparable or superior diagnostic capabilities to other modalities. And it's quite affordable: today you can get very capable ultrasound machines for $10,000 to $20,000. Compare that to an MRI, which starts at about $1.5 million or so. So it has all the right ingredients. And importantly, of course, Moore's law is on the side of the ultrasound machine, making it smaller and smaller.
And I think it's interesting-- many people here are, I'm sure, in machine learning and don't know the history of ultrasound machines. So it's kind of interesting to look back and see where this has been and where it's going just in the very short term here. Back in 1985, this was the first real-time Doppler ultrasound. Doppler is a specific type of processing done on ultrasound signals, and before this they were either doing it offline, or they would literally send the data from the machine in the examination room to another room, where there was a computer the size of the room; they'd do the processing there and bring it back. So integrated circuits are obviously making this thing smaller and more capable over time.
And then, in the early 2000s, this company, Sonosite, really helped create this point-of-care revolution, where you actually have doctors using these laptop-like ultrasound devices right by the bedside. So instead of having to send you off to a special imaging center, now they can start to do ultrasound right there with you. That really created this point-of-care revolution, and you can look up a number of very nice TED Talks on this development, on how these very forward-thinking doctors are starting to use this imaging modality in new settings, like some of the settings I talked about at the beginning of the talk.
And just last year, we're seeing tablet ultrasound. So here in this image, this is another device by Sonosite. That is the entire ultrasound machine: what they're holding there is a little screen, it has a little electronics panel on the back, and then the probe. And they're getting even smaller and more portable. Here's one from Philips released around the same time. The contrast isn't very good there, but essentially there's a single handheld probe; all the electronics are inside that probe, and it connects to your smartphone over just a standard interface.
So all of these technologies, and of course Moore's law, help us pack more electronics into these ultrasound machines and make them more capable and smaller. The question becomes, OK, why don't we see this in every doctor's pocket? Why hasn't this already happened in some ways? And how do we increase the quality and value that's coming from these machines?
And a key insight, we think, is that a lot of the friction today is no longer in making the machine smaller. It's not about getting it smaller and more compact; it's all about the experts that go along with that machine and make it actually effective. So this is a great problem, obviously, for machine learning and deep learning: to fill that gap between what's currently the status quo, which is essentially that there are the sonographers, the professionals that hold the scanners and go to school for two years; then there are other mid-level practitioners and internist-type doctors that are going to week-long classes just to sort of get their feet wet in this space, and they often say they don't feel comfortable using these things for another two or so years.
So what we're really trying to do is take that expertise of the best clinician out there and instead put it in that machine. When you look at these experts that do ultrasound interpretations, what we came to see is that a lot of the cognitive process they engage in is very much like the object recognition in a glance that we were studying in Jim DiCarlo's lab, which of course has been kind of the bread and butter of some of the object recognition scientists and neuroscientists that we've heard talks from and that we've worked with.
Here's a little mock-up of what it looks like if you were to stand over the shoulder of our chief medical officer, who's been doing this for many, many years. He would look at this image and, within essentially a blink of an eye-- I mean, he only needs a few seconds to look at it-- all of a sudden he gets these diagnoses about what's going on in the image. And you can really just fly through this. It's very similar in radiology practice as well, where they very quickly go through, look at these images, provide assessments, and move on to the next one.
And so I'll just go through some of these here. There are many things here that look completely identical to me, but under the expert eye they are quite different disease states and different disease trajectories that these individuals are on. And as I said, this looked a lot like the problem we were studying in the laboratory, which is this image recognition in-a-glance problem. I'm sure Jim is giving a talk here as well. Yes.
So, essentially, this is from Jim's lab where we were doing-- in this case, this is literally one of the things we were running on MTurk. These are just screenshots taken from that, a video taken from that, where we were testing that object recognition in a glance in some of these complex settings. And we know, of course, that people are good at this, primates are good at this, and that the neural basis for this runs from the retina through the LGN and V1 up the ventral stream, the neural substrate that subserves this.
And then, in that same period of time, in that 2012 era, we started to see interesting results. So the problem we were studying in Jim's lab is object recognition in a glance. Jim's lab has been in the business of very finely quantifying the performance of the neurons and of the behavior. And in one of the projects we looked at the performance of neurons-- and we won't get into too many details of this quantification, but more or less we were trying to look at the quality of the representational space that the neurons created from these in-a-glance image observations.
And for about a decade we were working on this problem, and we could measure these neurons in IT, and the performance was just outstanding, especially compared to the latest and greatest models that we would test against. So I'll just go back and forth. Before effective deep learning strategies, this is about the regime that we were in: we would test models and they would just be scratching the surface above zero. So we were always struggling against this, where we knew this IT performance was a great thing to try to achieve and go for, but really our day to day was a lot of grueling work down at the bottom near zero.
And so we became very excited, of course, when the tables started to turn in that 2011, 2012 time frame, when some of the models we created in the lab-- I think this is the HMO model, the one that Dan Yamins created in Jim's lab-- and then, working with other people in the field like Alex Krizhevsky and Matt Zeiler, starting to test their models, all of a sudden we saw these huge performance increases.
And so I actually remember-- it's like a flashbulb memory for me, having worked on trying to model IT and make the performance of object recognition work for so many years without really huge success. I collaborated with Alex Krizhevsky and essentially got him to take his model, which he had presented at NIPS just a week or so before, and run our images through it, the Krizhevsky AlexNet. He sent me back the features, we ran them through this pipeline, and all of a sudden, the performance was about as good as IT neurons.
And so that was this moment of, wow, this stuff is actually working. And it means so much, I think, for the study of neuroscience and how we can push the boundaries there, and also for all these applications, where we can actually apply these technologies to have positive impacts out there in the world.
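To make the comparison just described concrete, here is a minimal sketch of that kind of pipeline: extract features from a pretrained deep network for a set of images, fit a simple linear readout on object categories, and compare the readout accuracy you would get from the model features with the one you would get from recorded IT responses. This is not the lab's or Baylabs' actual code; the choice of layer, the image loader, and the readout settings are assumptions for illustration.

```python
# Minimal sketch: compare a deep network's representation with a neural population
# by putting both through the same cross-validated linear readout.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Pretrained AlexNet; we use the penultimate (fc7) activations as the "representation".
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_features(image_paths):
    """Return an [n_images, n_features] array of penultimate-layer activations."""
    feats = []
    with torch.no_grad():
        for path in image_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            h = alexnet.avgpool(alexnet.features(x)).flatten(1)  # conv features
            h = alexnet.classifier[:-1](h)                       # up to fc7
            feats.append(h.squeeze(0).numpy())
    return np.stack(feats)

def readout_accuracy(representation, labels):
    """Cross-validated accuracy of a linear readout: the common currency for
    comparing model features with a recorded neural population."""
    clf = LinearSVC(C=1.0, max_iter=10000)
    return cross_val_score(clf, representation, labels, cv=5).mean()

# Usage (hypothetical data and loader names):
# image_paths, labels = load_object_recognition_set()
# print("model readout:", readout_accuracy(deep_features(image_paths), labels))
# print("IT readout:   ", readout_accuracy(it_responses, labels))  # [n_images, n_neurons]
```

The key design point is that the model and the neurons are scored with the same downstream classifier, so the comparison reflects the quality of the representational space rather than any task-specific tuning.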
And so with that, of course, from the science point of view we think ventral streams are adapted to this naturalistic object recognition and classification problem. And when we look at these expert clinicians, these expert doctors that have been trained, sometimes over decades, to recognize these attributes in ultrasound imagery in a glance, we think what we need to do is train these deep neural networks on that ultrasound imagery.
That's really where the impetus for the company comes from: to take these deep learning technologies that excel at understanding the world around us, whether that's imagery or auditory signals, and kind of turn them back inwards, to try to help us make ultrasound as effective as it can be and to really unlock ultrasound's potential, essentially bringing that expert doctor onto every single one of these ultrasound devices.
And we think that in the first-world health system, this is a really impactful opportunity to replace the stethoscope, or at least to augment the stethoscope, so that it becomes part of every single physical examination. And we know from clinicians out there on the forefront that this can have a huge impact on the accuracy of diagnoses, and in terms of overall reduction of cost in the health system: better outcomes for patients, lower costs. It has all the right ingredients to be the right thing to do.
So just getting back to that story about Angelique, who was treated by Team Heart, a non-profit organization. Angelique was suffering from rheumatic heart disease, and rheumatic heart disease is a very large problem out there in the world. So here are some facts about it: rheumatic heart disease is an inappropriate autoimmune response to rheumatic fever.
So in these areas where they don't have ready access to antibiotics, people are essentially getting rheumatic fever, and certain individuals have this response where their immune system attacks the valves of their heart. In those individuals, through repeated infections, the attacks become more and more severe, the scarring in the valves becomes more and more severe, and essentially their hearts begin to fail, and they end up dying in their teens and 20s.
So this has a huge impact not on people as they age, but really on children. It disproportionately falls on people, of course, that lack access to health care, and on children. And it's currently responsible for almost 250,000 deaths per year, which really means someone is dying from this disease about every other minute.
And when you look at the efforts people have undertaken to try to prevent this disease, of course the simple answer is, hey, we just need to provide first-world medicine to all of these individuals, but that of course is really far off from becoming economically viable. The main alternative that's promoted right now is what they call prevention of recurrent attacks. So you essentially need to identify the individuals that are prone to these inappropriate autoimmune responses.
And in order to identify those individuals, an ultrasound of the heart is the most effective way to do it. Once identified, the treatment is very simple, very effective, and low cost: you can give that person a regimen of antibiotics, and essentially that prevents further progression of the disease, and they go on to live a pretty normal life from there on. But the remaining ingredient to make that a reality is really this expert ultrasound interpretation.
So, right now, the way to do this is we send people from the first world, experts in interpreting and performing ultrasounds, to these developing areas, where they can begin scanning and identifying those individuals. And the question then is, of course, OK, can deep learning help? That's one of the big efforts that we have engaged in internally.
So we are now working with clinicians, and we've developed deep learning algorithms that can distinguish the severity of rheumatic heart disease in ultrasound imagery, even from a portable scanner. And we've actually sent our prototype devices to the developing world, for example to Rwanda, to work with local clinicians as they gain training in how to interpret this imagery.
And I think the biggest excitement is that by bringing that expert eye into the machine, and with these machines being very low cost, we can really change the treatment paradigm for rheumatic heart disease. There's a lot of validation we still need to do. But essentially what I think this opens the door for is providing these very low-cost handheld scanners that plug into a smartphone, running this deep learning software that distinguishes rheumatic heart disease.
A nurse in these settings-- so there are nurses there, they just don't have the clinical knowledge or expertise to interpret this imagery-- can then get this indicator from the deep learning algorithm telling them the severity of rheumatic heart disease, and then give that particular individual the antibiotic regimen. So we think that can have a huge impact, at a very low cost, on preventing the impacts of this disease. And that's one of the many efforts that we're working towards right now.
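As a rough illustration of what such a severity indicator could look like, here is a small, hypothetical sketch of a frame-level classifier fine-tuned to grade rheumatic heart disease from echocardiogram images. This is not Baylabs' actual model; the grading labels, preprocessing, backbone choice, and file names are assumptions for illustration only.

```python
# Hypothetical sketch: fine-tune an ImageNet-pretrained CNN to grade RHD severity
# from single echocardiogram frames and emit a label plus confidence.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

SEVERITY_CLASSES = ["normal", "borderline", "definite"]  # assumed grading scheme

def build_model(num_classes=len(SEVERITY_CLASSES)):
    """Start from a pretrained backbone and replace the final classification layer."""
    net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # ultrasound frames are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def grade_frame(model, frame_path):
    """Return a severity label and confidence for one ultrasound frame."""
    model.eval()
    x = preprocess(Image.open(frame_path)).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1).squeeze(0)
    idx = int(probs.argmax())
    return SEVERITY_CLASSES[idx], float(probs[idx])

# Training would proceed as ordinary supervised fine-tuning on expert-labeled studies,
# e.g. nn.CrossEntropyLoss with an Adam optimizer over a DataLoader of labeled frames.
# Usage (hypothetical file):
# model = build_model()
# label, confidence = grade_frame(model, "echo_frame_0001.png")
# print(label, confidence)
```

In practice a deployed system would aggregate predictions over many frames of a study rather than trust a single image, but the sketch shows the basic shape of turning an expert's in-a-glance judgment into a per-frame indicator.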
And I'll just highlight some of our team here. Actually, a lot of people come from neuroscience, and a lot of people from MIT. Ha Hong has been a great member of the team working on the engineering efforts; he is one of the PhDs out of Jim's lab. Other individuals are Nicolas Poilvert, who also worked in deep learning with David Cox, and also Stefan Mallah. Other people from neuroscience: Kilian Koepsell, my co-founder, and we've been working together since we met back in 2005 at the Redwood Center at Berkeley. Natalia Bilenko, who just finished her PhD in Jack Gallant's lab, is another great team member.
I think it's not a mystery why all these neuroscience people, coming from across the cognitive-to-computer-science spectrum, are interested in this problem. What we're really trying to accomplish is to understand the human performance of these expert doctors and to very finely quantify it. That's a lot of what we were doing in Jim's lab, and a lot of people are doing this throughout the field: very precisely quantify the status quo, then mimic that status quo in machines, and then retest it in a very scientifically rigorous way, asking how you might change clinical practice and change these paradigms so that imaging is done earlier to save costs and improve outcomes over time. And I just have to give one last plug: we are hiring. So if anyone is interested, come talk to me.
[APPLAUSE]