BRIAN SUBIRANA: OK, so what I'd like to present today is research in progress. And I'll try to be very brief, so we have time for Bror. I'll be concentrating on, what do we remember from college, if anything? And then he will talk a bit about OK, what can we do if we forget everything? What can we do to make it useful?
The work I'll be presenting is joint work with Katarina here and Sanjay. It's work in progress. So any suggestion, any idea you have, that's great.
Let me give you the gist of what we want to say today-- basically, college learning as we know it today is a bit like encyclopedias. It's content that you put in your head and never really use, just like encyclopedias were never really used. And we need to do something about it, because the link between college learning and expertise is not very direct. And there's some good news that Bror will talk about. And what I'll be presenting also shows some insights that are probably good news about what we can do going forward.
And so this is a bit-- the logic behind the presentation we are giving. The idea is that college learning is mostly forgotten. We'll talk a bit more about what this means in two years. And if you really believe this, if you take that to the limit, this means that you need to reinforce learning. That's where Bror will come in in terms of learning engineering, how you can reinforce learning.
And the second thing is, you need to focus on things you are going to use. Don't learn Latin, because you're going to forget it. Try and focus on learning things that you're going to use.
And he'll also talk about, what can we do to learn things that focus on expertise? And what is the learning engineering that'll help reinforce learning? And eventually-- he'll also talk a bit about how you reinvent teaching, and maybe that's why he's talking about large-scale experiments to improve learning: improve what we learn, improve what expertise is, and sort of tweak it in real time.
And in terms of my piece of the talk, which is "College Learning-- Remembering or Forgetting?" we were trying to come up with a good research question. And it wasn't that obvious what the right research question was. This was the first research question we came up with. And then we did a bunch of interviews to see what people remember.
And I started with my family. This is my grandfather. He was a doctor in Spain, 500 publications. A lot of doctors say he was one of the best teachers they had. So I interviewed a bunch of his students.
And what is it that they remember? Basically nothing. Can you tell me at least something, one thing he taught you? A formula-- I don't know. A molecule-- I don't know. Some illness-- no.
Oh, he was great fun. That's what they remember. He was a funny guy, nice jokes, and they have some anecdotes. But nothing really focused on content.
So we changed it to say, OK, let's talk about academic content. We are not very interested in whether that teacher was fun or not-- what is it that you really learned from the content? And then we said, well, that's difficult as it is. So is there any academic evidence of long-term unused academic content? Because the way we had it here-- unused undergraduate academic content-- what's the content? What's forgotten, following a curve? The definition seems to be difficult.
So we said, OK, but is there any evidence-- it doesn't really matter what-- is there any research? And if you know any research, tell us. Because we haven't really found anything that indicates that you remember things beyond a certain period of time.
And the research we've done to prove this follows three areas. And I'll move very quickly through the different areas, because I don't want to spend too much time. I just want to give you the gist. And since I'm here at MIT, anybody that wants to know more about it, I'm here available.
So basically, on strategy, we did about 200 students-- sorry, 200 interviews. We interviewed people from MIT, from outside MIT, deans, students, to try and understand what the strategy was. In economics, we're working to see if people in economics have found anything that relates, say, grades to success, or attendance to success. There isn't really anything.
And in psychology, we've reviewed over 100 papers. And I'll tell you a bit about it. And that's probably more interesting to your work. So I'll spend a bit more time on this. And this is a summary of where we are going.
So let's talk about strategy and economics to begin. And really, what I want to show you is that the market doesn't value what we've studied in the classroom. There's no indication that anybody really cares about what you study in the classroom.
And so these are just some pain points from the industry, and not much is related to content. And one of the important things is that we haven't found a good way to show what the value of going to school is. It's a lot of money. A lot of money is owed.
Dropout rates-- not in Ivy League schools, but the moment you leave the Ivy League class, dropout rates are unbelievable. I couldn't believe how lousy the system is. And there isn't really any linkage to value.
And this is what people say. Recruiters-- I'll let you read this. But here, at the bottom, is what's important from Google. Now, I'm going to show a video, so if we can raise the volume.
Now, then we said, OK, let's think about testimonials. Let's see if we can find someone who says, hey, what I learned in the classroom really helped me. So we said, let's look at the highest-paid employee ever, I think. Steve Ballmer is probably the highest-paid employee ever. And he came to Harvard.
Here's what he has to say. Let's see if this works. That's in CS50-- he came and gave a class about what he learned. So his point is-- OK, he said, most of what I know I learned at Harvard. So you say, OK, that seems to be good news. But listen to what he has to say.
- Of all those many important things I learned at Harvard, it wasn't in the classroom. As I listened to David do the introduction, I thought to myself, sounds like I majored in extracurriculars when I was here. That's actually true. I attended all of one class session senior year.
BRIAN SUBIRANA: I think you listened. The point is, he said-- I think you heard it-- he basically said that he didn't really learn anything in the classroom. And then, we went and said, OK, what about MIT? What about some of the most popular courses at MIT, the top [INAUDIBLE] course at MIT, linear algebra from Gilbert Strang?
So we went and talked to him. Do you have any evidence that your students got something out of your course long-term? And you have the response right there, no need for explanation.
So then you go to MIT, and we probably say we're very happy-- number one graduate engineering program, [? law ?] number five. How are these rankings made? Nothing to do with content. It's basically reputation and research. With business schools, the way they do it is pretty lousy, hard to reproduce, but really nothing to do with content.
So that's a bit-- the evidence. We've talked about strategy. Economics is a bit the same thing. It's basically just signaling value. What people have shown is that if you've gone through the ordeal of a BA, you must be a hard-working person. So that's really the value.
And if you are a man, it's better than if you're a woman. And if you've gone to an Ivy League school, then it's even better. And they have these curves where they show the MPB over high school, depending on the year.
And that's for men. That's for women. And that's over high school. So it sort of looks good, but it's completely unrelated to content. No relationship to content. So there's really no academic evidence.
So let's talk now about psychology. In psychology, there are three things we've looked at. There's evidence that if all you care about is passing the exam, the best thing is a book or digital learning. Like the first year of the MBA-- I completely waived it. I just took the books and went through them.
And I remember with David Pitcher, we were talking about this-- and nothing like the Richard Feynman lectures on physics, if you want to read and learn. So that seems to be really, really good. And then the learning science-- we went through it. And basically, there's very little in learning science about what happens after two years.
And then there's a lot of research that I'll cover quickly about the Ebbinghaus Forgetting Curve showing that it's pretty universal. We forget anything following the Ebbinghaus Curve. So we reviewed this book. And this is something that Sanjay and I, we spend a lot of time talking about. What is learning science?
And here, there's a bunch of techniques of learning science. And if anybody is interested, we can spend a lot of time. And I have here a whole bunch of cards on the different techniques and the different references and what people say. But there isn't really anything that shows that academic content is retained. That's sort of the bad news.
The good news is that a lot of these things focus on mastery and on the best ways to retain concepts. The problem is they stop at one year, maximum two years, and most are focused on the month-to-month, semester timescale. But there's a whole bunch of techniques to help you optimize the time you spend in a subject to retain it the longest. That's the good news.
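One of the best-documented of those techniques is spaced repetition: reviewing material at expanding intervals instead of cramming. The sketch below is purely illustrative-- the starting interval and the multiplier are our own assumptions for demonstration, not values taken from any particular study.

```python
def review_schedule(first_interval_days=1, multiplier=2.0, n_reviews=6):
    """Days on which to review, for a simple expanding-interval
    (spaced-repetition) schedule: each gap between reviews grows
    by a fixed multiplier."""
    day, interval = 0.0, float(first_interval_days)
    schedule = []
    for _ in range(n_reviews):
        day += interval
        schedule.append(round(day))
        interval *= multiplier
    return schedule

# With these illustrative defaults, reviews land on days 1, 3, 7, 15, 31, 63.
```

The point of the expanding gaps is to place each review roughly where the forgetting curve predicts the memory is about to decay, which is what lets a fixed time budget stretch retention the longest.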
So the third area that we looked at was the Ebbinghaus Curve. Basically, we took the original work, and we saw who had referenced it, to see if anybody had done anything on long-term memory. And this is about the same time, 1885, when neurons were discovered, and it's about 70 years before the Bloom Taxonomy was developed.
The study had some limitations, but then it's been reproduced later on without any of these limitations by many, many people. And basically what the curve says is that whatever you learn, you're going to forget exponentially. And the parameters depend on what you learn, when you learn it, how you learn it.
But we did a couple of searches on the Web of Science and on Google Scholar to actually find out who referenced Ebbinghaus. It turns out, we don't have the tools. That was a sort of surprise for us. You don't get the same results from Google Scholar as from Web of Science. And we don't really know if we missed some.
So there are some limitations here. Maybe some good piece of work that references Ebbinghaus we didn't catch, because it's in neither of those two searches. Or maybe there is good research that doesn't reference it. And we did go through most of the papers-- that's what Katarina, mostly, and I are still doing. We've divided the searches, and if there's any interesting paper that's referenced within those that reference Ebbinghaus, we'll go and get it. And we have a spreadsheet with all the papers and all the stuff.
This gives you an idea of the Web of Science search. This is from, I think, 1907-- the first paper that referenced Ebbinghaus-- to 2016, when we did the search. And these are the contributions that reference Ebbinghaus. This is the impact. There are about 5,600 and counting.
So it seems to be an area of increasing research, of increasing impact. It's not like it's something that's been forgotten. But in all of this, what evidence can we find of long-term retention of academic content in any form? Really, you could extend it to any sort of knowledge, any type of knowledge-- the evidence of long-term retention is this.
I never thought that one of my best slides at MIT would be an empty slide. But there is really nothing. And that's, I think, an interesting insight in itself. There is no evidence that anything is remembered beyond two years.
And the key facts emerging from reviewing all these papers: not much on STEM-- very little-- no clear definition in methods, and mostly focused on one thing, memory. And obviously, there isn't just retention. There's mastery. There's versatility.
There's a bunch of things that could be looked at that are not looked at. Again, most of the research is short-term. And nobody really looks at the Bloom Taxonomy as to whether you're higher or lower on the expertise. It's mostly about whether you remember a fact, at most, a little question.
There's some non-conclusive, isolated research on what people call permastore. But none of that research-- not even the author, I think-- believes that anymore. Bahrick-- he's been around. He got his PhD over 60 years ago, and he's still doing research on the subject.
There are some obvious examples where we don't really know what happens. For example, everybody comes up with the bicycle and says, well, once you know how to ride a bicycle, you always know how to ride a bicycle. But there isn't good research as to whether you forget something about it or not. That hasn't been well researched.
And for the forgetting curve, there is universal evidence-- hundreds of experiments: in autobiographical memory, in animals, neurological, even team learning. Teams forget following the same curve. So it really seems that this curve is like an invariant force of nature. Brains are just designed to forget.
And we could make up this rule: it doesn't really matter what the memory is, it's going to follow this curve. And there are hundreds of experiments-- I'd say all experiments on forgetting follow the curve. It's a bit like the Central Limit Theorem. Sanjay was saying it's as if our brain were DRAM. If you don't pump it, if you don't use it, you forget it.
And actually, you can even put a curve to it. And in the case of academic content, this parameter, mu, is fixed so that every two years, what you retain gets cut in half.
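Read that way, the rule can be sketched in a few lines. This is a purely illustrative exponential-decay model-- the names and the choice of mu as ln(2) over the half-life are our own framing of "it gets cut in half every two years," not a fitted curve from the research.

```python
import math

HALF_LIFE_YEARS = 2.0               # "every two years, it gets cut in half"
MU = math.log(2) / HALF_LIFE_YEARS  # decay-rate parameter for that half-life

def retention(t_years, mu=MU):
    """Fraction of material retained after t_years, assuming a
    simple exponential forgetting curve: R(t) = exp(-mu * t)."""
    return math.exp(-mu * t_years)

# retention(2) is about 0.5, retention(4) about 0.25:
# each two-year interval halves whatever was left.
```

The real Ebbinghaus-style curves vary in their parameters with what is learned, how, and when; only the exponential shape is the claimed invariant.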
So let me just go quickly through some experiments so you see what it is. These are animals-- rats, pigeons, monkeys. These are undergraduate students. But look at the times-- 28 days, two minutes, a lot of it is seconds-- undergraduates up to 196 seconds. So there's a lot of research on short-term memory, working memory. It doesn't really matter. It's always the same curve.
These are just more examples of the same. This is the permastore paper, Bahrick. And this is the data. The data is on Spanish. And Spanish in this country-- it's very difficult to measure. But regardless of these results, here's what someone else found.
Someone else found that over that period-- this period of almost 49 years-- there was great grade inflation. I was at Harvard today, and the average grade at Harvard is an A, A-minus. So this Professor Kingsman took that data and said, well, if you take grade inflation into account, then it follows the curve.
So this is another experiment on permastore. But again, there are weaknesses, because they didn't take into account excellence in using the material. And there's a difference between A students and D students. So yeah, the curves may be different if you start high or if you start low. Also, if you get cues, your memory responds better.
This is an experiment on medical content, but it also has lots of limitations. And it has not been reproduced. Autobiographical memory-- that's the same. They looked at the number of memories over the years. And basically, after 18 years, of 100 memories, you only retain [INAUDIBLE].
This is a paper where John Gabrieli was involved. And this is about the September 11 attacks. There, they are claiming, well, maybe there, there is a permastore. But as we were discussing with Bror earlier today, the data there doesn't take into account the fact that you probably think a lot about the September 11 attacks. Probably every time you get on a plane, you look at the cockpit door to see whether the captain has closed it or not. So it's hard to know whether you use it or not-- you probably use it.
And this is another experiment, now closer to step [INAUDIBLE]. Of all the Step Studies that were reviewed, only three looked at more than two years. And only one looked at more than four years-- the one the authors did that I showed you earlier, which hasn't been reproduced and has a lot of weaknesses. So really, there [INAUDIBLE].
Now to finish. And I don't know, David, if you want to talk about this slide. But to me, what I think the important thing-- this is data on MIT students. Are you going to talk about it or should I?
DAVID: Go ahead. Go ahead. Maybe I'll talk [INAUDIBLE]. So that's a little-- no, because it's hard to explain.
So we tested students as seniors, actually, the week before graduation. We paid them to retake a freshman year course. They didn't know until they walked in that it was physics. And we basically gave them a test that was equivalent to the one they'd taken. And in one case, it was exactly the same.
And if you show that curve, we isolated the students into groups. And the important group for what Brian's talking about here is group [INAUDIBLE], which was people who had nothing to do with physics after they left.
And what we basically found was-- well, this is a complicated plot. But the bottom line is they lost half of what they'd scored on the final; four years later, the score was approximately one half. So the change on the right is proportional to how well they did-- how much they had to forget. But the curve indicated a half-life of three and a half years, which was a little slower than the others.
BRIAN SUBIRANA: Yes. Yeah. Exactly. So that's it then. And what's interesting is that nobody really did better, nobody at MIT, even the physics majors. And we have a department--
DAVID: So the physics majors-- if the group three--
BRIAN SUBIRANA: Yeah, these are the triangles, these guys. And only these guys probably had a really bad day the day of the exam because he's pretty low. He's below here.
DAVID: All the triangles are above zero, except for a couple, which means that they did better on the [INAUDIBLE]
BRIAN SUBIRANA: Yes. Yeah, there's one, two, three guys, four guys. Maybe five, yeah. So the physics [INAUDIBLE] OK on average.
So OK, good. Now, think about the poor guys who have to do medicine and spend four years-- four years of undergraduate-- to get accepted. If you really forget everything, maybe that's one easy thing we could change and still have an MD.
So, Bror, you can talk about it. With this, I'd like to hand it over to you, Bror.
BROR SAXBERG: Great.
BRIAN SUBIRANA: I've basically covered the forgetting, and now we're going to cover this piece.
BROR SAXBERG: OK, yes. So I am Bror Saxberg. I'm the Chief Learning Officer of Kaplan, as you heard earlier. And Kaplan is a quite large education company, with a million students a year spread around the globe-- all kinds of different students, ranging from medical school students, some of the smartest students around and intensely focused on studying, whom we do a lot of test prep with, all the way to students coming in to get an associate's degree in medical assisting.
They just need a better job. They're 30 years old. They're a single mom. They have a dead-end job. But they just want to get into health care, and they're not sure how to do it. They've had a rocky background.
And everything in between-- different kinds of goals and backgrounds. We do corporate training. We do English language learning. So we have an amazing collection of different kinds of learners and different types of topics.
And so what I do at Kaplan is apply learning science at scale to [INAUDIBLE] all of our different learning to speak the same language, and try to build and improve learning environments using the same ideas that come out of learning science. And those involve running off of research about memory, but also a lot more than that. And that's what I'm going to try to take you through.
So there's actually a lot known about how expertise works, and we should be putting that to work. So a basic model that's been replicated around the globe, in many different studies, is to think about audio and visual inputs coming into working memory-- one of the two main memory areas-- which has this forgetting curve. It forgets very fast. It's noisy. It's very narrow. You can't do very much at once inside working memory.
But it's also very flexible. It's the place that handles the toughest things that we do. It is, unfortunately, error prone as well. So it's a little bit nasty in some ways if you're relying on working memory. If we only had working memory, we'd be in trouble.
But working memory is supported by something with a much longer half-life: long-term memory. And long-term memory has half-lives out to hours or years and beyond, as we were hearing, depending on what you look at. It also can [INAUDIBLE] things at once-- many, many different activities and decisions at once. It stores complex procedures and decisions, not just facts.
So just for fun, how many people here drive a car? See? Yeah, we're not asleep yet. That's well done. OK.
So how many of you have had the following experience? You set out for place A, and you start thinking about MIT and life. And you look up, and you've arrived at place B instead. Raise your hand if that's ever happened to you.
Yeah, look around. See? You're not crazy. Not for that reason. Yeah, there you go. And you just drive out to the new place. You don't worry about it.
But what we don't do is think enough about, who drove you to the wrong place? Who was actually in charge of a ton of metal for 20, 30 minutes at 20, 50 miles an hour? Because it wasn't you. You probably don't even remember how you got there.
So this is long-term memory in action. It's not just recall of facts. Driving is a complex thing. And it's not like digesting a bagel-- you didn't evolve to run your car to work. It's actually a learned thing, learned so well that it can be completely run in this long-term memory storage.
It's very cool, and cognitive psychologists have found lots of expertise looks like that, where complex procedures-- not just patterns, not just facts-- run independently inside long-term memory. And not everything can run inside long-term memory, as we'll talk about.
So if you think about this as a model that's been replicated and used by cognitive psychologists and taken apart into smaller pieces of all kinds, it leads to some real thinking about how should you, then, design learning environments because of this. So the way we're trying to do this at Kaplan is to think about this with four different dimensions. The first one is really spending some time understanding expertise. And that's because of this long-term memory thing.
One of the characteristics of the long-term memory is it's nonverbal, which means experts often can't tell you how their minds work anymore. It's just obvious. It's like that example that Brian said about riding a bicycle.
If you have not thought about riding a bicycle for 20 years and then you've got to teach a child to ride a bicycle, the words do not come. It is not clear what you do. You just do it. And so to actually bring the words back up, you may have to narrate your own behavior.
So we'll talk a little about expertise. Then, there is the use of evidence-based instructional design so that once you know your outcomes, you want to try to increase the odds you're going to get there. And that's where you can use the cognitive psychology work, learning science work to help there. And I'll talk a little bit about that.
You also do want to have valid and reliable measures of learning, which is something that is often not done in learning at scale environments. That's a problem. We won't talk much about that.
And finally, when you have this in place and you're running using education technology, you can actually [INAUDIBLE] potentially very quickly. And I'll show you how that can be done. We've done some of that at one of our units.
But let's start in with the actual cognitive task analysis and expertise. So here's the fundamental problem. If you bring in a top expert-- somebody who performs a job, a task, at a really high level-- and you call that 100%, and then you ask them, so what do you do when you do it?
They will tell you less than 30% of how their minds work. Less than 30%. 70% or more is tacit, nonconscious. It's happening behind their working memory, which is where their verbal ability [INAUDIBLE] arresting.
So to get more of it visible and extant, you have to do some quite special interviewing, much deeper interviewing work with those folks, and that's where cognitive task analysis comes in. You can get something like 80%. That still leaves 20% as magic, but you can [INAUDIBLE] further than 30%.
And it involves finding those experts, using data, interviewing them in detail individually, and then comparing the interviews to each other to find a combined gold standard for what an expert mind does, and bringing it back to them to get them to review it individually again. And different ones of them pick up different pieces of that hidden expertise. And that's how you can get it.
Sometimes people will have a personal mythology, so the other experts won't recognize what the guy said. And they'll say, I don't do that. So that, you take out.
But often, you'll find one person says something, and the other experts say, oh, yeah, yeah. That's right. That is what I do. And then you can check it off, and it goes into the gold standard.
So what's cool about doing this kind of work is it actually improves instruction, and it identifies things that are not in our conventional curriculum. So we have a paralegal program within Kaplan University. When we did a cognitive task analysis of that program talking to very high-performing paralegals in the field, we found their work split into these large categories on the left from intake interviews down to appellate findings, and then some technology tool things down on the left.
And when you compare that to what the standard textbook-- the one that has controlled ours and many other programs for many years-- teaches, there's a lot that's not even taught. It wasn't even on the radar screen of the teaching programs at all, which explains the career path of many paralegals. As they leave their program, their first job is a disaster. Many leave the profession altogether because they assume it's because they're idiots.
In fact, some stick with it, then pick up what they're missing, go to another firm, pick up even more. And by five or 10 years later, they have mastered all the things on the left hand side, have forgotten them in terms of verbal awareness, and so are ready to become faculty members themselves. And the cycle has repeated for generations. But you can make it visible, even though that cycle of moving things into long-term memory and making them invisible continues.
There have been a number of experiments. And just for time reasons, I won't try to go through them. But there's a range of different experiments that have been done in professions. Surgeons, for example, have done some experiments with cognitive task analysis demonstrating real reductions in fatalities and major injuries by using cognitive task analysis to understand what the best surgeons do for some dangerous procedures, and actually accelerating the training time. Because they're no longer training people on things that are irrelevant to expert performance in the modern surgical suite.
Now, you'll notice the one on the lower right. It works better, but it's really intense effort to actually do it.
We did a cognitive task analysis inside Kaplan of a group of professionals who were very important. They placed students from our brick and mortar colleges into jobs. And for certain associate's programs that's a requirement. You must get a job in order for the program to end up continuing to be certified.
So on the left, you can see the dark bar is the control group, if you will, of these job placement folks. The right bar is the treatment group. And what we did was a cognitive task analysis-- we identified, out of the 400 or so of these folks, who the top performers were, and turned that into an evidence-based instructional program. [INAUDIBLE] we released it in halves. So one half is the treatment, one half is the control.
And after we released it, you saw a substantial jump in the performance of the folks in the treatment group, who actually got the new training. And it lasted for months. This was a randomized phased rollout, where we took a random half, rolled the training out, and then five or six months later rolled it out to the rest of them.
And you can see the sustained performance increase, which is more than one, maybe even two placements extra per month. It's really substantial and important in that field. So we had, actually, a very [INAUDIBLE] and at-scale example of how cognitive task analysis can make a difference. And this is a technique that is expensive to do and is not yet being done in scale, but it's an example of what you can do by applying cognitive science research to scale.
So let's talk now about evidence-based instructional design. Well, one of the things that looking at [INAUDIBLE] has shown is that when you learn things, it really goes through stages. You can see this in the memory work, but you can also see it in all kinds of other tasks that you're trying to master.
You begin where you're sort of stumbling along. You're trying to keep things in mind, and it's really hard for you. Because everything is in working memory, which is narrow, and stuff falls out and all that. So how do you help with that stage? How do you get through that stage?
Well, you provide lots of visual job aids, so that your eyes can act to help replace your working memory and your long-term memory, because nothing is in long-term memory yet. Once you get far enough along, it begins to become more familiar. You have more things that are now in long-term memory. And now what can help you [INAUDIBLE] your process further is different kinds of practice tasks-- different conditions, different circumstances. And it's also here where feedback and coaching can help you, because now you have enough working memory space to actually listen to the feedback.
In the first declarative stage, you're just totally overwhelmed. And so you [INAUDIBLE] process feedback. Here, you start to be able to handle feedback. Now, not everything can go to the next stage of being automated.
But things that can become automated, the way you get there is large amounts of practice and feedback. So an example of a thing that's not going to be able to be automated is writing a persuasive essay. You can be quite good at that, but you can't write a persuasive essay while planning your summer vacation at the same time. You can drive to work while planning your summer vacation at the same time.
And the reason is writing a persuasive essay always has very difficult problem-solving challenges around the audience and the language and the tone and the argument. Working memory is always dragged into use, but it is supported by lots of things in long-term memory as well. So the practice and feedback makes the collaboration work really well. So my point is, not everything ends up being automated.
Now, one more piece out of the learning science work comes from Carnegie Mellon and Ken Koedinger over there. They looked at the Bloom Taxonomy and felt that it was not matching the learning science of the last 40 years or so. So they came up with a new taxonomy for learning outcomes, a new way of thinking about this that reflects the learning sciences of the last few decades.
You start at the top level with complex cognitive procedures-- the chunks of decisions that your experts need to be able to make and carry out. Those, you've got to practice and use. But those are always supported by supportive knowledge-- things like facts, which are very much the Ebbinghaus-style little nuggets that you have to maintain.
But there's a lot more than that in here. There are also concepts. And concepts are basically tools an expert uses to classify situations. In physics, is it a momentum problem or an energy problem? Well, that's a classification that is very useful for deciding what frameworks to apply.
And experts have many of these concepts. It's not just they know the definitions. They know how to apply them quickly to tell, what is the situation we're in?
There are also processes-- that could be how an engine works, or the Krebs cycle, all those things. And again, experts don't just know them. What they're really fast at is: if you change an input to the process somewhere, what happens to the output?
And the other skill is diagnostic. If you have an error in your output, where could that have happened in the process? Experts get very good at those kinds of things. And that's the kind of practice you need to do to really master processes.
And finally, there are principles. When all else fails and things are foggy, how do you navigate your way out of a paper bag? The experts have principles, but again, it's the practice of applying them in foggy circumstances that really gives them the hunches they have. But all those things are supportive. You still have to put them together and actually practice the overarching cognitive procedure.
There's more cognitive science applicable here. This chart comes from a great piece of work by Richard Mayer and Ruth Clark called E-Learning and the Science of Instruction, Fourth Edition. They keep summarizing research in cognitive science and how it applies to things like how media and text and audio help each other for learning-- and how they get in the way of each other. And so there are some simple findings, in some ways.
Like if you get rid of irrelevant audio, you can get almost a full standard deviation of impact on learning in laboratory studies. And there are dozens of laboratory studies behind this chart. So there's a lot of detailed data guiding us on what we can do.
And just as a quick reminder of the bell curve: take the 50th-percentile person, right in the middle of that bell curve. You move them up by a standard deviation-- and a lot of those interventions produced standard-deviation moves-- and you'd get them up to the 84th percentile. So this is not trivial. They are laboratory results, though. Your mileage will vary in the wild, as it always does. But I'd rather start from a standard deviation of impact than from no data whatsoever.
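The percentile arithmetic here is easy to check: a one-standard-deviation shift from the median on a normal curve is just the cumulative distribution function at z = 1, which Python's standard statistics module can compute directly.

```python
from statistics import NormalDist

# A 50th-percentile student sits at z = 0 on a standard normal curve.
# Moving them up by one standard deviation puts them at z = 1; the
# CDF at z = 1 gives their new percentile rank.
new_percentile = NormalDist().cdf(1.0)
print(round(new_percentile * 100))  # prints 84
```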
Now, the last theoretical piece here is we actually know a lot about motivation as well. So in order to get people to practice, to get great results, to become fluent, to do well at work, they have to start, persist, and put in mental effort. That's motivation-- starting, persisting, and putting in mental effort.
We had Richard Clark, who is a pretty good cognitive scientist from Southern California, do a scan of lots of literature. And what he came up with, from behavioral economics and cognitive psychology and social psychology and so forth, was a four-part model of the things that get in the way. It's actually very helpful, and we've been using it inside Kaplan.
So one thing that gets in the way is if you don't value what [INAUDIBLE]. You're a dancer in a math class. That's going to stop you from focusing. So you've got to try to link to: what is a dancer interested in that has to do with math?
The second thing that goes wrong is you simply don't believe you can do it. I am another dancer in that math class, and I am no good at math. So it's no good telling me how important it is. You're just making my life miserable. I need a different help. I need to get help about how, no, you can do math. It's just a matter of what's missing and how do we get you forward, some storytelling from other places, and so forth.
The third thing is a little bit like the second one. It is, I blame something in my environment. My teacher hates me. I can't understand my TA. This textbook sucks. Or the classic: I don't have enough time. So I don't start. I don't persist. I don't put in mental effort. That's different yet again.
And the final one is the hardest-- negative emotional states. If you're angry, if you're frustrated, if you're scared, you're not going to start, persist, and put in mental effort. So each of those has its own kind of diagnostic [INAUDIBLE] treatment possibilities. And you do have to disentangle them.
So a lot of this-- some of you might be thinking, well, a lot of this is intuitive. Can't we just use our intuition? Well, let me show you, with a little bit of dirty laundry from Kaplan, how that doesn't work.
So we're very good at the LSAT, the law school admissions test-- the complex reasoning part of it. Anyone here ever taken the LSATs? Just curious. A few of you. Yeah. These are nasty. They're bad. They're hard. We know it, because all the examples people train on with us are really hard.
So we've become very good at it, and when the multimedia world came up, we, then, created a beautiful video of the Kaplan [INAUDIBLE] way for handling LSAT reasoning problems along with a workbook. It was a thing of beauty. Our work here was done.
Well, one of our recently trained learning engineers said, wait a minute. First, I don't think long-form video is going to help very much. And secondly, this sounds like the kind of complex reasoning problem that overwhelms [INAUDIBLE], which is exactly what John Sweller, a cognitive psychologist in Australia, has been doing research on for years-- showing that a very simple technique, having students study previously worked examples, can really help.
So we did a four-part randomized controlled trial with roughly 1,000 students-- no, 400 students. On the right is the glorious Kaplan way for learning approach-- oh, no, sorry. On the right is nothing. You just hit the problems on your own with no training at all.
Just next to it is the glorious Kaplan way for learning video plus workbook. And my marketing colleagues tell me "Kaplan-- worse than nothing" is not a rallying cry for the business. Fortunately, when you apply the statistics, the two right-hand bars are actually equal. So we've moved up to "Kaplan-- as good as nothing." My colleagues tell me that still doesn't sing.
The two on the left [INAUDIBLE] worked examples. We built 15, but we gave one group eight and one group 15. We didn't know how many we needed. Those two bars are basically the same, and those are statistically much better than the two on the right. So real progress by doing worked examples.
But more than this, look at the time. The video and workbook took more than an hour and a half. Eight worked examples took eight minutes.
And then finally, production costs: eight PowerPoint slides versus a professionally produced hour-long video and a workbook. The Kaplan test prep product development team was just flummoxed by this. Because, wait-- it costs less, it works better, and it takes less time.
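The kind of between-arm comparison described here needs no special software; a minimal sketch is a two-sample permutation test on per-student scores. The numbers below are purely illustrative, not Kaplan's actual trial data, and `permutation_test` is a hypothetical helper, not a named piece of their tooling.

```python
import random
import statistics

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference in mean scores.

    Returns the fraction of random relabelings of the pooled data
    whose absolute mean difference is at least the observed one --
    an approximate p-value with no distributional assumptions.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    n_a = len(a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) -
                   statistics.mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Illustrative per-student scores (1 = item solved), NOT real data:
control = [0] * 28 + [1] * 22   # "nothing" arm, 44% solved
worked = [0] * 14 + [1] * 36    # worked-examples arm, 72% solved
p_value = permutation_test(control, worked)
# A small p_value suggests the arms genuinely differ; a large one
# is the "as good as nothing" result from the two right-hand bars.
```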
What are we supposed to do? Because everyone knows video is so cool. Right? [INAUDIBLE] They backed up and said, whoa, whoa, whoa. We can't run off our gut. Our intuition was dead wrong here. So we have to actually put learning science right at the front end whenever we get started on a project, and that's what we've been doing ever since.
So how do you do that? What's the systematic way to start doing that? So one thing we did was to create a checklist of different characteristics of a learning environment that's been built using good learning science. And it has-- it's fairly simple, but it has a structure to it.
The objectives-- are they stated? Are they performance objectives? Are they actually aligned with each other, and so on? So we have some components here. And it's not just about the lesson design. It's also across the lesson-- things like motivation, organization, integration.
We applied it to nine different major products across Kaplan. And the dark is supposed to be excellent. The white is not really doing so well. And look at that. It's an ugly picture.
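A grid like that-- products down one axis, checklist criteria across the other, scored dark to white-- can be represented minimally as a nested mapping. The criterion names and scores below are hypothetical, just to show the shape; the "call to action" is then simply the criteria that score poorly for any product.

```python
def weakest_criteria(scores, threshold=1):
    """Given {product: {criterion: score}} with scores 0-2
    (0 = missing/"white", 2 = excellent/"dark"), return the
    criteria that fall below the threshold for any product."""
    weak = {criterion
            for row in scores.values()
            for criterion, score in row.items()
            if score < threshold}
    return sorted(weak)

# Hypothetical review of two products on three criteria.
review = {
    "Course A": {"objectives stated": 0, "alignment": 2, "motivation": 1},
    "Course B": {"objectives stated": 2, "alignment": 1, "motivation": 0},
}
print(weakest_criteria(review))  # prints ['motivation', 'objectives stated']
```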
Now, in fact, as an engineering person, as a learning engineer trying to be practical, not every block in here should be dark. Personalization can be really expensive. So it may not be worth making many different environments for personalization.
However, there's no reason not to have the objectives be really good. And we had several major products where the objectives were not stated well at all. So this was a bit of a call to action. We said, we have to get busy and be more systematic with our teams.
So we began doing these product reviews across hundreds of our products. They began to create comments and structures for reviewing them. They would talk to each other about what was actually happening.
We captured examples of what was good practice. We didn't capture examples of bad practice. Because that just seemed rude. But we captured examples of good practice and used those. And we began doing more systematic work to apply learning science at scale.
As a pilot, we took some existing online courses that are very conventional. Their outcomes and content were not terribly well-aligned, because they were just generated by the faculty member-- limited demonstrations, and so forth. And we did a much better job. We broke the material up differently. We added more specificity. We actually put in some of the motivation tools.
And when we ran a randomized trial here across six courses that we changed this way-- so about 900 students-- we found a really significant improvement in their success. Success defined as: they passed the course, they mastered the key outcomes of the course, and they came back for the next course. Because that's really important in the learning environment this was about. They had to come back.
And students were one and a half times more successful in the new courses, which led us to realize we really can redesign courses and get benefit. But also, at scale, we suddenly realized we could keep doing pilots like this. If you have just one high-volume course with 1,000 students in it, you can suddenly think of it as five sets of 200 students each.
And each of those could be a randomized controlled trial suddenly. And because they're distributed as a virtual university, they start every month. So you have a cohort starting in January of 1,000. You have a cohort starting in February of 1,000. You have a cohort starting in March of 1,000 and so forth.
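Splitting one large monthly cohort into trial arms is simple to sketch. `assign_arms` is a hypothetical helper, not Kaplan's actual system; the fixed seed is just one reasonable design choice so the assignment is reproducible for later auditing.

```python
import random

def assign_arms(student_ids, n_arms=5, seed=2024):
    """Randomly split one cohort into equal-sized trial arms.

    Returns {arm index: [student ids]}. A fixed seed makes the
    random assignment reproducible.
    """
    rng = random.Random(seed)
    ids = list(student_ids)
    rng.shuffle(ids)
    # Deal the shuffled ids round-robin into n_arms groups.
    return {arm: ids[arm::n_arms] for arm in range(n_arms)}

# A 1,000-student monthly cohort becomes five arms of 200 each,
# and the next month's cohort can test the next batch of ideas.
arms = assign_arms(range(1000))
```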
So all of a sudden, one large course can be an engine of innovation, an engine of exploration for this. And we are starting to explore all kinds of different themes and approaches. So now, when we look at having done 130 of these randomized controlled trials in the last two years in this one environment, the Kaplan University environment, you see what looks like a typical innovation portfolio.
We have several different major themes here, including improved persistence, improved learning, lowering costs, and we also did some work on just measurements. And you can see there's a bunch of these studies that are in that intermediate gray state, which is-- it says, inconclusive and no firm recommendation, which is basically a long way of saying, "huh?" We're trying to figure out what's going on. We don't know yet.
But we do have some where we figured out what was going on, and we didn't like it. So we stopped. It's not working; we stopped.
And we had some where we got to a conclusion and we liked it. So we rolled it out. And there are some that are still in process.
So this is how innovation should look. I mean, that's how it often works. Most of your things are kind of uncertain. And then you have a few wins, a few losses, and you keep going, which is what we're trying to do.
So as you think about how you can apply learning science-- what's needed to move [INAUDIBLE] the research results, like the memory results, whether for short-term or for long-term memory, and get long-term benefits-- you really have to start thinking more systematically about what you can do with scale, and also about what processes you have to put in place given that you have scale. So those are our comments.
BRIAN SUBIRANA: Excellent.