Using Lookit to run developmental studies online
Date Posted:
September 4, 2020
Date Recorded:
September 3, 2020
CBMM Speaker(s):
Maddie Pelz
Description:
Lookit is an online platform for designing and running asynchronous developmental studies. This technology allows for more diverse and representative populations to participate in developmental studies than would typically be able to engage in the research process (e.g. participation at a children's museum requires a ticket purchase, and coming to the lab space to participate during the workweek often limits single-parent families or those where both parents are working). Along with this benefit, the limitations of COVID-19 have also put more pressure on labs to move their research online. During this tutorial we will discuss why researchers might be interested in Lookit and what types of capabilities it has, as well as go through a demo on how studies are structured/how to get started with designing a study to fit your needs. Those who are not working in development but are collecting other behavioral research online can also learn more about how to make online studies engaging and intuitive in order to get the best quality data possible. The demo will be done through a shared screen, but you can also make a researcher account at https://lookit.mit.edu/ if you'd like to work on your own computer or continue to develop a study after the tutorial.
JANELLE: We have speaker Maddie Pelz. Maddie is a PhD student in Laura Schulz's lab at BCS, and she will be talking about Lookit, which is an online platform for designing and running child development studies. Take it away, Maddie.
MADDIE PELZ: Great. Thanks, Janelle. And hi, everyone. Thanks for coming. Hopefully you've had a chance to follow the instructions that are on the screen now. So I put a quick link in the chat, but essentially you need to make an experimenter account if you want to follow along in terms of editing the studies that we're going to look at today. But I'll also show you on my screen. So if it's hard to balance both, that's fine too.
But once you make the account, you'll need to fill out a demographic survey, which can be quick and also doesn't necessarily need to be true, because we're not actually going to use you as a participant. It's just to kind of test out your studies. And then also, add a child. So you'll see those two options when you look at my account: you can add that demographic survey and the child information.
So hopefully you guys have been able to do that. And if not, you can still kind of do that as we go along. Can I get a quick like-- have people been able to do that successfully?
AUDIENCE: Yep.
MADDIE PELZ: Thumbs up or something? OK, perfect. Let me know if at any point you need me to repeat instructions for that or anything else. Great. So one thing that we need to do, which will make a little bit more sense later, but in order for us to be able to edit the study that comes as an example when you make an experimenter account-- you can see if you're-- under your experimenter account, and you look at manage studies, you'll be able to select example study. So that's one that should have been included when you first made the account.
And then once you're inside there, after you've clicked where that first red circle is, you should be able to see clone study. And so example study is owned by Kim Scott, who developed Lookit. So essentially, in order to make edits to that, you'll have to clone it and make your own copy. So when you click clone, you should come to something that says copy of example study. And you can change that to say Maddie's example study, or Janelle's example study. And now that's your own copy. So that's kind of like a fork from that main study.
And the last thing you'll need to do before I kind of give the background, because sometimes this can take a few minutes, is once you're on the page for the example study-- in the bottom of my screen here-- you can see that you need to click build experiment runner. So yours is probably yellow on your copy study instead of green. So the first thing to do is click build experiment runner because that might take a few minutes. So that can kind of load in the background while I go through the other things.
So I'll just give people a couple minutes to make sure they can do that. I'll just go back one slide quickly so you can see the first step. So again, the example study, you're going to clone it. And then once you're in your copy, just click build experiment runner in that-- it will be yellow, I think, in yours. All right, did that work for people? It should say building or something like that. Awesome.
So now all that's going in the background. Hopefully by the time I get through the background, it'll be kind of ready for us to mess around with. So just for a quick note, the experiment runner is kind of what your study needs for it to go from the JSON code that we're going to write to actually presenting the study and being able to be interactive. So we just kind of have to build that so that we know how to interpret that code into what the participants will see.
So now I'll kind of go back to the beginning. So thanks for coming. And one thing I want to say is that I'm here as kind of a user of Lookit, not necessarily a member of the core team of Lookit. So I've been using it for a few months. And I think it's an amazing tool, and I'm excited to share it with you all. But there are definitely going to be some things that I'm not quite sure about or that I'll have to get back to you about.
So feel free to ask any questions. And I see Rico's on the call, who is one of the members of the team. He's a programmer on the team, so he can help maybe with some-- there, he's waving-- if you have any specific technical questions as we go along. But I just wanted to talk about the core team for a second.
So Kim single-handedly started this as a part of her thesis work. And now Rico and Mark have joined as well. So they're kind of the core team at MIT who is working on this now. Great, so just to start it off, so why put studies online? In addition to the obvious push towards online testing that we've needed due to COVID, Lookit was developed to address multiple barriers to large-scale developmental research.
So one of the biggest challenges in getting good developmental research going is the amount of time and energy we spend just getting participants to the lab to do a study. So as a field, that means we often end up trying to manage with very few participants. And online, we can at least in principle run much larger sample sizes. So this is really important because it's not just about more data for the sake of having more data. It also means we can adequately power studies so that other researchers can replicate and build on our work, and we can look for graded effects instead of just relying on binary comparisons.
It's also much easier to run longitudinal studies in this way. It's much more straightforward to bring families back to a website monthly, for example, rather than bringing them back to the lab. So people who have worked in developmental labs where you're calling families every day and reminding them to come in and working around their schedules will know that it's much easier to kind of send them a reminder to log on when it works for them.
In addition to that, you're not limited geographically, so we can recruit children with specific circumstances or diagnoses and test their behavior at home. It also increases our access to a much more representative range of families than those that we typically test in the lab or, for example, at children's museums that have a high cost of entry. And this platform also allows for an expanded number of people who are able to do the research themselves, because as long as you have ethics approval, you don't necessarily need to be affiliated with an existing developmental lab or have access to populations of children in order to run studies. And you also can be located anywhere in the world and be able to run them asynchronously.
So here are some other examples of how we really do get a much more typical and representative sample using online testing. So for instance, over half of our families have a family income under $50,000. It's racially diverse and we also have a really wide range of languages that are represented for families that are using Lookit.
And another facet of diversity that Lookit helps us improve on is parent education. So in one of our recent lab studies, we took a look and almost 60% of parents had a graduate degree, which I think everyone will agree is a pretty unusual population. So if you look at the US Census and estimate what that should have looked like given the age distribution of parents that were participating online, that census data is much closer to what we see with a sample from Lookit versus the in-lab testing. So that's just another aspect that we can kind of get a more representative population of kids in our studies.
So here's a couple videos from some families participating in Lookit. And one of the big concerns that people had-- I'll wait for the chimes to stop. [LAUGHS] One of the concerns that people had when Lookit was started was that the home is noisier than the lab. So we work really hard in the lab to make sure that during eye tracking studies kids are in a dark room with no distractions looking only at a screen. And so that might be a concern when you're thinking about moving that kind of study online.
But to be fair, we do see more distractions, but we also see natural behavior at home, so there's benefits as well. And there are also ways that we suggest researchers adapt to this, but encouragingly, we don't really see big differences in looking times across studies that are done in-lab versus online. We see kids paying attention and responding even though they're not necessarily in that pristine lab environment. And we see parents cooperating with instructions, even with minimal instructions and having them read them or hear them in audio from us.
So just a quick note about how families use Lookit. They register online. They can select and participate right in the web browser. The videos that we see, both for looking time and also for verbal responses and things like that, come from the webcam video recording from their own computers. And they're not required to schedule an appointment or download any software. They can just log in whenever they're free and participate in the study. So that offers a lot of great flexibility.
And finally, the parents are with the children during all of the studies, and they give consent verbally. So we'll see that when we go through an example. But children are often sitting on their parent's lap, or if it's a looking time study, the parents might be looking away from the screen and holding their child over their shoulder.
So in terms of how researchers use Lookit, which is what we're going to focus on today, researchers can define and control their studies from an experimenter interface. And you do need IRB approval from your institution as well as a short agreement with MIT if you're coming from an institution outside of MIT, but it's pretty straightforward to get approved. And then you can get to work on designing your study.
So all of the code from Lookit is open source and publicly available, which is something Kim and her team have been committed to from the beginning. You can find it all-- you can find all of it at github.com/lookit. And I'll share a bunch of links as we go, but that's one important one if you're interested in getting started.
So in addition to the code, the plans for project development itself are also really open if you're curious about that. All the code and the planning around new features is on GitHub. So if you go to issues, you can see bugs that people have encountered, new features that are planned, and so on. And you can also look at milestones to see kind of how the team is planning to address different updates and different bug fixes.
So I can talk a bit more about this later, but commenting on issues and reporting bugs is one really important way you can contribute to Lookit as a user. So there is that core team, but there's also, because it's open source, a big community contribution aspect to using Lookit. So that's one way you can definitely help out.
So also on GitHub under research resources there is a Wiki with information about getting started as a researcher. And there's also very thorough documentation about each frame type. So as we go into talking about how the studies are constructed, they're built using pieces called frames. So there's great documentation about-- and with examples for each of those frame types, as well as a step-by-step tutorial that you can follow, and a Slack community of existing researchers that you can kind of lean on as you're going through that tutorial and through developing your studies.
So when the platform was first under development, the Lookit team started with a limited number of collaborations from folks across various institutions. So you can see here that even in the earliest stages, Lookit had a really wide range of domains, of tasks, and age ranges, and that's now expanded even more widely since the launch earlier this summer. So this was kind of the beta testers, but now Lookit is open and available for people to create and submit studies.
So you can see here there's things like storybook paradigms for older children. There's things where babies are interacting with their parents and that's captured on a webcam video, as well as kind of preferential looking tasks and things like that for infant studies. So the vision for Lookit is to create and maintain infrastructure for the field at large. It's being developed within an academic lab, but the goal is to help other researchers creatively address their own questions.
It's currently run by a small group at MIT, but as of the launch earlier this summer like I mentioned, it's available to any researcher who wants to use it. So people across labs can benefit from using and collaborating on the platform. And together, they can make Lookit a place with constantly refreshing interesting content and share in the benefits for recruitment and engagement.
So once a parent and family are registered on Lookit, they then can see all of the different studies posted by all the different labs. And so we all can kind of share the benefit of having a recruitment pool that's interested and engaged, and as your child ages, they can participate in all those different studies. And like I mentioned, the Lookit team is committed to open-source development, to encouraging responsible research practices, recruiting a representative participant pool, respecting contributions of families to make this work possible, enabling non-traditional developmental researchers, and to supporting work that benefits children directly.
So one quick note about open source development and open science. So the most obvious connections to open science are these first two: of course, you can run your study online. There's no secret about your lab setup or something you forget to mention in your methods that has helped a lot of babies succeed in your task and things like that. You can just share the entirety of your protocol in order to support the replication and extension of your work. So you can share the entire Lookit script and people can make modifications or add onto it, and then be able to replicate or extend your work in a pretty straightforward way.
But in designing a platform for researchers, we also have the opportunity to support open science practices more generally. So we do that both by implementing tools and setting defaults. So for example, all Lookit studies ask at the end for parents to select a privacy level for the videos that they've recorded from their webcams and for whether it's OK to share that with Databrary.
So many parents are enthusiastic about sharing their videos, and that's why we have those cute examples you saw of those natural home scenes and things like that. And of course, although this is somewhat separate, we're also doing open engineering in the sense that you can see everything that's been done and everything that the team is planning on that GitHub like I'd mentioned.
So from here, the plan was to go and work a little bit on exploring the example study. But do people have questions so far? And also I'd love to hear if you're thinking about working on a particular study or a particular area that you'd be interested in hearing about Lookit, or whether it would be useful for your lab.
JANELLE: So how big is the participant pool? Like about how many-- is it hard to recruit participants? Or is it growing?
MADDIE PELZ: That is a great question. It's definitely growing now that it's opened up to other researchers. So now it's not just the beta testers who are able to post. Rico is-- do you have up-to-date numbers on that?
RICO: Yeah. Yeah, we have around 5,377 active users.
MADDIE PELZ: Around that.
RICO: About. And that number is growing at a pretty decent clip. And we just released an announcement email feature too, which is basically like a reminder mechanism for currently registered participants. And it basically looks at what kids are aging into what studies and then sends out a reminder. So the participation is picking up.
MADDIE PELZ: Thanks, Rico.
AUDIENCE: And sorry, I think you said this, but is this-- are the people just from the United States? Or are they from other countries as well? The participants?
MADDIE PELZ: I think it's open to international participants. Is that right, Rico?
RICO: Yeah, it's open to international participants. And we have users registered from pretty much all over the world. At this point, I think though, mostly because of how it's kind of spread by word of mouth, it's probably-- I'd have to look at our dashboard, but it's probably more US tilted than anything.
MADDIE PELZ: Yeah, but it's definitely not limited to that. And it's also not limited to testing from the US as well. So researchers at any institution with ethics approval are able to post a study and recruit participants.
AUDIENCE: Do you have guidelines about how to pay them? Or are they not paid at all, the participants?
MADDIE PELZ: Participants are often paid. Most often I think it's like a $5 Amazon gift card or some sort of gift card like that. That's determined by each lab, so I'd say the large majority of participants are paid. But it is a possibility to post kind of like a volunteer study if you're interested in that. Some people have done that early on in terms of a short piloting study or something, but I think the recommendation is to pay participants for their time. But there's no one standard.
AUDIENCE: Right. And do you have recommendations for things like consent forms and information and things like that somewhere?
MADDIE PELZ: Yeah, so as we go through the example, there are frames that you can kind of use from the example studies. And those frames kind of have built in frameworks for how the consent is structured and things like that. So you can make small changes to that. But a lot of it is kind of consistent across every study.
AUDIENCE: Thanks.
PRESENTER: So we have two quick questions. One is from Rebecca, what is the largest age group in your platform, babies, young kids, or older kids? The second one is from Chris Kelly. Are we able to conduct repeated measures on the same participants in a longitudinal study?
MADDIE PELZ: The answer to the second question is definitely yes, and that's one example of a study that's been running on Lookit, which is a monthly longitudinal study on infants understanding of physics. Rico, do you know about the age ranges?
RICO: Not off the top of my head, but I can probably look really quick. Yeah.
MADDIE PELZ: Yeah, I mean with 5,000 participants, I think there are definitely both infants who have done looking time studies as well as kind of preschool and older age kids that have done storybook tasks. So I think for kind of a reasonable-sized sample, you'd be able to get the participants that you need. But yeah, I'm not sure of the breakdown exactly either. Yeah?
PRESENTER: There's one question from Randall. Do any of the studies focus on children with neurodevelopmental disorders?
MADDIE PELZ: Yes, so one really great thing about Lookit is that when you're recruiting for participants, because we're not limited by a geographic area, you can recruit participants with certain diagnoses or things like that. So that's something that you can include when you specify your study. You can put in kind of the population that you're looking for. So I can point that out where you would enter things like that in as we go through the example. That's a good question.
Great, so I'm going to stop sharing this. And when you're in your experimenter account, so here, you can see that this is-- I've logged in in the corner. And if you click on experimenter, it should bring you to this page that says manage studies. And I have a few going, but hopefully you have something called copy of example study, or whatever you chose to rename it. So if you click on that-- I think what we should do first is just go through that example study and see what it's like for a participant to do that.
And then we'll go through the code and see what's happening inside kind of step-by-step. So one thing to check is hopefully now your experiment runner says experiment runner built and is green. Hopefully we've given enough time for that to be ready to go. But if not, I did it beforehand, so I can show that on my screen. So let's just go through the study, and I can kind of talk through kind of what we're going through as we go.
So the way that you can preview a study as you're working on it is just clicking this button here. And the reason we had to make a child-- I have a fake child who I've named MP here. And that's so that you can preview-- you have to select which child you're using to preview the study. So this is my fake child here.
So as you see, it says your child is older than the recommended age range, but that's fine for previewing. So parents would see this warning if, say, their six-year-old was trying to participate in a looking time study, in which case we wouldn't necessarily use their data. So I'm just going to say preview now.
And the first thing that comes up is kind of a webcam setup. So now you can see me in this little webcam preview. And you can edit these prompts, as you'll see in the frames. But each thing will turn green as it goes.
So one thing they want to make sure is that when their webcam is turned on and off during the study, it doesn't run into any issues with being reloaded. So you just click reload and make sure it works. And then it just wants to make sure it can hear you-- since I'm talking, that's been enough. But if you're by yourself, you can clap or make a sound and get it to register you.
So the person that was talking about consent, this is kind of the built-in consent document. And you're able to edit sections of this, but other sections are un-editable, just to make sure that the things that we definitely need to tell people about their participation online is carried through. But clearly, you can edit. Why do babies love cats? This is just kind of, like, silly fill-in text for this example study. But you go through and fill in your name and your university and information about what parents will see and things like that.
So they would read through this consent document. Here's where you can mention what payment they might receive. And then I think this use of data by Lookit is what's un-editable, for example, because it's run on Lookit, and all the data is processed through that. So there's not really a way to get around that. So I can show you in the code where you would edit this, but that's the kind of general format of the consent that parents go through.
And then because we're not in the lab to have them sign a form, we just use verbal consent. So I'll just show you what that looks like. So parents, once they've read this document, they start the consent recording. And now it's recording this video. So I say, I have read and understand the consent document. I'm this child's parent or legal guardian, and we both agree to participate in this study.
So of course, as a-- oh, sorry. So then you stop recording. So of course as an experimenter, you don't necessarily have to say that to yourself every time you're previewing it. But that's-- those videos will then be uploaded, and you review those as an experimenter before you get access to the child's data.
So you need to go through, make sure you have that consent video for each parent, and then once you click accept for that consent, then you're able to access their data. And if you never got consent, then Lookit will just delete that information and you won't have access. So the next step is just to playback that consent video to make sure that it works.
[VIDEO PLAYBACK]
- And now it's recording this video. So I say, I have read and understand the consent document. I am this child's parent or legal guardian--
[END PLAYBACK]
MADDIE PELZ: OK. So then you submit. So now we're going to go through some instructional frames. So of course, these are all editable. These are just kind of text frames and instructional frames. So this is kind of explaining exactly why you might use this. And we can move on. So you can tell people here's what you're going to expect. Here's what you'll see. You and your child-- this is how you can sit, things like that.
So this page is an optional frame where in this study, we might ask parents to face away from the screen and hold their child this way so that their baby can see the screen, but the parents aren't kind of indicating to the child which way to look. And this is-- in a lab study, if you're familiar with looking time studies, often parents will wear goggles or a visor or something to cover their eyes. So this is just our way of being able to do that at home without any additional equipment. So one thing you can do is offer parents the ability to preview the videos before their child is with them, just so they're not surprised about what their kid is seeing. But right now, we can just skip that preview.
So this is an example study. It's not necessarily sensical, but they're just giving you examples of things you might want to ask. So here's one frame that's kind of a simple survey. So I'll say, like-- just filling that out about the dog. OK, great.
So then here's another instructional frame. So again you can see me. It's making sure that-- it's double checking again that you can see my webcam clearly. And you can give people additional information.
And then if you've ever run anything on MTurk or anything else online, one important thing might be to make sure that participants can hear and see videos in a way that's reasonable. So you can force people to play things. So here, for example, if I didn't play this audio and I tried to move to the next, it would remind me to play audio. So--
[AUDIO PLAYBACK]
- Ready to go.
[END PLAYBACK]
MADDIE PELZ: So that's just testing I can hear everything. And then this is a video that I don't have to play, so I could move on without it-- just kind of an optional test video. But these are just some example things you can do. So you can give them instructions, ask them to listen to instructions, or check their speakers, things like that.
So these are just additional instructions. You can see it's really important to make sure the webcam is set up in the right way. Here are different things to check. So just to make sure that the webcam is in the center, because if what you care about is whether an infant is looking to the left or the right of a screen, it's really important to make sure that you're seeing them from that center angle. So I'm going to say I did that.
And then I'm lying about this one, because I'm connected to an external monitor, but ideally you have-- so that you don't have this problem where it looks like I'm not looking at you-- you have the webcam centered and on the computer screen that you're looking at for the video. And then they're asking me again to create a short recording. And now you can play that back and make sure that-- there I am again. OK.
[VIDEO PLAYBACK]
- All right. Go ahead and turn around so that you, the parent, are facing away from the screen, and your child can still see over your shoulder. Please avoid talking to your child about what might be happening in the videos. This is just so we can be sure the babies are choosing where to look on their own, rather than responding to subconscious cues from their parents.
If you need to turn around or take a break, that's absolutely fine. You can pause the study at any point by pressing the spacebar, and you can leave early by pressing F1. Now that you're settled, we'll start in 3, 2, 1, action.
- Look, this is a box.
MADDIE PELZ: This is an example of a test trial. That was like an attention grabber where the parent would be turned around. And here's an example of a preferential looking trial. So we have two videos playing, one on either side. And the webcam has turned on and is recording what the infant is looking at. So they might be looking to the left and to the right.
And you can set duration for these. You can see this video is repeating. So you can do it based on the length of a video or a certain amount of time.
- Video two. Look, this is a shoe just like mine.
MADDIE PELZ: And this is a study about infants' understanding of physics, and so there are some that, you know, look sort of magical. So the idea is whether kids will attend to one or the other. If you're familiar with looking time studies, it's exactly one of those.
- Well done. You can turn around now. We have just a few final questions for you.
[END PLAYBACK]
MADDIE PELZ: And so that attention grabber is just one example that-- you can use any video to upload for that to get the baby's attention back on the screen. So all of those things are editable. You can use whatever videos you need. You can decide the placement of those videos, etc.
And so this is the last step. This is just confirming your child's birth date, which I think I said was something like that. 12, I guess. And like I mentioned before, we ask parents if they're comfortable sharing their data with Databrary just to make it even more open science-y.
And then we ask people what we can use their video clips for. So private would be just the researchers working on the study, plus authorized Databrary users if they agreed to share. And this is something you might be familiar with asking participants in general when they're in the lab, because sometimes we like to use their video in a talk, for example, and then also publicity.
So some people are really excited about sharing their videos. And we can use it on the Lookit home page or on social media when we're trying to use it for recruitment. So they have options for those.
And then there is an option for withdrawal of video data. So even if something went wrong, we encourage people to keep their information. But in case there is something-- you know, I think Kim said if your spouse was discussing state secrets in the background, et cetera, so if there's really a reason you need to withdraw your data, people have that option here. And then you can submit.
There's a quick thank you and a debrief just to give them information about the study that they participated in, and then you can exit. And it should take you back to-- so this is taking you back to the past studies that the family has participated in. So do people have questions? We're going to go into the code of that and break down what we saw and what's editable and everything. But do people have questions about kind of the structure of that task or anything?
So if I go back to experimenter now, and I go into that-- and you guys can follow along with this part now. You can look here. And I think I'll ask-- I'll kind of give us a little break to mess around with things and then we can come back again. But just quickly, I'm going to go through.
So on the homepage of your study, you can see the quick information. You can see the status, whether you're-- so right now, the study has not been submitted for approval. So we're just kind of editing and working on it. Eventually once you get feedback from the community and you feel ready to actually host it on Lookit, you can submit it for review using this dropdown.
I can add different collaborators. So for example, if I was working with an undergrad or collaborating with someone, I could add them to this and they would also have the ability to edit and update this study. But if we go to edit-- so here's where we kind of have the main information for the study. So obviously, you'd want to name it something a little more exciting than copy of example study if you were trying to get parents excited about participating.
You can choose an image to represent it. You can describe what happens and the purpose of the study. So what happens is just kind of a not-too-detailed description, something like "you and your child will read a story and answer questions for 10 minutes." Compensation information-- and then someone had asked about eligibility, so if you're looking for children with certain disabilities or anything like that, this is where you would express that.
So for example, you can use these expressions to say Deaf or hearing impairment. So these are different things that parents would have filled out using the demographic survey, or specified when they entered each of their children into their Lookit account. And so you can look at the documentation and find out the easiest ways to write that. But you can definitely recruit particular populations that way.
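As a sketch, the eligibility criteria field takes a short boolean expression over the characteristics parents report when they register their children. The example below is based on my reading of the researcher documentation, so the exact characteristic names and operators should be checked there rather than taken from here:

```
deaf OR hearing_impairment OR NOT speaks_en
```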
Then you set the minimum and maximum age cutoff. So that's how it can automatically decide, like Rico was saying, who to email in terms of these pushes to participate, and also just who you want to collect data for your study. Here, you'll definitely want to put your name here so that your contact information is there, and your lab here.
And so there is a new feature on Lookit where people are split into different labs. And so now-- I'm in a few different labs here, but I just want to make sure that I have that one saved, because I can change who has access to that study. The last thing here, we'll get-- the protocol configuration is where you're actually going to edit and make changes to your study. We'll get to that in just a second.
But the last thing to say is just the experiment runner. So we just all created our experiment runner for the study today, so it's automatically the most up-to-date experiment runner. But if you had used kind of an older version of Lookit to make your study as things were getting updated, you might want to update your experiment runner so you have access to new features and new bug fixes and things like that.
So it's kind of like version control for the experiment runner. You can copy this code and save it, and then if something new changes that messes up how you wrote your study, you can keep using that old experiment runner. But it's a good idea to keep up to date if it works.
Great, so let's go into this protocol configuration. And I'll just talk quickly about how this is setup. So it looks a little crazy when you first click it. So this is just kind of a block of text. But if you click this nice little beautify button here, it will kind of organize everything into nice chunks. Is my font readable here, or should I zoom in?
PRESENTER: Maybe zoom in a little bit?
MADDIE PELZ: Let me see if I can just--
PRESENTER: Yeah.
MADDIE PELZ: OK. So Lookit studies are written in JSON. So it's organized into two main things. So you can use these little arrows on the side to collapse and expand different things. So if I just collapse frames and sequence, you can see there are these two main components of a Lookit study.
So sequence is just a list, which includes the names of each of the frames. So what we went through you'll notice is we had the configuration of the video. We gave consent. We read instructions. We've previewed the videos-- or we had the option to.
We filled out that survey where I wrote about the dog. We saw those last little instructions. It checked our video quality. We actually went through the physics videos. And then finally we filled out an exit survey.
So this is defining the flow of the study, but each one of these is just a string. So it's just a list of words. It doesn't necessarily mean anything in terms of how the frames are created. So where we adjust that is in frames.
So each of the items in the sequence list refers to one frame. So for example, for each of these, there should be a frame that's defined here. So now within frames, I'll minimize each of the different frames.
You can see that now each of these, not necessarily in the same order, but each of these refers to this. So frames is essentially a dictionary of those things that it can reference. So when you call the sequence, it will look for the frame that you've defined, and play in that order.
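Just as an illustration of that two-part structure, a stripped-down protocol configuration looks roughly like the sketch below. Real frames need more parameters than shown, the exact kind names should be checked against the frame player documentation, and the frame names on the left are just labels you choose yourself:

```json
{
    "frames": {
        "video-config": { "kind": "exp-video-config" },
        "video-consent": { "kind": "exp-lookit-video-consent" },
        "study-instructions": { "kind": "exp-lookit-instructions" },
        "exit-survey": { "kind": "exp-lookit-exit-survey" }
    },
    "sequence": ["video-config", "video-consent", "study-instructions", "exit-survey"]
}
```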
So for example, if we look at the survey, each of these is editable in different ways. So for example, you can change-- here's a simple example survey. You can say, please fill out this survey. And that will change what participants see as they go through. So you can change the list of the different options, things like that.
So for each of these frames, you're able to edit within them. So if we go to consent-- this is again in response to that question we got about the consent form-- here are the things that you have control over. So you'll remember there was that purpose text, "Why do babies love cats?" We can change that inside of this frame here, and that will then play out when we see that consent form.
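For reference, the editable pieces of that consent frame sit inside the frames dictionary and look roughly like this. The field names here are my reading of the video-consent frame documentation and should be double-checked there, and all of the text values are just placeholders:

```json
"video-consent": {
    "kind": "exp-lookit-video-consent",
    "template": "consent_005",
    "PIName": "Your Name",
    "institution": "Your University",
    "PIContact": "Your Name, you@university.edu",
    "purpose": "Why do babies love cats? This study will help us find out.",
    "procedures": "Your child will watch short videos while we record from your webcam.",
    "payment": "After you finish the study, we will email you a $5 Amazon gift card."
}
```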
So I think-- that's, like, the overview of how we structure it. And I think one interesting thing we might do is try to-- first, let me just show you. When you're working on your study, sometimes you don't want to have to go through all of that video configuration thing. You just want to make sure that your test study, your test frames are working properly.
So for example, this morning I was having an issue with getting those videos to play properly. So instead of going through the 10-minute study where I waved at the webcam five times-- which is really important for when parents are participating, but not necessarily when I'm just going through-- you can just leave all the frames how they are, all the definitions in that dictionary how they are, but you can move around the sequence as a way to just quickly test the frames that you care about. So for example, maybe I want to make sure that I have the instructions, just to make sure that I'm in the right place.
And then you can always click beautify again. But so now, it will go through the sequence. And the first thing we'll see is the instructions, and then the videos will play. So in that way, you can make small changes, and you'll be able to see them play out.
So just to make it worth it to preview this again, let's change a couple things in the instructions. So, let's see. So it says try editing this so it says something silly. So we'll just say, like, welcome to our magical study. This will make your child get into college for free. OK, that's everyone's dream who's participating in child studies.
So then, all you have to do-- now that frame is edited. You say close, and then at the bottom, you have to click save changes. So if there was an error in the JSON, say if I had an extra parenthesis or something, it would give me an error and I wouldn't be able to save it. But now we know that it's fine. It went through. So now we can go ahead and preview that study again.
So I'm a little bit zoomed in now. But we can go in, say preview, and now it should take us right to those instructions.
[VIDEO PLAYBACK]
- All right. Go ahead and--
MADDIE PELZ: Then right into the-- we don't actually have to go through that.
[END PLAYBACK]
So when you press escape, it's set so that your study can pause. And then you can just X out of the screen there. So that's an example of how you can make quick edits and go through and make sure that your study is working the way that you want to.
But I think one thing we can do is kind of like an exercise in editing our study. So one interesting thing we might want to do is go back in and go to where we can edit. And one important website to get to know really well is this. I'll put it in the chat.
So this is the frame player documentation. I just added it to the chat. Sorry, do people have questions about anything that I just showed in terms of the organization of the study? We'll spend more time in the code, but in terms of, like, the frames or the sequences? OK.
So if you take a look at that link that I just sent, now we're in the frame documentation. So the one it was set to was the images-audio frame, which is a really flexible frame type where you can show images for a certain amount of time. You can show a progress bar. You can kind of use it for storytelling-- you can tell a story about these two pictures and have certain images highlighted at different times. And you can ask children to click on one of the images and provide feedback.
So I have a study that I'll show you after we're done going through this one, where I use this framework as a storybook task. So I bring up images, I talk about them, and then I offer kids a choice. And that's kind of my data collection measure: when I show them the images, which one do they say matches the description.
So this in combination with the looking time paradigm that you saw in this example study I think covers a really good flexible amount of the types of studies that we run, although I'm interested to hear if people have experimental designs that they're thinking about trying to run online and ways that you can think about whether a frame might work for that or not. And there is the opportunity to create new frames, it just requires a little bit of programming knowledge.
So you can see along the left side here, these are all of the different frame types. So here we're in the images audio, but if we go to preferential looking, you can see that this is what was used for the physics videos that we saw in the example study. So there is an attention getter, there's some intro video, and then the looking time study is starting.
So on each of these pages, you have kind of an image. You have a description of what it's doing, ways to kind of specify different parameters. So here you have to specify where your media is located, for example, to make sure that it knows what videos to reference and play.
And then there's an example usage. So here's a whole trial that you can copy and paste into your study. So instead of using this one, I think we should maybe add an images-audio frame, because we already have a looking time one in the study that we have.
So if you go to images audio-- and this is one thing that you guys can try to do in your own account. So on this page, you see there's not just one example of a frame, there's actually a whole frame list that includes image one, image two, image three, these different examples. So go ahead and pick.
Let's see what they're doing. So in image three it looks like-- let's take that one. So if you just copy from image three, that's going to be a whole frame all the way down before image four. And so because it's a dictionary, each one is defined by these curly brackets. And so just want to make sure-- we'll make sure that the commas are taken care of after, but go ahead and find that images audio frame and copy image three.
And usually, you need to make sure that all of those-- the videos and the audio that you're referencing-- are in a place that you have control over. So you'll want to make sure you're using your own videos and things like that. So personally I use my MIT online Athena locker to hold my stimuli, and then I can reference that. So you can use Fetch or different things to kind of host your images online. And then you can reference them in your Lookit platform.
But here, Kim has this website with placeholder stimuli that she's used here. So go ahead and copy the image three frame. And then if you go back to your editing page for the example study, one thing I like to do when I'm copying in new frames is just minimize everything so I can really easily see the structure. Because one thing that does get tricky with Lookit is making sure-- especially if you're newer to coding-- that all of your commas and brackets and everything are in the right place. So it's just easiest to see when everything is minimized.
So like I mentioned before, the order of the frames here isn't important; what matters is the sequence, as long as everything is there. So what we want to do-- because this is a dictionary, there are all of these commas separating the entries. So after final instructions, or anywhere you feel inspired, just put a comma so that it knows that there's another object. And then, paste in that image three.
And it might look a little wonky. But if you press beautify, it will expand everything, which is annoying. But it'll make sure that everything looks fine. So now you can see that image three is there. And when you minimize that, you can see that frames is still a complete dictionary and there's no hanging comma or anything at the end.
So are people able to add that in? Let me know if I need to slow down or speed up. OK, so if we-- here's a pop quiz. If we save this and play our study, would anything look different? No. OK, great. I see a few people-- OK, so great.
So yeah, we've added it to our dictionary of frames, but we haven't actually told Lookit to reference that entry yet, right? So one thing we want to do is make sure we actually add that to our study. So let's go ahead and add that to our sequence.
So the only important thing at this point is, again, to make sure your commas are right. And also to make sure-- if I did image_3, it would break, and it wouldn't know what I was referencing. So make sure that you're using the exact-- you can even copy paste if that's easier, but the exact reference string in there. So now we should see the instructions. Then we should see that image audio frame, and then we should see the video start playing.
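Schematically, the two pieces you've just touched end up looking something like the sketch below, with all the other frames collapsed. The contents of image three come from the documentation page you copied from, so the particular image and audio names here are only illustrative, and the base directory is a placeholder for wherever your stimuli (or Kim's placeholder stimuli) are hosted:

```json
{
    "frames": {
        "image-3": {
            "kind": "exp-lookit-images-audio",
            "audio": "wheresremi",
            "images": [ { "id": "remi", "src": "happyremi.jpeg", "position": "fill" } ],
            "baseDir": "https://example.com/your-stimuli/",
            "audioTypes": ["mp3", "ogg"]
        }
    },
    "sequence": ["study-instructions", "image-3"]
}
```

The string in sequence has to match the key in frames exactly, which is why typing image_3 with an underscore would break it.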
So let's go ahead and close and save this. Oh actually, I wanted to show you-- see, it's unhappy with me. So I did something wrong. Where's my wrong JSON? Image three--
AUDIENCE: I think it's the image three double quotes.
MADDIE PELZ: Oh, thank you. Great. I was going to do something on purpose. Thanks, [INAUDIBLE]. Hi. So I was going to do something on purpose to show you what happens, but did you see how it wouldn't let me save and everything turned green? So make sure that all of your parentheses are closed, quotes are closed. Great.
So now I have image three in quotes, and I can close and save. Perfect. So now if I preview that, we should see that new frame show up. So we can preview. So here's our instructions again.
[VIDEO PLAYBACK]
- Where's Remi?
MADDIE PELZ: And there's that new frame that we added.
- All right. Go ahead and turn around-- study paused.
[END PLAYBACK]
MADDIE PELZ: Great, so now, I think let's get a little bit deeper into how those frames are set up. So if we go into the-- great. So sorry, do people have questions about that? Were people able to get that working in their own study? OK, great.
So now, I'm going to send you guys another link to this placeholder stimuli. So again, you would have this hosted in your own place, but right now we can just pick and choose from what Kim has provided. So this is just kind of a simple website where she has images, audio, and videos that you can reference.
So if we look into this image three frame and we expand it-- so again, you can just click this little arrow to expand it-- the first thing you'll see in every frame is a kind, and that's defining which type of frame it is. So for example, in the instructions, you'll see that the kind is exp-lookit-instructions. And you'll be able to find all of those types in this frame player documentation like I was mentioning.
So if you see here, there's exp-lookit-instructions. You can click on that and read about how to set instructions, how to edit different parameters, and see some examples to stick in and test out. So for this image frame, the kind is exp-lookit-images-audio. And what you can see is that there's a set of images, each with a source, plus an audio clip that you can set.
So there are some that are a bit more complicated, but we can start just by editing this. So just test out what happens using this index if you go into images and pick a different image. So you can pick whatever one you want, but I'm going to go with twocats.png. So what we can say is-- so here this image has-- it's a list of images, but right now there's just one image defined in that list.
So again, you can close and open things if that makes it easier to see. So here's the images it's referencing. And I just have one, so I'm going to tell it that the source is this title. And then I don't think ID necessarily changes anything, but that's just kind of how you're referencing it later. And then I'm going to have cats, so it doesn't really make sense to say where is Remi, so I'm going to go back into the index and find some audio that makes a bit more sense.
So let's see, maybe just some music or some chimes. So I'll use chimes.mp3. So one thing to note is that you don't actually need to say .mp3 here, because later on you've identified the audio types, so that already has mp3 inside. So depending on people's operating systems when they're accessing this audio, it can reference chimes.mp3 or chimes with a different audio type, if they need one.
So you can just say here's the audio that I'm referencing. Here's the image. And then the reason that we can just put in the word chimes and twocats.png is because we have this base directory defined here. So we've told it that we're in this placeholder stimuli folder. But if you have your own base directory, of course you need to replace that with your own URL. But then it would reference that.
And if you'd rather not use base directory, if you have things located in a bunch of different places, you can always delete that line and put the full link in the source location. So you can tell it exactly where each image is. And so while we're here, let's edit a couple more things. So right now, it's not turning on the camera to record this trial, but you can make it turn on if you'd like to record children while they're watching the image.
You can set the duration. So let's make that, you know, six seconds. And then you can change the text here. So you might have seen at the bottom there was a little text box, and you can say you are a parent. This box is for you. All right.
And then that progress bar you saw was just one thing that you can take in and out. So that-- someone decided that they wanted to include that. So either you've been doing the same things as me or you've been messing with your own. And so you can go ahead and close and save that. And now when we preview, we should be able to see that all of those things are updated.
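Putting those edits together, the frame ends up looking roughly like this. The parameter names (durationSeconds, doRecording, parentTextBlock, and so on) are my best recollection of the exp-lookit-images-audio options and should be verified against its documentation page, and the base directory is again a placeholder for your own hosting location:

```json
"image-3": {
    "kind": "exp-lookit-images-audio",
    "audio": "chimes",
    "images": [
        { "id": "cats", "src": "twocats.png", "position": "fill" }
    ],
    "baseDir": "https://example.com/your-stimuli/",
    "audioTypes": ["mp3", "ogg"],
    "doRecording": false,
    "durationSeconds": 6,
    "parentTextBlock": {
        "title": "For parents",
        "text": "You are a parent. This box is for you."
    }
}
```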
So I'm just going back through. Now I see the instructions.
[VIDEO PLAYBACK]
[CHIMES SOUND EFFECT]
[END PLAYBACK]
MADDIE PELZ: Now I hear my chimes, and I see the picture of the cats, and there's this very important text box at the end. So did people get that to work? And did they use other inputs? OK.
So one other thing I wanted to do, just because I think it's really a common frame type, is go back in and edit and add in a different type of looking time frame. Because in this looking time frame, we can see what it looks like by going back into the edit pane. You can see it here for preferential physics videos.
This is a really specific thing that it's referencing, because those physics videos are hosted within Lookit itself. And so this isn't necessarily one that's easy to edit. So I think what we can do is go back into that frame documentation that we had, go to the preferential looking page, and just grab one of those frames and take a look at how it's working. So if you go to that page-- I'll just put it in the chat again.
And at the bottom, you can see there's something called sample trial. Oh, and one important thing to note is that these names aren't special. So I'm just going to paste that at the bottom, but I don't want it to be called sample trial. I want it to be called looking time trial. That's totally fine as long as you reference it by that name in your sequence list.
So now I should be able to close that. And again, just make sure you have a comma after each of these. So now I've just added that new frame, and I've added it to my sequence. And now when I close and save it, it should come up as another looking time example in the beginning of that study. So there's my instructions again.
[VIDEO PLAYBACK]
- Video two.
- Look, this is a book. The [INAUDIBLE] in the [INAUDIBLE].
[BEEPING NOISES]
[END PLAYBACK]
MADDIE PELZ: So this one has funny tones in the background. That might be something we want to change. But there is a looking time trial. So two images came up, and it's doing screen recording. This is a really erratic study. So this isn't necessarily something you'd want to put participants through, but this is just for examples of how the flexibility works. So were people able to add that frame and have it work?
PRESENTER: So I have a quick question, Maddie.
MADDIE PELZ: Yeah?
PRESENTER: Let's say we want to show a sequence of three frames with three images. Do we have to define those images separately? Or is there a way to do it--
MADDIE PELZ: There is a way to have it repeat. So one thing you can do-- there is documentation for this, which I can quickly point you through. But what you do is you define that one frame, and then you give it something called like a sampler. And you can tell it whether you want to-- you know, you could give it a list of eight images and say, I'd like you to pick three of those eight.
So you do have to define each of the images that you would like it to reference, but you don't have to copy paste the frame every single time you want to use it. So there are built-in randomizers and things for repetition. That's a good question. And that's really common, right? In randomizations and things like that, you'd want to be able to shuffle them or pick random items. Are there any other questions?
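As a rough sketch of what a randomized block can look like, here is a frame group using the random-parameter-set sampler, modeled loosely on the randomizer documentation. The sampler name, the way placeholder strings like TRIAL_IMAGE get substituted from the chosen parameter set, and the options for sampling several items out of a larger list should all be confirmed against that documentation:

```json
"story-trials": {
    "kind": "choice",
    "sampler": "random-parameter-set",
    "frameList": [
        {
            "kind": "exp-lookit-images-audio",
            "audio": "TRIAL_AUDIO",
            "images": [ { "id": "pic", "src": "TRIAL_IMAGE", "position": "fill" } ],
            "baseDir": "https://example.com/your-stimuli/",
            "audioTypes": ["mp3", "ogg"]
        }
    ],
    "parameterSets": [
        { "TRIAL_AUDIO": "chimes", "TRIAL_IMAGE": "twocats.png" },
        { "TRIAL_AUDIO": "music", "TRIAL_IMAGE": "happyremi.jpeg" }
    ]
}
```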
AUDIENCE: Yeah, in relation to that, how long is a typical study on Lookit? Like what is sort of the maximum amount of time that you can--
MADDIE PELZ: Yeah, that's a good question. I'd say similar to in the lab. So if it's an older kid and it's a really engaging storytime task, maybe 15 minutes. If it's a looking time task, there is definitely the ability to pause and come back. So if you need a certain number of trials, parents can do it over the course of a little bit of time. But I'd say like a few minutes for infant studies and maybe up to 15 for older kids.
And you can always do repeated designs where then, you know, a week later, you ask parents to come back and do it again. But it kind of goes along the same rules as lab work where kids get fussy and they don't want to look at the screen for too long unless it's really engaging. Great, so I just wanted to quickly go through and show you how that looking time frame looked.
So there's our image one-- oops. Great, so-- oh, I should beautify it. All right, I won't go through all of this. Final instructions-- so you can see it's a bit hard to navigate, but there is our image one, and here's our looking time trial.
So you can see now, kind is set to this preferential looking. So that means that you can always look up that frame player documentation and make sure you know what all the inputs mean. But here, it's using a different base directory and different images here.
So one thing we could do is we can change that to our index. So let's go back. So I'm just going to take that website that we used before-- put that into the base directory. And then if I ran this, it wouldn't know what these images were, so you would just kind of get a white screen.
So that's actually one thing that might be worth demonstrating. If it has trouble with that, one really important thing to know is that it will save as long as the JSON is valid, but it won't catch things like a missing image until you try to preview it. So one thing that's good to know is when you're previewing a study, you can use your developer tools as a way to try to understand what might be happening with your study when something goes wrong.
So I'll just turn that on now as I'm previewing this, and you'll be able to see-- hopefully it will tell us, like, this image can't be found or something like that. So I'm going to go-- and on Chrome, there is this little three dots here. You can go to more tools and developer tools. And I think it's similar in other browsers. But if you look up developer tools, you should be able to find it.
So one warning I get right away is that the first frame is not an exp-video-config frame. So Lookit recommends starting with that frame, but that's not something that breaks the study; it's just a recommendation from the developers. That's kind of the ideal way to start your study, to make sure the webcam is working properly. So that's why it's kind of this yellow warning sign. So there's my instructions.
[VIDEO PLAYBACK]
- Video two.
MADDIE PELZ: So now it's going through this looking time trial.
- Look, this is a book. Like [INAUDIBLE] in the [INAUDIBLE].
MADDIE PELZ: So here-- so now you can see, it's pretty clear actually that there is an error even on the screen. So you can see that it failed to find those images. So it did in fact move on, but it warned me-- sorry, this is a chaotic study.
[END PLAYBACK]
MADDIE PELZ: OK, so let's go back in. And if you're following along in your own study, go ahead and pick a few different images for that trial. So again, I'm working in this looking time trial frame. And the base directory I've already set, but now I need to tell it what this left image is. So in the original one, it was that stapler, but I'm going to go ahead and open this off to the side so I can see it.
So I'm going to just go in and see what those different-- let's go in. So let's say I want to see whether kids like to look at a happy kid or a sad kid more, or which one they choose to look at more. So there's one called happyremi.jpeg. And then on the right side, I'm going to have sadremi.jpeg. And I believe the attention grabber is also a video in this. So if I go into the mp4 folder, there is in fact an attention grabber video. And in the audio-- so hopefully you can follow along on your own and add in whatever you like.
I'm going to make it say peekaboo to see if it gets people's-- could get their kid's attention. And then there is this announcement video where it said-- the video came on and it said, "Video two". And then there's also calibration audio.
So one thing you can do if you're doing looking times is you might have a calibration swirl happening on the left side and then on the right side, so that your coders get a sense of what the kid looks like when they're looking to the left or the right, to make coding easier. So I'll make this chimes, and I'll keep the attention grabber on.
So I think-- we just have to change that tone. So I'm just going to say music. So these are all things that I'm looking at using this index from Kim's placeholder stimuli. So now hopefully that has all the information it needs. It knows the base directory. It knows the two images, and then it also knows the different names. Oh, let me make sure that cropped book is a video. It is, but we'll just change it so that we can see.
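Put together, the edited frame ends up looking roughly like the sketch below. The base directory URL is a placeholder, and the parameter names are approximations of the preferential looking frame's inputs (the exact names are in the frame player documentation); the stimulus names are the ones chosen above, with audio and video given without extensions so the frame can fill in the listed file types:

{
    "kind": "exp-lookit-preferential-looking",
    "baseDir": "https://www.example.com/placeholder-stimuli/",
    "audioTypes": ["mp3", "ogg"],
    "videoTypes": ["mp4", "webm"],
    "leftImage": "happyremi.jpeg",
    "rightImage": "sadremi.jpeg",
    "attnSources": "attentiongrabber",
    "announcementAudio": "peekaboo",
    "calibrationAudio": "chimes",
    "audio": "music"
}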
All right, do people have questions about this? So each of these things should be explained in the documentation, which makes it pretty clear what types of things you need to fill in. And it's always really nice to start with an example. So as I was first learning, I would just copy over those and then keep editing it until it was the frame that I needed it to be.
And we're working right now-- a group of people are working on an onboarding working group for Lookit, where we're trying to make the onboarding process a little bit smoother for people. And one thing that we're focusing on is getting a few examples of full studies that we think are really good examples of how to introduce parents to the study, how to get them all set up, and then also just good examples of the structure of the study itself. And so hopefully those will offer a chance to not just have individual frames, but a whole study that you can then work off of as an example.
So that's one example of it being really nice to have this be a collaborative space where people can share code and share advice across different labs who are using this. So let's go ahead and preview this one more time. So there's our instructions.
[VIDEO PLAYBACK]
- Peekaboo!
MADDIE PELZ: So now it says peekaboo.
- Look, this is an eraser.
MADDIE PELZ: There's my eraser video instead. And then there's happy Remi and sad Remi. OK?
[END PLAYBACK]
MADDIE PELZ: So it's pretty straightforward to make those changes for the things that you need.
JANELLE: It looks like there's a question in the chat for [INAUDIBLE]? So the question is, have people run studies in Europe? Do you know if there are GDPR issues?
MADDIE PELZ: What's a GDPR issue?
[INTERPOSING VOICES]
AUDIENCE: This question doesn't need to be asked right now, but there is, like, data protections in Europe where there's all these problems with who can access data. So I'm just wondering if--
MADDIE PELZ: Yeah, that's a good question. And I think the Lookit Slack, which I'll talk about in a second, is the right place to look at that. I know there's been some discussion of people having trouble with their IRBs getting things accepted. But I think it's been solved. And we definitely do have people in the UK who've been running. So I think it's something that's doable, if just a little bit more difficult when dealing with IRBs and ethics [INAUDIBLE].
RICO: Yeah, so from my understanding, I think we are GDPR compliant, because that's really-- it's just a matter of being able to, like, respond to requests to delete information. So we have the ability to do that. Kim knows more about the specifics on the legal side, because I think we-- we basically have worked with MIT's legal team to ensure that we are compliant within reason.
Because I think it's-- some of the requirements are a little fuzzy. But yeah, I mean-- again, I think as Maddie said, Kim would be the right person to ask about this.
MADDIE PELZ: There's people who have gone through that for sure. So we can-- I can point you to the Slack at the end, and that's definitely a place you can get a lot of answers about a lot of things.
AUDIENCE: Thanks. Sorry to interrupt.
MADDIE PELZ: No, that's totally fine. Any other questions? This is a good-- or do people have studies in mind that they think those frames aren't necessarily covering? Or they're wondering if it might be relevant for Lookit?
AUDIENCE: I have a quick question, which maybe you're going to get to, which is what does the data look like when you get it back from the study.
MADDIE PELZ: Yeah, that is a good question. I actually don't have any data yet, because I haven't run participants yet. But what it looks like is-- I can open up the-- so this is the documentation. I'll send this to the chat too. It's just a very helpful thing to have.
So I just mentioned this quickly, but as you're going through-- as participants are going through your study, you can review their-- first, you review their consent videos to make sure you have a full consent recording for each participant. And then once you've approved those participants, then you're able to access their data. So what the data looks like are a lot of videos. So you can kind of batch download videos in order to code.
And then I am not actually positive-- sorry, this is where me being not a developer is coming through. I'm not actually positive what format you get the clicking data and things like that out of. I'm guessing you can download it as a CSV or something very straightforward. If Rico is still here, he might be able to answer that. But I'm just looking for-- and there's a guide to--
RICO: Sorry, what was the question?
MADDIE PELZ: How does data look when it's not a video and it's instead, like, clicking information? Is it downloadable as a CSV?
RICO: Yeah, so all of the data that you collect is like-- there's like a response model, essentially. And it's got-- whether or not you collect video data, the platform is designed for that. But response data is pretty orthogonal. You can do survey stuff as Maddie was showing earlier. It's all downloadable either as CSV or JSON. Yeah.
MADDIE PELZ: Great, thank you. I just sent a page in the tutorial that's specifically about how to manage your data. And there is a full tutorial that Kim put together that's really amazing, which is essentially how I learned to use Lookit in general. So you can see this is step six. So there's this whole thing about setting up a study. You can practice building a study from the ground up, and then this page is all about data. So you can even use the practice study to create data and understand how it's downloaded and things like that.
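As a purely hypothetical illustration of the shape that non-video response data takes when you download it as JSON, a single frame's record might look something like the snippet below; the field names here are made up for illustration and are not the exact Lookit schema:

{
    "6-looking-time-trial": {
        "selectedImage": "happyremi",
        "eventTimings": [
            { "eventType": "clickImage", "timestamp": "2020-09-03T14:05:12.345Z" }
        ]
    }
}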
I just saw another question in the chat about recording voice. And as far as I know, for my study, I recorded everything myself. And I know that that's Kim's voice in the examples. So I'm pretty sure that it is all just referencing mp3. So I think you'd have to use some external program if you were going to do that.
But I would worry a little bit about a computerized voice rather than like a nice, child-directed speech for this kind of thing in terms of just keeping kids' attention. But if you could find something that would work, then I think you'd have to do it externally. Great. Sorry about the dog bark.
So I think I was going to pause at this point and just kind of wrap up. But if other people-- I will definitely point you to the tutorial. That last link that I sent is the end of the tutorial. But if you go up to the front, it starts all the way at joining the Slack workspace and kind of getting started. So you've already gotten-- you've already checked off some steps by making your Lookit account and practicing a couple things. But I think I'm going to stop sharing this and just do a couple more slides.
This is one example from a study that I'm running right now. So I just wanted to show you a way that I was using the storybook task as a way to kind of go through a story and then ask participants to make a choice. So I just was going to play that. And hopefully the audio comes through. Let me know if not.
[VIDEO PLAYBACK]
- This is a family of aliens.
MADDIE PELZ: Can you hear that?
- They live on Planet [INAUDIBLE], and love going on trips around outer space, so they each want a new spaceship to fly. This is another family of aliens. They live on Planet Zork, and also love going on trips around outer space, so they each want a new spaceship to fly.
Look at the two families of aliens. How are they the same? How are they different?
This is a spaceship store. One of the alien families likes to shop for spaceships at this store. This is another spaceship store. The other family of aliens likes to shop for spaceships at this store.
Look at the two spaceship stores. How are they the same? How are they different?
Now I'm going to show you one of these spaceship stores and ask you to guess which alien family likes to shop at that store. Look at this spaceship store. One of these alien families likes to shop at this store. Can you point to the alien family that you think shops at this store?
[END PLAYBACK]
MADDIE PELZ: So I'll just pause that for a second. So you can see, it's combining those really simple frames where you just bring up an image and say something. And this is also, actually, another image-audio frame; it just allows for clickable objects. Kids can point, and parents can click on different things, and it'll record that data.
And you might have seen when it was first loading this page it said video recording connecting or something like that. And that just means that during this frame, I had set that recording to on so that then I could get that video and make sure parents weren't encouraging their kids to point in a particular way and if kids were kind of offering a spontaneous explanation, just being able to capture that video as part of the data collection process. So that would be an example of kind of an older kid study you might use.
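A minimal sketch of that kind of trial, assuming an image-audio frame that allows a clickable choice and with webcam recording turned on; the stimulus file names and URL are hypothetical, and the exact parameter names should be checked against the frame player documentation:

{
    "kind": "exp-lookit-images-audio",
    "baseDir": "https://www.example.com/alien-study-stimuli/",
    "audioTypes": ["mp3", "ogg"],
    "audio": "which_family_shops_here",
    "images": [
        { "id": "family-1", "src": "alien_family_1.png", "position": "left" },
        { "id": "family-2", "src": "alien_family_2.png", "position": "right" }
    ],
    "choiceRequired": true,
    "doRecording": true
}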
And then I just wanted to talk about a couple next steps. So there's this Wiki on the Lookit GitHub. If you just go to github.com/lookit, there's a lot of different resources. But that's a place to kind of start and read about if Lookit might be right for you as a researcher and how to go about that first process. And here's just a quick summary of the next steps that you can take.
So like I mentioned, you can join the Slack workspace. And that link is-- I can put that in the chat in a minute, but that's also included in all of these different sites. That was the first step in the tutorial, if you noticed. The next step, which we got some questions about, is understanding the legal and logistical steps of making sure that you have IRB approval to test on Lookit.
You can get to know the Lookit working groups. I mentioned those quickly just now as a way to onboard new people. There's a core Lookit team at MIT, but these working groups and the Slack are also a way for community members to contribute to this open source project. And I think one of the most important parts is that, as a person using Lookit to collect data, help your research, and get access to these free tools, it also really helps to kind of give back and support the open source nature of the project by contributing in different ways.
So one of the ways to get involved is to contribute to one of those working groups. You can review the documentation and go through the tutorial, which really, from beginning to end, will get you set up with most of the types of studies I think you might need. Then you design your study.
You can use Slack to get feedback from peers. So you can post it and say, can you take a look, and make sure it makes sense, and make sure all your instructions are straightforward. You submit it for official review after you've incorporated that feedback. And then finally, you can collect your data.
So I just wanted to kind of show-- this is the link to the tutorial. And again, I think I sent that just a minute ago. And then finally, just different ways to contribute. So you can help people troubleshoot on Slack. You can review their studies and give peer feedback as an observer who doesn't know the exact thing that they're going for, but who can make sure that it's clear for parents.
You can help with recruitment. So naturally, if you have a study on Lookit, anyone that you recruit to Lookit will also then have access to other people's studies. So that's the way that we can build up this great kind of large and diverse recruitment pool. You can join a working group.
And then finally, I wanted to highlight these especially because this is the computational tutorial. If you have programming experience and you'd like to help in concrete ways using those skills, you can also help to address reported bugs and issues in Lookit because everything is open source. And also, you can develop new frames. So there's always people that are looking for different new things to do and use Lookit for. And so it's really helpful to have more people programming and helping work on expanding all of those different directions.
So that's all I have. I'm happy to answer any other questions.
JANELLE: It looks like there's a couple of questions in the chat.
MADDIE PELZ: Let me see if I can look at that. OK, perfect. One asks me to quickly list the kinds of stimuli and studies Lookit supports, and another asks about automated gaze studies. OK, yes.
So automated gaze coding is something that people are actively working on. It's not built into Lookit yet, so right now, people are doing hand coding. But it is definitely an area of interest, and people are collaborating now on using Lookit videos as training data for automated gaze coding.
And then the kinds of stimuli studies that Lookit supports are kind of-- it's hard to say everything, but essentially anything where you can use a mixture of video, audio, images, presented in different orders, clickable things-- like storybook questions-- forced choice tasks. Free response you can do because you can just record at any point. So you could just show a prompt on the screen and ask a parent to do something or ask a question of their kid and have their kid give you a free response.
And then as well as preferential looking and different studies like that-- so if there's a particular one that you're interested in, let me know and I can try to be more specific. But that's kind of the broad range of things that are available right now. So that image-audio frame and the preferential looking frame really do cover a wide range of different study types.
AUDIENCE: Is that-- does it record for things where they're clicking responses? Does it include RT?
MADDIE PELZ: Yeah, you can. You can get things like that, I think. So you can set up both screen recording, and I think also it'll record the time. Rico, is that true? I think so. Can you get response times?
RICO: Sorry, what was the question?
MADDIE PELZ: Can you get response times for-- like, if the audio was played, can you then see how long it takes a kid to click one of the buttons or something? I think yes.
RICO: Yeah, the timing is implicit. And you can also-- if you make a branch of the frame player, which is, I think, what someone from either Yeshiva or NYU is doing right now-- they have a programmer who's basically doing a branch of the Lookit frame player that's doing some really complicated stuff, like tracking the position of a ball in the air on this custom animated thing. Yeah, I mean, basically the sky is kind of the limit. So if you can do it on a computer, you can pretty much support it.
MADDIE PELZ: And then something about payment. So people usually pay with digital gift cards. So they'll email participants gift cards after they complete the study. And that's handled within each lab rather than through a main Lookit source. So it is a mix of paid and free, but I think the majority of participants are getting paid. And so that's kind of the standard that's set right now.
So things can be posted for free, but you're also kind of competing for the participant pool, who might be more interested in doing it if there's also a payment. So, something to consider. Thanks. Any other last questions?
And again, feel free-- you can email me questions. But a more direct approach would be to join the Slack, where you'd have a lot of researchers who are involved in Lookit to answer your questions, and a whole pool of different resources to access as you're getting going with your studies.