FindingFive: An online, non-profit platform for behavioral research
Date Posted:
May 4, 2023
Date Recorded:
April 28, 2023
Speaker(s):
Ting Qian, Noah Nelson
Description:
Tutorial on FindingFive
FindingFive is a non-profit organization dedicated to supporting behavioral scientists' web-based research by making it easy and cost-effective to implement experiments and collect data. With FindingFive, researchers can easily implement web-based experiments, recruit participants (either from a public pool or Bring Your Own Participants!), give cash rewards or course credit for completed studies, and export data for analysis, all in a single vertically integrated ecosystem that supports collaboration.
As a nonprofit, our mission at FindingFive is to build the technological infrastructure that allows researchers to focus on the science, as opposed to tedious challenges such as programming an online study, securing participant data, and staying compliant with ever-shifting privacy laws and regulations. In contrast to market-oriented survey tools, FindingFive caters to the specific needs of behavioral scientists, allowing researchers to create experiments by describing experimental designs instead of programming logic, with built-in support for common paradigms such as 2AFC, priming, self-paced reading, and language production tasks. Compared to loosely organized academic projects, FindingFive is a sustained effort that continuously develops and maintains the platform, so that research expertise can accumulate and grow among lab members of current and future cohorts.
In this talk, we'll discuss the ongoing efforts and challenges in our nonprofit's mission, and demonstrate how the current version of our platform has already enabled many researchers to collect data online and manage online research projects for entire labs.
MODERATOR: Thank you for agreeing to give us a rundown today about FindingFive, which is a platform for behavioral research. And I know a lot of people in this department are doing behavioral research. So we're excited to hear about it. And Ting is the president of FindingFive. And Noah, who's also on this call is the VP of research experience. And I'll let you guys take it away. Thanks again.
TING QIAN: Thank you guys for inviting us over. Thanks to Greta and Fernanda for organizing this, and also a shout-out to Roger Levy, who helped to make this happen too. I am Ting, and I'm the president of FindingFive. But that sounds more serious than what I actually do for FindingFive. So we are a nonprofit that Noah will talk a lot more about.
We maintain and develop an online platform for behavioral research called FindingFive. So there are a couple of us here from the FindingFive team today. And the plan for today is that Noah will give the talk. And me, Maho, [? Alan, ?] and also Monica will be the panel answering any questions that you may have during the talk, right?
So we want to try to make the talk happen within 30 minutes or so. Then we will have a lot more time allocated for answering any kind of questions. So with that brief introduction, I'll give it to Noah to start the talk. Thank you.
Let me give one more introduction so that the other team members can say hi. So Maho Takahashi actually works on [? research ?] [? support. ?] If you have questions about creating FindingFive experiments online or run into any kind of trouble, email our support channel; you will get Maho and me, but mostly Maho.
Monica, who was [? there, ?] is our extremely experienced new volunteer, who works for us on grant and fundraising efforts. Then there's Ellen today. She is sick. So am I; I think we both have COVID. I have a kid, so I always have COVID. But Ellen is our user experience engineer, so she makes sure that the interface of FindingFive works as it should. Back to Noah.
NOAH NELSON: Yeah, so I am the VP of research experience at FindingFive. I'm going to dive in; I don't know what Ting said to you guys while I was gone. So I'm just going to jump into the slides here. And I want to open by saying that we at FindingFive, part of our goal here today is to advocate, on some level, for the idea that experiments should be conducted online as the default way that a lab does research, to the extent possible, especially in behavioral sciences. I think, especially in light of the pandemic and what happened, we learned a lot about what we can do online.
And I think for a lot of labs it's still something they do in addition to their normal thing. So that is going to be part of what we talk about, as well as who we are and what we can do. But I'm going to try to keep that stuff all pretty short and try to have a lot of time for questions and answers, since I think anyone who might potentially want to try using the platform is probably going to be more interested in asking about particular experimental paradigms and how would I make this a reality in FindingFive. And so we're going to try to leave a lot of time for that.
So let's start by saying what the heck is FindingFive? And we are a 501(c)(3) nonprofit. We kind of like to think of ourselves as a sci-tech nonprofit because we are really focused on trying to advance behavioral research and social and cognitive psychology and sciences in general. And we have this great team of volunteers working on software engineering and content development for tutorials and documentation, grants, and business operations, and of course on the customer experience side, where we deal with the participant experience and the researcher experience. That is where I reside.
And we are an online platform. And we are really focused on trying to help researchers to empower their research through online studies. And as part of that, I should note that the vast majority of our volunteers, as we'll talk more about later, are ourselves researchers, right? And so we have this desire to be building FindingFive by us, for us, right? And so we hope that you will find what we do to be enticing for you. And we hope that you'll join our team.
So I want to open with a little bit about why we incorporated in 2018. Why did we make FindingFive into a reality? And this has to do with a number of things that were going on in the online research domain.
We didn't really like hacking together multiple toolkits, working on IT setups, doing all this nitty-gritty work just to run a study. And we really wanted to streamline that process and try to have something where you have this one-stop shop, this all-in-one platform taking care of the entire research pipeline, right? So we do everything from-- as we'll talk more about in detail, we do everything from helping you to design and build your experiment online to actually running it, recruiting participants, and compensating them, whether it's with course credits or actual monetary payment.
And we were tired of using clunky market survey software or things that were just geared to other audiences. And we really wanted something that was more native to academic research, so something that pays attention to those issues like the timing of participant responses, for example, that are important to us as researchers, but also something that just speaks our language and lowers that barrier to actually putting your study online and saying like, if I'm thinking in terms of trials and blocks, I want the software I'm using to be thinking in terms of trials and blocks as well.
And finally, we didn't really find a lot of support out there for collaborative work, for, say, a lab to work all together, and for that collaboration to be both internal and external. And so we wanted a more collaborative platform than anything that we could find out there. And this was a big part of our goal because no research is done in a silo, or at least no good research is, right?
So we need to collaborate with each other. We need to talk. We need to share. And in the ideal scenario, it's not just one person goes and handles all the stuff on Amazon Mechanical Turk, and someone else just crosses their fingers and says, I hope they know what they're doing, right? It's not the best or most productive way to do your research, in our opinion.
So these are a lot of the reasons for why we wanted to do it. But of course, the pandemic hit. Time has been passing. And there is a proliferation of tools. So why are we still doing it today? And one big reason for that is that the cost advantage, and therefore the data advantage, of online experiments is diminishing for a lot of these platforms.
You look back at 2017, and recruitment platforms like Amazon increased their fees to 20% to 40%. That's a price tag that starts to get pretty hefty for large studies; Prolific, for example, charges a flat 33%. These fees may be manageable for labs that have large grants. But for, say, up-and-coming researchers, or someone who works in a lab and wants to do their own research project and doesn't have the funding for it, that can really be a struggle.
And also research continuity is relatively hard to maintain when you have this proliferation of tools because everyone has their own thing they want to do. And of course, it's easy to do a one-time demonstration. But a lab needs to be able to repeat its paradigms. And if you have a consistent platform that you're using from A to Z in your research pipeline, that's a little better for trying to say, hey, RAs, you're going to use this platform. You're going to do it this way, and you have this continuation and continuity in the research paradigm. And PIs are naturally going to be hesitant, if they can't achieve that and if every student wants to do it their own way.
So that's a bit about how we started and why we're still doing it. We have here kind of a little chart to kind of highlight this a little further in maybe a more graphic way, right? There are a lot of platforms out there that do some or most of what we do. But nobody really hits all the marks in the things that are important to us.
And we hope that since we were researchers building this, we hope that you agree. So the all in one, I've talked a little bit about that. That's all about hitting the full pipeline, right? We're not just a recruitment platform or just a platform for coding the study to be performed online. We do all of it.
Then we have this academic focus: we use terms that are native and familiar and intuitive to researchers. And we fully intend to-- we're committed to that vision and intend to stay that way. We have built-in and explicit tools for collaboration. And we do everything that we can to keep our costs as low as possible because our goal is-- we're a nonprofit. Our goal is not to make as much money as possible. It's to do the most good for the research community that we can achieve.
So FindingFive, we see ourselves as a highly accessible platform for conducting research. It's a robust tool for building online experiments. It's a platform for collaborating and managing those experiments and rolling them out in studies. And it's a platform that allows for flexible recruitment as well.
So when it comes to these robust tools, we have this FindingFive study grammar that really enables researchers to create their online studies very rapidly, because it is built around the kind of language that you use for yourself and with your collaborators when you conceive of your studies. The collaboration and management of studies really allow you to manage the entire research workflow in this one place, which we think is a very important and powerful advantage. And our flexible recruitment allows you to either use our public participant pool or bring your own participants from, say-- I don't know-- the MIT Department of Psychology.
You might have a participant pool. And that might be where you want to pull your participants from. And to further help that, you can compensate these participants by either awarding course credits or cash rewards, right? So this is everything that we hope to do.
And one of the reasons why we're so focused on trying to do all of this is that we really want the barriers to online experimentation to melt away. And we want to make online experiments the default way for advancing research, not just an extra thing that the lab does. And this is important to us because we think our research is only effective if it's generalizable. And the populations that we are sampling when we do studies in the lab are oftentimes much narrower than we can achieve online.
So it's not the perfect solution. But we think it's the best one that we have. And so where studies can be done online, we want to advocate for that.
We have pursued this goal via the nonprofit path. And I've touched on this already. But there are some reasons for this that I maybe didn't touch on as much as I would like. So I think this is a good thing to remind you of: our idea is to stay closely connected with the researcher and participant communities that we serve.
So we view ourselves as servants trying to advocate for better research. And that involves participant communities and researcher communities alike. But we want them to be the driving force behind the directions that FindingFive takes.
Really, new features that we add to FindingFive tend to have their origins in requests from our researcher users. So they come to us, and they say, hey, here's what I'm trying to do. Am I just missing something, or is FindingFive not capable of that yet? And if there is something that we're lacking that they want to do, we work together. We build a feature. We make it a reality, right?
And this is a really powerful thing that we can do, that if we were beholden to investors and stuff in a more for-profit enterprise, we might not really be able to achieve that. And furthermore, everyone at FindingFive is a volunteer. We don't have anyone on this call or elsewhere who's compensated at all for their time other than with hopefully a good experience and something that they can learn from it. And that really does help us to drive our costs down and make the nonprofit thing possible.
But it also helps to diversify our perspectives, our priorities, and so on. Many times we've been a little bit focused in one direction. And we take on some new volunteers who do research in other domains. And it opens our eyes to new things that we can do. And this has really helped to make FindingFive a more powerful platform.
And so we've had over 25 volunteers working with us over these past four years. And that is a great diversity of perspectives, right? So for the rest of the talk today, I want to introduce the landscape of FindingFive in broad strokes, like how do you put together an experiment? How do you manage a lab? How do you get participants for your studies? Things like that.
And then we will move to specific questions in the Q&A, like can FindingFive do this thing, or can it do that thing? I want to make a study like this. How do I make that? And then of course, I've already kind of alluded to this idea, what if my experimental design cannot be implemented on FindingFive yet? We obviously often work with researchers to make it a reality. That is a very real thing. And it's something that we would be happy to talk about with you guys here today and really anything else that you want to ask coming out of this talk.
So let's go ahead and dive into the actual structure of how you build an experiment on FindingFive. And this is going to involve that study grammar that I alluded to earlier. The FindingFive study grammar is a modular system for building an online experiment. And it has four intuitive components that I really think are natural and accessible to most researchers in the behavioral sciences-- stimuli, responses, trials, and a procedure.
I think it's pretty straightforward. We're trying to build our actual grammar for designing and building and specifying a study; we want it to mirror the way that you actually design and construct an experiment itself. So-- oops-- how do I go back? Back. Sorry. It lagged on me.
So for stimuli, some examples of the kinds of stimuli that we offer: you can present text, audio, images, and video to participants. We do all of that. And we have versions of many of these stimuli that are tokenized. And this comes down to breaking up a stimulus into individual tokens that can be presented either with automatic pacing or self-paced by the participant, or, in the case of a tokenized image, to do things like creating customized GIFs for your experiment, things like this.
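(As a rough illustration of what tokenized, self-paced presentation amounts to, here is a minimal Python sketch. It is purely conceptual -- FindingFive handles this in the participant's browser for you -- and the function below is hypothetical, not part of any FindingFive API.)

```python
# Conceptual sketch of self-paced, tokenized presentation: a text stimulus is
# split into tokens, each token is shown one at a time, and the time spent on
# each token is recorded. Illustrative only.
import time

def self_paced_reading(sentence: str) -> list[tuple[str, float]]:
    """Present one token at a time; advance on Enter; record per-token times."""
    tokens = sentence.split()               # one token per word
    reading_times = []
    for token in tokens:
        start = time.perf_counter()
        input(f"{token}  (press Enter for the next word)")  # self-paced advance
        reading_times.append((token, time.perf_counter() - start))
    return reading_times

if __name__ == "__main__":
    for token, rt in self_paced_reading("The horse raced past the barn fell"):
        print(f"{token:>8s}  {rt:.3f} s")
```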
In terms of responses, these are the sorts of ways that participants can interact with your study. We have choice responses for things like an AFC task; rating responses, which can actually get quite complex and be used for a number of different things; free-response text; audio recording, where you can have participants record themselves with intention-- like, if you have the kind of study where you want them to record something, it's OK for it to be a little rehearsed-- but also background audio, where you can record them more passively and get a more naturalistic audio recording; key press responses; mouse position and mouse reset, which are used for mouse tracking; and actually the photo response, which allows participants to take a selfie-style photo, which has proven shockingly effective at deterring malintentioned participants. So that was one that had this kind of extra benefit we didn't foresee very well. And it's proven really effective.
And then we organize stimuli and responses into trials. And in the study grammar, we use trial templates to do this. The basic idea is that like trials are generated from a common template so that you don't needlessly have to specify 100 different 2AFC trials or something like that, right?
And these templates really define the common properties that a particular set of trials in your study share. So you make one template for this chunk of trials, which are all going to have something in common. You define that something in common, and then you plug in just the things that are different, the particular stimuli, for example, that you're going to be presenting on those trials.
And this leads to a lot less redundant text. It means less copy and pasting for you as a researcher. But this strategy, this really modular strategy with using a trial template, allows you to really create some cool, custom combinations of stimuli and responses. So we don't have to specify different trial types for you or give particular infrastructure for that. In most cases, it lets you do it yourself and makes it really flexible and adaptive for most research paradigms.
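(To make the template idea concrete, here is a minimal sketch of how shared properties plus per-trial fill-ins expand into concrete trials. The dictionary keys are illustrative placeholders, not the actual field names of the FindingFive study grammar.)

```python
# Hypothetical sketch of the trial-template idea: shared properties are
# defined once, and only the per-trial differences are plugged in.

TEMPLATE = {
    "type": "2AFC",                 # shared by every trial built from this template
    "response_keys": ["f", "j"],
    "max_duration_ms": 3000,
}

# Only what differs between trials needs to be listed.
TRIAL_FILLERS = [
    {"stimulus": "dog.png",  "choices": ["animal", "object"]},
    {"stimulus": "lamp.png", "choices": ["animal", "object"]},
    {"stimulus": "cat.png",  "choices": ["animal", "object"]},
]

def expand_trials(template: dict, fillers: list[dict]) -> list[dict]:
    """Generate one concrete trial per filler by merging it into the template."""
    return [{**template, **filler} for filler in fillers]

for trial in expand_trials(TEMPLATE, TRIAL_FILLERS):
    print(trial)
```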
Then, finally, of course, we have the procedure. So we have those trials organized into blocks. And the blocks are sequenced into a procedure. And I think that's pretty straightforward.
But within this, we can do some kind of powerful things. We have features for things like grouping-- doing participant grouping. Or another way to think of it is making different participant lists. There are also features in there for doing conditional branching, so you can make the procedure of the experiment dynamic based on how participants respond at an earlier stage in the experiment.
Things like this that we can do in powerful ways without you having to create several different versions of the same study. You just create the one study. And you get your four participant groups, each of which has two or three different options for how the study could evolve based on how they respond at a given point in the experiment, things like that. Oh, it went backwards on me. OK. There we go.
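(Here is a small, hypothetical sketch of those two ideas -- random assignment of participants to named groups, and a branch that depends on an earlier response. In FindingFive these are declared in the procedure part of the study grammar rather than coded by hand; the names below are made up for illustration.)

```python
# Conceptual sketch of participant grouping and conditional branching.
import random

# Each named group (or "list") gets its own sequence of blocks.
GROUPS = {
    "list_1": ["begin", "block_A", "block_B", "end"],
    "list_2": ["begin", "block_B", "block_A", "end"],
}

def assign_group(groups: dict) -> str:
    """Randomly assign an incoming participant to one of the groups."""
    return random.choice(sorted(groups))

def branch(screening_response: str) -> str:
    """Conditional branching: which block comes next depends on an earlier response."""
    return "bilingual_block" if screening_response == "yes" else "monolingual_block"

group = assign_group(GROUPS)
print("assigned to", group, "->", GROUPS[group])
print("branch after screening:", branch("yes"))
```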
So this was our approach to this customized modular grammar. So I think it's worth taking a minute to ask why we took this approach. And the first is that we think it is a lot more intuitive for researchers, right?
We tried to use experimental logic to the extent possible, instead of programming logic, in dictating how our grammar is structured and therefore how you need to think when you try to design a study in FindingFive. It's more adaptive this way. By having it be modular, it really helps us to flatten the learning curve for a lot of people, for starters, because you only need to learn those modules within the grammar that you're actually intending to use right now for this study. And that has proven really helpful for a lot of our researchers who maybe start-- when they're going to try a new platform like FindingFive, they want to start small. So they conceive of a simpler experiment that might be a little easier to manage.
They find they're able to do that really easily. They launch it. They like it. They move on to more advanced stuff. And they're just dipping a toe in the water, getting a little deeper and a little deeper as they go, instead of having to learn, say, JavaScript to be able to advance, right?
But this modular approach also really helps us to seamlessly develop new features without disrupting pre-existing ones. Because everything is modularized, when we introduce something new, it just becomes another module in the larger system. And it has really proven effective for us in our engineering processes.
Finally, it's powerful this way because we're basically saying here are the pieces of any behavioral experiment. Do with them what you will. And it really ends up leaving that combinatorial degree of freedom to you as the researcher.
So another thing that some people have brought up before is, why don't I just use JavaScript or jsPsych or something like that? Why would I need to learn this study grammar? One thing that it does do-- and I alluded to this with flattening the learning curve-- is that it can really reach a much more diverse pool of researchers, because coding know-how varies significantly among researcher populations. And this can, in particular, be an impediment for groups that might normally be disadvantaged in the sciences to begin with, right?
But it also helps you save time when prototyping a study because you think of it in experimental logic terms. You implement it on FindingFive in experimental logic terms. And say, like, if there's a new, say, type of stimulus you're going to work with for this new study, you don't have to go out and study the JavaScript tutorials online to try to figure out how to JavaScript code this thing you've never done before, right?
And most importantly, the nuts and bolts are just-- they're there for you, and you don't even have to think about them at all. And then at the end of the day, it is JavaScript under the hood. That is the main driving force behind the engine. And so if there's anything that you would do in JavaScript that we can't do, we would like to work with you to talk about it. Like, what can FindingFive do? How can we make that more accessible to other researchers, that functionality that you're using in your own experiments?
So that was a bit about how we build it, how we build studies, this modular grammar, and why we took that approach to it. Now I want to turn to kind of talking about working with collaborators, whether they be in a common lab or otherwise, because this is maybe one of the greatest things that FindingFive does, I think. FindingFive has tools to really help the entire lab use it from the inception.
And as researchers working in labs ourselves, we've really designed these features with the different lab structures that we've experienced in mind. So we have some very neat, built-in collaboration features that include things like multiple accounts viewing, editing, or launching a given study. So you can have a set of collaborators, give them the right permissions, and any of them could launch the study.
So you could design it and then have someone else that you're working with actually be in charge of launching the study, watching the recruitment, making sure that participants-- handling the participants, and doing all that while you move on to work on another study, say, for example. And as part of this, you can transfer ownership of a study from one collaborator to another. So kind of as an illustration of this, one structure that's very common, we've found, is for labs to have everyone involved have their own FindingFive account, but there to also be a centralized lab account.
And so you might, for example, have a PI who is conceptualizing a new study idea. And they're building it on their own private account without distracting everyone else in the lab. And then they might transfer that study once they think, like, I have something good here. I'm going to transfer that to the lab account. And now everyone can see it, and they can all give their two cents.
And then you might have a particular RA in the lab who is really good at working with participants and managing that. And so they might be the one to launch the study, right? And all of this is seamless. And it's so easy to do on the platform.
The other thing that is really awesome about our built-in collaboration features is that if you use that structure, payment can be managed very easily by the centralized lab account. If you have it set up right, where the lab account owns the study but the RA launches it, then when the RA launches that study, any fees generated go directly to the lab account. So your RA doesn't have to, say, pay participants and then get reimbursed, and they don't have to send an invoice to you or something like that. It's just done automatically by FindingFive through the lab account, where you could have, say, a credit card on file that pays for everything.
So it's worth noting that we are continuously working on new features and improvements to make it easier to manage this entire research workflow. And we have a few big things on the horizon that we want to point out. One is that we are hoping to have real-time collaboration and chatting through virtual lab meets. I think this collaboration angle is very important to us, and this is something we're very excited about.
And also, we want to implement a tool to actually visualize collected data. I think a lot of researchers have wanted to be able to get some initial visualization of their data before they, say, move on to the next stage where they're actually going to analyze it. And then of course, anyone who sees something that's missing and wants to see it in place, reach out to us. We love to work on features that come from actual researchers and to discuss how we can make this good for everybody and how we can make it good for you. So we would love that.
The last little piece of the puzzle that I want to try to talk about in terms of the real, core functionality is running participants. And-- why is it going backwards? There we go. And so we work well with both institutional and public participant pools because essentially all you really need to do is give a participant a link. So that makes it very easy to adapt it to do whatever strategy you want.
And we also have limited, but smart demographic features-- filters, I should say, to help you get the right participants for your study. And as an example of what we mean by smart demographic features, rather than having you specify, say, I want participants who were born between this day and that day, we actually calculate age on a rolling basis so that you say the thing that you actually care about, which is like, OK, for my study, I need participants between the age of 45 and 65. And I ultimately need-- I don't know-- 200 participants. I'm going to be running this study for a month, let's say.
If somebody's birthday falls on the second day of the month, you don't want to just have them excluded. That person should be able to participate. I just turned 45. I'm eligible, right? So that is how we calculate age, so that what you're getting isn't people born from this day to this day. You're getting people who, at the time they participated, were within the age range that you were seeking, right?
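(A minimal sketch of that rolling-age idea: eligibility is computed from the participant's age on the day they take part, not from a fixed birth-date window chosen when the study was launched. This is just to illustrate the calculation; on FindingFive the filter is a study setting, not code you write.)

```python
# Rolling age eligibility: check age at participation time, not at launch time.
from datetime import date

def age_on(birthdate: date, on_day: date) -> int:
    """Whole years of age on a given day."""
    had_birthday = (on_day.month, on_day.day) >= (birthdate.month, birthdate.day)
    return on_day.year - birthdate.year - (0 if had_birthday else 1)

def eligible(birthdate: date, participation_day: date, lo: int = 45, hi: int = 65) -> bool:
    return lo <= age_on(birthdate, participation_day) <= hi

# Someone who turns 45 on the second day of a month-long run is excluded on
# day one but becomes eligible the moment their birthday passes.
bday = date(1978, 6, 2)
print(eligible(bday, date(2023, 6, 1)))   # False: still 44
print(eligible(bday, date(2023, 6, 2)))   # True: just turned 45
```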
Participant compensation can be handled in two ways. First of all, you can compensate participants with course credits. And this just requires that you take advantage of a built-in credit management system that we have. So essentially, to do this, you have researchers determine how many credits an experiment is worth. The students, the participants, they come on to FindingFive. They participate in the study. They earn those credits. And they decide where those credits go.
So for example, they're taking these three courses: LIN 110, PSY 101, and COG 201. They participate in your study, and they get-- I don't know-- three points of extra credit or however it works. And they decide, I actually want all three of those to go to linguistics 110 because I really suck at that class or whatever. That gives them the power to decide that.
And then in this setup, a department admin gets reports of when credits were redeemed and where they are and can use that to cross check and make sure that the credits are allocated appropriately. In addition, of course, to course credits, there's monetary payment. And we do this through three means, right?
When a researcher designs the study, you specify the amount that study is worth. And that's all you have to do. But the participants get to choose how they get that money. They can get it through a Visa-- what's the term for it? [INAUDIBLE] There's a term.
TING QIAN: It's a virtual debit card.
NOAH NELSON: Yeah, virtual debit card, that's what I was looking for. So through a Visa virtual debit card, through a PayPal account if they have one, or they can actually redeem it through gift cards, right? And then we partner with another organization that offers so many different options for gift cards. And that's actually proved to be a really popular option, because when you go the PayPal route, for example, PayPal takes a tiny cut of every transaction.
The participants are still going to get the amount you specified. But you would get charged a tiny bit. With the gift cards, that goes away, and it's just really seamless. Participants seem to be taking to it really well. We're actually really happy with that.
The one caveat to all of this is that having our public pool of participants be really robust is still a work in progress, right? And that's us laying it out there for you, right? As of earlier this year, we have 33,000 registered participants. This seems to translate to, on average, about one week to complete a 30-person session of a study. We'd really like to see those numbers go up and down, respectively: a larger and more robust participant pool, and a shorter time to complete a study.
But the fact of the matter is that our participant engagement features are relatively new. And they are still growing and developing. And so we think we've figured out a really great system. And we look forward to having more and more researchers helping us to bring in more and more participants to that public pool.
So before I wrap this section up and move on to the questions, I just want to take a minute to talk a little bit about how you would get support if you were working with FindingFive and the different ways that we supply support not just to the individuals using the platform, but to the research community as a whole. So first and foremost, we have a lot of tutorials and documentation. These materials are just readily available for researchers new to FindingFive or experienced users who need to bone up on a particular skill they haven't used yet.
Our study grammar documentation is a really great reference book. It contains very complete details about that study grammar that we talked about earlier. It's rife with examples. So it's not just a description; you can see what it looks like. You can copy and paste that code into your own study and tweak it from there, knowing that it's going to work, right?
And tutorials that we have, they really demonstrate things like particular experimental paradigms, right? So we have a tutorial on doing, say, a priming study, for example. And we also have tutorials for the more lab and research management side of things as well. Like, how do I add a collaborator? How do I transfer ownership of my study to that collaborator? Things like that.
And also, because not everyone is a fan of reading giant walls of text, we have a YouTube channel as well. The channel is very new, but we're pretty excited about it. Right now, there's a crash course video up there that's very short, that you could watch, maybe even follow along with, and really kind of dip a toe in the water and have a little fun with FindingFive. And we're really excited to put out a lot more videos on that channel, because I think it's going to be a nice medium for a lot of people to get the help that they want.
But in addition to this broader documentation, tutorials, and videos, we offer individual assistance as well. We have this team of volunteers who have PhDs in social and cognitive psychology, linguistics, and other behavioral sciences. We are, or can be, very much colleagues for you. We're knowledgeable about FindingFive and research. So we really help to bridge that gap and help you along.
And we support researchers through an email discussion forum, or email discussion forums. And we typically respond within 24 hours on weekdays. We're pretty quick. We're pretty good about this. And we're able to do that because the people responding speak your language.
So when you run into a problem, you don't have to try to think, hmm, how do I explain to someone who doesn't know anything about running this kind of study what the problem is I'm running into? You can actually just describe it like you would to a colleague, and we're right there with you.
And of course, we're happy to hear from researchers on a huge variety of topics. But some of the things that we often do hear from researchers about are hiccups, glitches, various bugs in the platform that you encounter. We obviously love to hear about those because we want to eliminate them right away, right?
The most common thing of all, I think, is ideas for new features. It has been a lot of fun to hash out new features. When we come up with something that really is exciting and we work with a researcher on it, that's always been my favorite thing to do.
And then brainstorming about design issues is also really common. Like, hey, I am doing this study. I want to do x, y, and z with it. How do I make that a reality on FindingFive? And we work together and determine like, oh, actually there's this combination of tools that FindingFive has that will get you just what you need, or that'll get you something maybe even a little better if we rethink how we're approaching this design. And that's always fun to do, too, because it's fun to play with other people's toys, right?
So when researchers come to us and say, help me with my study, I'm always like, yeah! [? It's a lot. ?] The other way in which we support the broader community is that we like to support instructors who want to use FindingFive as a pedagogical tool. We've found teachers using this not just for experimental design, but for experimental design and research methods, or even for teaching some basic coding if the students are already a little familiar with experimental design. And whether it's in an undergraduate context or a graduate course context, we've actually had experience working with various instructors from the US and Germany on this. And we will, of course, provide free one-month licenses for these kinds of needs.
This is all part of our nonprofit mission, so we're really on board with that. And so far we've had very positive feedback from it. Students seem to get a really huge sense of accomplishment from seeing their research ideas turn into actual experiments, especially early-stage students who don't already have some of this experience. And instructors who have done it have themselves seemed very pleasantly surprised by how well it has worked. So that's been really exciting.
And finally, we also like to support the broader community by supporting academic conferences through sponsorship, and, while we're there, chatting with researchers about what they're doing and how we might be able to help. So far, we've been participating in the Cognitive Science conference for many years. We participated in Psychonomics, the LSA conference, and Human Sentence Processing just this past March. And we're looking forward to doing more.
And if there's a conference that you know of that you think would be a good fit for us that you don't see on this list, we're interested to hear about it, right? Because we just want to not only get our name out there, but have conversations with researchers and learn what we can from them, from all sorts of domains. So you'll have all of that.
Now that we've gotten everything out of the way in terms of what we really hope to talk about, I think it's worth talking a little bit about pricing because it's got to be on people's minds. So the cost of using FindingFive is extraordinarily low. If you want to just build and test experiments so you can decide whether FindingFive is a good fit for you, that is 100% free, right?
So you can jump in. You can use the full study grammar. We don't withhold any features. You can create studies, play with them, do all that stuff, preview your study, see what it looks like, print out the results of your preview if you want to run yourself or run a couple people in your lab, look at what those results look like, and say, is FindingFive for me? Do I like this experience? Do I think it could do what I wanted?
If the answer's yes, the only time that you would actually have to pay any money is if you start actually running participants. And so we have a per-participant, per-session fee. So this means every instance of some participant going through the experiment, it's either $0.50 or less, depending on the plan that you sign up for, right?
There's also these plans, these subscriptions that you can do. So there's an optional per-month subscription fee. So we have the free plan. We have two other tiers of subscription that you can sign up for.
And one of the ways this can be handled is that an entire lab can subscribe under this system where the PI signs up for a subscription and then adds plus-ones. So you can, for instance, have your own account. You have your lab. You add plus-ones for, say, only the RAs that really need to get in there in the nitty-gritty on your FindingFive study. And then when one of them leaves, you take them off as a plus-one. And someone else comes into your lab, you can add them on, and kind of have this rolling lab participation. And this lets you centralize the billing. And we offer discounts for plus-ones and things like that to try to help make these costs more manageable for you.
And then the last thing that can possibly cost you money on FindingFive is the per-participant payment processing fee. So this is what I was getting at with, like, let's say you're using PayPal or the Visa card or whatever. If you're offering cash rewards to participants, we are going to get charged for the transfer of funds. And we pass on that cost to you as the researcher.
So in the end, the costs can be very, very low, right? Because you can get up to the point where I now am ready to run participants without paying a single cent. And at that point, you can decide whether it's more cost effective for you to just pay the $0.50 per participant to run them in this study or if you like what you're seeing enough and you want to sign up for a subscription. And you do a subscription payment plan, and then that's the more cost-effective way for you to go. That's an option too.
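(As a back-of-the-envelope way to think about that choice, here is a small sketch of the break-even arithmetic. The $0.50 per-participant figure comes from the talk; the monthly subscription price and the discounted per-participant rate below are made-up placeholders -- check FindingFive's pricing page for the real numbers.)

```python
# Rough break-even sketch: pay-as-you-go vs. a hypothetical subscription.

FREE_PLAN_RATE = 0.50        # per participant session (figure from the talk)
SUBSCRIPTION_MONTHLY = 20.0  # hypothetical placeholder
SUBSCRIBED_RATE = 0.25       # hypothetical placeholder

def monthly_cost(n_participants: int, plan: str) -> float:
    """Total monthly cost for running n participant sessions on a given plan."""
    if plan == "free":
        return n_participants * FREE_PLAN_RATE
    return SUBSCRIPTION_MONTHLY + n_participants * SUBSCRIBED_RATE

for n in (20, 80, 200):
    print(f"{n:>3d} participants: free plan ${monthly_cost(n, 'free'):.2f}"
          f"  vs  subscription ${monthly_cost(n, 'subscribed'):.2f}")
```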
And I believe that is all that I have. I want to say thank you for listening to me ramble all this time. And I really hope some of you have some questions for us that we can answer. Again I'm here. Ting is here. We have other people on the team here as well. So depending on the question, what we have to say, different people might answer. But thank you for your time. Yeah.
MODERATOR: Thanks a lot for this super helpful presentation, all of you. Incredible work that you guys are doing. Yeah, I think if somebody has questions, just unmute and ask whatever you want to ask. Otherwise I have a couple questions.
I will start with regarding the participants. How diverse is that pool currently? Some people in my lab study language from speakers of many different language families and ages and such. So I was wondering [? about that? ?]
TING QIAN: Yeah, maybe I can take that question. Noah, actually, while I'm talking, do you mind just opening a browser window and showing everyone what the FindingFive website actually looks like?
NOAH NELSON: Oh, right! That's a good idea.
TING QIAN: That'll give everybody a little bit of a feel, right? So for the current pool of participants that we have in the public pool, most of them come from institutional pools. So when researchers run their studies with their own in-house [? subject ?] pool, those students come to FindingFive and sign up for a participant account to do the study. Therefore they become part of our public pool, right?
And so I think it's safe to say that a majority, over half, of the participants that we have are relatively young adults, right? Of course, we have been running this for a few years, and some of them are no longer in college; they have started working and all that kind of thing. But at the same time, we also have participants from alternative platforms. For example, a lot of Prolific participants do have their own FindingFive accounts as well, right?
So when they are not getting opportunities to participate in Prolific studies, which is rare, I have to say, they might occasionally check out FindingFive studies. We don't have published stats on the overall demographics of our participants, so unfortunately I cannot offer a detailed answer there. But on the flip side of that, you can always implement those demographic filters to make sure that anyone who does participate in your study falls under the criteria that you want.
NOAH NELSON: Which includes things like language background, age, where they are.
TING QIAN: What language they speak, age, these kind of things.
NOAH NELSON: Gender identity.
MODERATOR: Oh, thank you.
NOAH NELSON: Yeah.
AUDIENCE: I also have a question about once you publish an experiment, how easy or hard is it to change a few things about that experiment and rerun it with a new pool of participants?
TING QIAN: That's very easy. Noah, maybe you can [? log ?] into the researcher FindingFive account and go to the Session dashboard to show that. Do you want to talk about it? Because-- I'm sorry-- I'm having a sore throat today.
NOAH NELSON: And that's fine. Yeah, so let's see. Are any of these-- we haven't-- because this is a dummy account we use for demo purposes and things like that. But let's say one of these sessions was done. It would appear up here in the recently launched area.
TING QIAN: Actually, the finished sessions, you can use any one of them.
NOAH NELSON: Yeah. So I don't know. We'll just look at this crash course demo, for example. Oh, wait, did I click the wrong thing?
TING QIAN: Click on the name. Click on the bolded name again. Anywhere. Like, summer 2020, you can just click on that. Yeah. There we go. So it's a toggle between the internal name [? facing ?] the researchers and the name facing the participants.
NOAH NELSON: Right.
TING QIAN: Yeah.
NOAH NELSON: So this is-- yeah. Right.
TING QIAN: [INAUDIBLE]
NOAH NELSON: We're just toggling through their names. Got it.
TING QIAN: Yeah.
NOAH NELSON: Sorry. So from here, you can download your data. You can [? view ?] different details or manage participants. But you can also-- like, right now, it's [? in progress. ?] But we can-- yeah, so I guess Ting and I were having different ideas about your question and how to answer it.
So one way to do it is the study itself you can edit really easily because what we do is we have a centralized study section. So this is where you have your different studies. You actually write out the code in our study grammar.
And then that study is, like-- we think of it as something you might want to run multiple sessions of, the same study. So we have this larger study that you build, and then you go to Launch Sessions. And from there, if all you want to do is launch a new session where you change some tweak, not in the study itself but in the details of the participants that you're recruiting-- say, if one session is a certain demographic of participant and the other is a different demographic, or if you're trying to do a study where you're comparing, say, the public participant pool to the in-house participant pool, or something like that, or you want a dedicated session only for giving out course credits and another one for doing payment-- all those kinds of changes you could achieve just by cloning a session, right?
And you can clone either the experimental setup and session settings or just the session settings. But if what you wanted to change was part of the core study itself, you'd edit the study. And then you go to launch your new session that way, right?
TING QIAN: Yeah. And--
NOAH NELSON: Ting, do you think it'd be better if I showcased one of these?
TING QIAN: Yeah. Just click on it. I think one of the features that we gave a little bit of thought to is that when you clone a session, we pull all the parameters from the previous sessions for you. But we still have you go through what we call a new session wizard to make sure you are confirming each of those steps, right? So it's not blindly launching a new session with whatever settings you may or may not even remember from the past ones, right? And you can actually tweak the settings here. You don't have to commit to all of these settings from the history.
AUDIENCE: This is really cool. Just as a follow-up, does FindingFive keep track of a tree history of all the versions of the experiments that have been run?
TING QIAN: Yes. So that will be easier to see within a single study. So--
NOAH NELSON: What's a good one to use for this, the crash course tutorial maybe?
TING QIAN: Yeah, maybe. Let's see.
NOAH NELSON: We'll see.
TING QIAN: Click on Sessions.
NOAH NELSON: So we can look at active and finished sessions.
TING QIAN: Ah, this is a new one. Yeah, we probably need older studies or something like that.
NOAH NELSON: Yeah. I can do-- can I just invert it? Let's try [INAUDIBLE] because I think some of these are-- yeah, here we go, four years ago, the [INAUDIBLE] tutorial. Active and finished.
TING QIAN: Aha.
NOAH NELSON: So we only did one for this one. But this would just be your log. So it's the same view that we were looking at before, but filtered for this particular study, right? Yeah. And the ones that are active currently appear at the top. The ones that have finished in the past are further back; you can look back at them that way. But those are going to be the record of the session itself. Like, if what you're looking for is different versions of the code based on the different studies, that's something that you're going to want to do by basically copying your study each time, so that you can have a record sitting in this studies list, where you might have version 1 and version 2 and version 3. Yeah.
AUDIENCE: Yeah, that is super nice. I feel like that's one of the things that a lot of the current platforms are missing, just like the entire history and being able to go back to that. So cool. Thank you.
NOAH NELSON: Yeah, because you can-- I'm trying to-- oh, no, sorry. Wrong view. I'm doing this from the wrong view. I keep doing that. You can from here just duplicate an existing study just like that. And it'll copy it exactly, and then you work on the changes that you wanted to make for this new version of your study. Yeah.
TING QIAN: We can copy it. [? Let's ?] [? have ?] people see that in live action.
NOAH NELSON: Yeah, sure.
TING QIAN: Right. So it immediately copies the study and takes you to that copy. You can make changes to it.
NOAH NELSON: And then you can just edit the name.
TING QIAN: And then in fact, if we look at the database that we keep, the exact code configuration-- the trial templates, procedure, stimuli, and responses-- is saved for each of the sessions you run, right? So if you ever need to look at the specific code of a past session, we have kind of thought about making that possible, but it's such a niche feature. And it's likely going to be confusing to a lot of researchers, right?
NOAH NELSON: So we can do it, but it doesn't have a user interface for you to do it yourself. You'd have to reach out to us.
TING QIAN: Yeah, absolutely.
NOAH NELSON: But we're happy to do it. It's easy.
AUDIENCE: I have a question about, I guess, the capabilities of the study grammar. So one use case is sometimes you might want to, for example, collect norms on a large set of materials that's like far too large to pass all of them to any single participant. So you might have, like, 10,000 things, and you want, like, 10 examples for each of them to split across several hundred people.
And typically what this involves, in JavaScript, is tracking complex sampling without replacement and having [? log ?] files for when participants open a certain run. But then they might not complete it, and things like that. Are there ways to kind of automate those types of scenarios, that kind of take all of that into account, from one unified back end?
NOAH NELSON: Yes. So I'm glad you brought that up. So I alluded to this participant grouping feature that we have. This is one-- I think probably the most effective way to achieve this, which would be that when you're defining your study, you might categorize your stimuli however you want to. And you might break them up into different templates to say these are-- well, there's different ways you could do it, right?
One of them is to break it up explicitly. I want these to go to some participants, and these to go to some participants. Another way is to have it done randomly or pseudo randomly. So there's different functionality for that part that we can get into.
But the core of what you're getting at would be handled through the procedure, where we can do participant grouping for you, where you can, say, make different options through blocks, where the different groups of participants each get a different block of stimuli. And you organize it to say, hey, FindingFive, I want you to assign participants randomly to one of these different groups. And you decide how many groups there are, right?
And we will start divvying them up A, B, C, D, E, A, B, C, D, E, and so on to fill that for you in just the one study. But in terms of doing things like the sampling and stuff like that, that's where the trial template part comes in. And you can define a set of trials. And you can have it sample x many of those trials in this block, rather than giving all of them. And there's a little bit of interface between the trial template and block layer that can handle some of that too. And I don't know, Ting, if you want to elaborate on the nitty-gritty details of how you would do that.
TING QIAN: Yeah. So Noah, if you could go back to the list of the studies. So I think there is a study that does have participant grouping. It's not exactly doing what Ben just asked. Let me see. It should be called Audio something something.
NOAH NELSON: Audio? I can do A to Z here.
TING QIAN: [? Audio ?] Simultaneous Audio Recording Demo study. Yeah, let's check out that one, I guess. Can we look at a procedure to see if-- no, not this one.
NOAH NELSON: No, not this one.
TING QIAN: But the other audio one? So now we open studies in new tab so you can [? close ?] [? one ?] [? of ?] [? them. ?] Let's see. Audio Barrier Simultaneous?
NOAH NELSON: That's the one I just looked at.
TING QIAN: Barrier for Tutorials, Conditional Branching-- and can we look at the second--
NOAH NELSON: Oh, sorry, which one?
TING QIAN: No, no, the second page, another page.
NOAH NELSON: Oh, participant grouping demo, here we go.
TING QIAN: Yeah, there we go. That's easy.
[INTERPOSING VOICES]
TING QIAN: There we go, yeah.
NOAH NELSON: So this is-- the block sequence is a portion of the procedure that you define, basically just a list of the blocks in order that you want to display to participants. But there's some fun stuff you can do with it. So like in this case, everyone gets the block called Begin. And everyone gets the block called End. But what happens in between is divided up into different groups or lists, right?
So we tried to give them clearly transparent names here. So there's between-subjects groups, list 1 and 2. And within-subjects lists, 1-2 and 2-1, right? And these are different blocks that we've defined up here in terms of what trials they get and when. And what this does is it creates four different groups with these names.
You can call them whatever you want. And each group or list gets its own procedure of blocks within this chunk. So, say, if you broke your stimuli up across different blocks, then you would just define that many different blocks for the different chunks of stimuli that you wanted. You define however many groups you want and say which block that group is going to get.
And then when participants come in, we handle this random assignment of the participants to those groups. Does that make sense? And does it actually address the question? Because there was an element of not randomization, but--
AUDIENCE: Yeah, no, this is helpful. I guess that partially answers the question, and this is a good way to do it. I was asking also if there are ways to, for example-- this is probably a big ask; this is usually stuff that gets done by the user-- but things like setting constraints, like, oh, I'd like to [INAUDIBLE] randomly sample, but make sure that I have at least some notion of crossing of certain variables that I might want to analyze later, so that each participant gets at least some subset of some things. But then globally, even though each participant is only seeing, let's say, 1% of my total stimuli, globally I'm filling that space in a way that's coherent and maintaining some type of covariance structure.
I think this is somewhat of a complex use case. I'm only asking because I'm currently working on something like this. But it's been difficult to set up. Yeah.
TING QIAN: Yeah. So I think that does fall on the responsibility of the researcher, right? So you can combine the trial templates and the blocks. The way that we have designed it is that at the trial template level, you can randomize the stimuli. At the block level, you can randomize the trial templates.
So there are different randomization methods, simply with replacement or without replacement. And there's some counterbalancing that you can do at both the block and the trial level to make sure that-- to achieve almost exactly, I'm going to say, right-- almost exactly what you have in mind, yeah.
NOAH NELSON: Right. Yeah.
AUDIENCE: Well, yeah, this is helpful.
NOAH NELSON: It's going to be a combination of these features. And the other alternative that many people go for is to do it on their own, to predetermine how they want it done. That's another valid approach, right? If our particular sampling methods and pseudo randomization methods don't quite fit what you're trying to achieve, then you do that part, and then you define which trials go in which blocks. And we'll do the participant grouping part. We'll do the creation of the lists for you.
AUDIENCE: Yep, makes sense. Thanks.
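(A hypothetical sketch of the division of labor just described: the researcher pre-builds the lists so that each item in a large pool is rated a target number of times while each participant sees only a small subset, and the platform's participant grouping then assigns each incoming participant to one of those lists. The list-building logic below is illustrative only, not anything FindingFive provides.)

```python
# Researcher-side list construction for a large norming study: split a big
# stimulus pool into lists so every item appears in `ratings_per_item`
# different lists, with each participant seeing only one list.
import random

def build_lists(items: list, items_per_list: int, ratings_per_item: int) -> list[list]:
    """Shuffle and chunk the pool; repeat the whole cycle until every item
    appears in `ratings_per_item` different lists."""
    lists = []
    for _ in range(ratings_per_item):
        pool = items[:]
        random.shuffle(pool)
        lists += [pool[i:i + items_per_list] for i in range(0, len(pool), items_per_list)]
    return lists

stimuli = [f"item_{i:05d}" for i in range(10_000)]
lists = build_lists(stimuli, items_per_list=100, ratings_per_item=10)
print(len(lists), "lists of", len(lists[0]), "items each")   # 1000 lists of 100 items
```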
AUDIENCE: I have a question. Thanks so much for the talk. That was really, really interesting. I love hearing about this. I think it's a really great tool. And in particular, you were talking about the balance between the extensibility of being able to code your own experimental logic with something like jsPsych, and then having this grammar that has a lower barrier to entry for people. And I'm just wondering, let's say you do want to have a bit more fine-grained control over some of the things. Is it still possible to use the FindingFive platform for some of these other great features that you talk about-- the full pipeline, everything being streamlined-- but then plugging in your own experiment that you made in something like jsPsych?
Or is it like-- I also know that you want it to be a full end-to-end pipeline. And I can understand why you might not necessarily want people to be doing that. So yeah, I just-- I don't know. How, I guess, compatible is it with wanting to just go a bit beyond the capability of the experimental grammar?
NOAH NELSON: It's a great question. Right now, it is not, right? We have no way for you to bypass our study grammar to plug in your own JavaScript and interface with us, right? We have nothing of the sort right now. But we have talked about the possibility someday of doing something like that, of having some version of what you produce with us that you can tweak further.
We've explored that possibility. We just haven't landed on a way to do it that makes sense to us yet. And we've also had a lot of other things that we've been excited about and working on that have pushed it down the pipeline. But it's a conversation we've had, because I think there are a lot of people who do have the know-how, who do have the ability. And people like that want to be able to control what they can. But, yeah, I mean, I don't know if you have anything you want to add to that, Ting, but it's--
TING QIAN: So Maho, maybe you can talk a little bit about the ticket with the grad student from the University of Florida that we just helped with, right? He wanted to do some kind of experimental paradigm that he thought was not possible with FindingFive. But we managed to find a way to help him, right? Do you want to elaborate on that a little bit here?
MAHO TAKAHASHI: Yeah. I think-- yeah, sorry. I actually don't really have much to add to that. As Noah and Ting have already said many times, we really appreciate whenever a researcher comes to us with a question or an idea about the kind of study they want to implement. And if it is something FindingFive is not capable of handling just yet, we recently established a pipeline that converts feedback from researchers into actionable insights, so that we can implement that feedback as soon as possible and report the updates back to the researcher.
So, yeah, I would say just reach out to support. And we would be happy to work with you and see if it's something that we can help you implement right away. And if not, we will definitely keep you posted once it becomes possible to do what you want to do on FindingFive.
TING QIAN: Yeah. Thank you, Maho. I just want to add the perspective on our side, especially on my side as the head of the organization. There are always legal liability concerns on my part, right? So if we did allow researchers to insert whatever JavaScript code they want, I'm sure 99.9% of researchers just want to collect data. But there's, one, the chance that the code may act in ways that the researchers do not intend it to act, right? And two, there's maybe the 0.1% of people who pretend to be researchers to do something malicious on FindingFive. And that would be a huge-ass-- sorry for the language-- legal trouble for us as a tiny nonprofit organization, right?
So part of the reason why we don't allow customized code, especially free-form JavaScript, is due to security and legal compliance liability concerns, right? But as Noah said, we are trying to find ways to accommodate-- to enable, I should say, to empower-- researchers who do need advanced features, right? So when that kind of case comes up, I think our current default strategy is: please talk with us, and we would love to let you be part of the FindingFive grammar development team and code your own feature, right? And we would be happy to have that feature made available to all the researchers who might use FindingFive. So that's our default strategy. And we think by doing that, we will be able to grow our platform's features as much as we can, and benefit--
NOAH NELSON: It also helps us to vet the code, right? And so we're involved in the process there.
TING QIAN: Yeah.
AUDIENCE: Yeah, that makes complete sense. I completely understand the rationale for that. And I do think it has a lot of benefits for security and for accessibility to researchers without as much coding background. I guess I just have a specific question about whether something is feasible or not. And maybe you could tell me if it's possible.
But let's say someone goes through the experiment and finishes a trial, and you want to give some feedback. I assume there's probably some way to display feedback on the screen, maybe whether they were correct or incorrect. But is it possible to do something a bit more advanced than that? So let's say someone types something in, and then you want to assign a score to that with some kind of scoring function. It sounds like maybe you don't want to allow people to execute arbitrary code on the platform. But is there a way, in the feedback, to do something more than just check whether the response equals the right answer or not? Is it possible to apply some function to the previous response and then display the result to the user as feedback?
TING QIAN: So a customized function, no.
NOAH NELSON: Right.
TING QIAN: But we do have a variety of ways of providing feedback. We have a thing called a follow-up response, which means that on the same trial, depending on what the participant selected as an answer to a previous response, you can show another, different response. For example, on a rating scale from 1 to 5: how did you like the previous picture?
Then if they said they didn't like it, you can capture that level, and in the second response ask, oh, can you tell me why you said it was only a 2, right? What's your reason here? So that kind of thing can be done. You can also do conditional branching, which involves a calculation-- so that may be close, but still not quite as flexible as a customized function, right?
So you can calculate, for example, accuracy across a block of trials on a particular response, or on one or two, or actually any number of responses. Then you can say, if the accuracy reaches 75%, then on the next block, you are going to see this. If it's below 75%, you are either going to repeat this block, or you're going to see something else, right? So these general features are possible. But customized functions, not yet.
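As a rough illustration of the follow-up response and accuracy-based branching Ting describes, the sketch below pairs a rating response with a conditional follow-up, and repeats a practice block until accuracy reaches 75%. Keywords such as show_if, measure, and at_least are hypothetical placeholders, not confirmed FindingFive grammar; the actual syntax lives in the platform's documentation.

```typescript
// Illustrative sketch only -- these keywords are placeholders, not confirmed
// FindingFive study grammar.

// A rating response plus a follow-up shown only when the rating is low.
const responses = {
  picture_rating: {
    type: "rating",
    scale: [1, 2, 3, 4, 5],
    prompt: "How did you like the previous picture?",
  },
  low_rating_reason: {
    type: "text",
    prompt: "Can you tell us why you said it was only a 2?",
    // Follow-up: only shown if the rating on the same trial was 1 or 2.
    show_if: { response: "picture_rating", is_one_of: [1, 2] },
  },
};

// Conditional branching: compute accuracy over a block and choose what comes next.
const procedure = {
  practice_block: { trial_templates: ["practice_trials"] },
  branch: {
    condition: { measure: "accuracy", block: "practice_block", at_least: 0.75 },
    if_true: "main_block", // reached 75%: continue to the main block
    if_false: "practice_block", // below 75%: repeat the practice block
  },
  main_block: { trial_templates: ["main_trials"] },
};

console.log(JSON.stringify({ responses, procedure }, null, 2));
```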
NOAH NELSON: Right. Yeah, customized feedback is-- I don't see a good way to do it without opening up the code, like we were just talking about.
TING QIAN: Yeah.
NOAH NELSON: Yeah. But there are some things-- and of course, this is one of those cases where it rings true for other scenarios, where we've had researchers say, hey, I want to do something like this. And then we dig a little deeper, say, like, OK, so what's your study? What is the nature of the feedback that you want to give to them?
And we talk it through with them, and we determine is there some component of it that's generalizable or that we can actually determine on our end that would be generalizable to other researchers as well? And if that's the case, then we're happy to work with you to make it possible, right? Now, it's not always what everyone wants to hear, though, because obviously it takes us time to do that, right? And you might want to do this study tomorrow.
But it's great when it does happen because we get better. But we also establish these relationships with these researchers. People who do that are more likely to want to test new features when they're coming out with us and work together with us. And that lets them have a voice in how that feature unfolds on some level. Hey, this didn't work for me. Can we do something different? Yeah.
AUDIENCE: Yeah. Thanks.
AUDIENCE: Can I hop in? I'm Corey. This is an absolutely fantastic resource. It's really incredible what you've managed to achieve with an all-volunteer team. And I guess that is the crux of my question, in terms of the long-term viability of experimental programs built up around this, when everybody running it has day jobs. I think of the scenario in the machine learning world with the Theano library, which was a complete open-source labor of love, a passion project by a bunch of researchers, that ended up kind of dying in the face of competition from industry competitors. And then everybody who had written code using it was kind of in trouble. What, I guess, would you say in response to concerns like that, which might serve as a barrier to jumping in feet first and setting up experiments in this paradigm?
NOAH NELSON: Hmm.
TING QIAN: Noah, you want to take a shot first? Then I can add?
NOAH NELSON: Yeah. I think Ting's response will be more on the nose than anything I could say. But I don't know. So I've been involved with FindingFive to various degrees, depending on the ebb and flow of my life. But we have kept it alive through all of that. And we've been really blessed with a volunteer team that continues to grow.
But these are people who seem to be really motivated to help us. And we are motivated to help them to have a reciprocal relationship. And that helps us keep things going because we feel like we have something-- maybe it's educational or experience based-- that we can offer to our volunteers. And because it's been successful so far, we've had volunteers who've gone on, as Ting mentioned, into the professional spheres and can cite their experience at FindingFive as helping to give them the work experience they needed to succeed.
And then we have those stories to tell new people who are interested in volunteering for us. That helps us-- although it is a rolling transition of volunteers, it helps us to keep enough of them to do what we need to do. And the other side of it is that we're not going to collapse because of a lack of money on the same level as someone who has to pay employees, right? So we've been very blessed in that way.
There's always a risk. You never know what the future will hold. But it keeps our costs down enough that when we charge these minimal participation fees and session fees, it's enough to keep the servers running and to keep the lights on, to keep our team going, and to give us enough surplus to do things like sponsor conferences, send people to conferences to talk on our behalf, and do those sorts of things. But then Ting, I didn't want to speak for you. But you--
TING QIAN: Yeah. Thank you, Noah. And Corey, that's a great question, so thank you for asking it. I've been waiting for somebody to ask it. I just want to add from the financial side of things. There was a phase in our process when we were kind of curious about the possibility of VC funding, these kinds of things-- whether a nonprofit is the right approach or not, those kinds of issues.
And at the end of the day, I think there was a quite striking difference between what FindingFive wants to do and what some of those machine learning open-source projects wanted to achieve. And one thing-- it's going to sound funny if you think about it-- is that FindingFive, by its nature, is a niche product. It does not have generalizable appeal to the overall market, right? So outside of the handful of behavioral researchers, market survey companies are not going to be interested in FindingFive, because doing simple surveys is easy enough, right?
So that creates an issue for us, which is that during our interactions with private funders and so on, none of them were interested in funding-- I'm sorry, my dog is crazy-- in funding FindingFive as is, right? They are always saying, OK, what you have created is absolutely awesome. But when can you start doing focus-group market research, right?
And the moment we tell them, no, we will not ever go in that direction, they lose interest. And that, in disguise, I think, is a blessing for us, because it almost guarantees that there is no private funding going into a direct competitor to us, right? So we get to survive in the bubble that is left alone by the market, by capitalism.
And we are very happy to be doing it in the nonprofit way like this, right? So we would like to get our names out there. We really want to interact with researchers. We really want to help more and more researchers do online studies in this way. And in return, by having more and more researchers, we will be able to grow to a point where we can keep maybe a five-person permanent team, plus volunteers so that we can keep the effort going, right?
And so that's our general development strategy. Like Noah said, we have enough revenue to cover all of our cloud servers, those kinds of expenses, and then some. So that's not an issue, right? No one is depending on FindingFive as their livelihood. So we are not going to starve because we work on FindingFive, right?
And also, Monica is here now. She is going to help us perhaps get some kind of grant funding from the public or government sector, right? And if that does happen, I think it will be helpful as well.
AUDIENCE: Yeah, great answers. Thank you very much.
TING QIAN: Thank you.
NOAH NELSON: So you're now 100% satisfied we're never going under. It's impossible.
[LAUGHTER]
AUDIENCE: Yeah. It's impossible to do that. But [? that's a very ?] reassuring response. Yeah. I wasn't so much worried about you going under, as just kind of life happening with whatever demands come from your day job. And then you can no longer support this thing to the same extent. And then it just becomes too much of a burden. And then it just kind of dies, either officially or unofficially. That's kind of the common life cycle of many volunteer-only open source projects.
TING QIAN: Yeah. So on that point, if I may just add a little bit, I actually have spent quite a bit of time and effort learning how to make sure that doesn't happen, right? So we as a team focus not just on getting work done, but also on having the right structure so that work can get done, right? Within the team, everyone who has volunteered with FindingFive-- maybe not everyone, but quite a few of them-- has commented to me, wow, FindingFive is run better than my paid job, right?
So we are happy to hear that. We try to create an environment where there is redundancy across every function, and where individual volunteers can grow and learn in this organization. And I think that is the basis for our not overwhelming, but strong, confidence in keeping the project going.
NOAH NELSON: Yeah. And also a huge shout out to Ting himself. As the technical founder, so much of what FindingFive is was built from his hands alone. He's a very capable guy. And he has proven to be very motivated to keeping this thing alive, and that has been very motivational for a lot of us, I think, volunteering for the organization. His passion flows out to us.
And it's been a very rewarding experience to work with him on and contribute my tiny little bit to his grand design. And I-- yeah. So I think that's also part of it is when somebody's that dedicated to the thing that they're doing, they'll push through. But thanks for the question. It's really good.
AUDIENCE: Sure.
MODERATOR: So Fernanda had to leave. But she asked me to ask whether there's any Prolific integration on your platform?
TING QIAN: Sorry to be the person who answers a lot of questions. Prolific recently reached out to us asking about integration strategies, right? So the short answer is yes, because you can always just launch a study on FindingFive and tell FindingFive that you are not offering rewards directly on FindingFive-- you're offering credit rewards, which is possible, right?
So you're going to launch a study that does not pay anybody. And then what you can do is get the link to that study and post it on Prolific. Then participants will come to FindingFive and do the study. All the money transactions will be handled by Prolific, right? That's how a lot of researchers already use FindingFive with Prolific.
So that can be done. And we are looking at a tighter integration, where researchers can potentially launch a study directly into Prolific, right? So you don't have to manage two platforms on your own; FindingFive will do the work for you. But I don't think that's going to happen soon, because we're waiting on Prolific to make something possible. But, yeah, that's the long answer. So we are working together to come up with a good solution.
NOAH NELSON: Yeah. But I think people are used to copying and pasting a link when they work with Prolific or Amazon Mechanical Turk or whatever. So I think what we have has worked for a lot of researchers quite well. But there's always something that can be done better, right?
MODERATOR: [? That ?] would be fun.
NOAH NELSON: Yeah.
MODERATOR: So I was just taking a quick look at the quick start guide for the study grammar.
NOAH NELSON: Oh, thanks.
MODERATOR: Do you provide templates as well? Or do you just suggest using that as the first thing? What's the easiest way to get started?
TING QIAN: So just go to the study list.
NOAH NELSON: Yeah. So you looked like you wanted to say something, so I was going to wait.
TING QIAN: Oh, OK.
NOAH NELSON: Yeah. So we have some things over here that you can use, right? I think the crash course might be the most complete, because it has-- right. So this is basically a bit of a dummy study, but it's a fully fleshed-out study of a particular design that you can find in the crash course materials that we have, right? And so right there, you essentially have a template, and it's just up to you to edit stimuli, responses, trial templates, and the procedure as you see fit.
And of course that includes things like a mouse tracking study template, one with conditional branching, an audio recording study template. And we're hoping to continue creating more and more of these. We hope to have a vast network of templates that researchers can pull from, even if all it proves to do is inspire you or teach you a tiny little thing about how to implement a particular paradigm, and then you end up playing with it so much that it's unrecognizable.
People seem to want that. And we're motivated to make them available. I remember-- it had actually been a while since I'd been on the researcher page in FindingFive because I hadn't had to do anything like this prior to this talk for some time. And I remember seeing it being like, oh, there's more than I realized. So we are creating them, and we're making progress there. And it's very exciting for us, yeah. I don't think I'm going to-- I can make a mouse tracking one. That could be cool. So we have a set of trials and the mouse tracking response.
TING QIAN: Oh, yeah. So--
NOAH NELSON: Mouse reset trial. [? Can we preview? ?]
TING QIAN: Yeah. People are signing off. But the one particular aspect of the FindingFive modular system that always makes me excited is that you can have multiple responses on the same trial. So what people have done is that, while on the surface it looks like they are asking participants to click between choices, they are also recording mouse positions at the same time.
And in some cases-- for which you will need disclosure at the beginning of the study-- they may also be secretly recording your verbal responses. Like, what the hell? Which one am I supposed to click, right? So some researchers may need a combination of responses on a single trial to make certain experimental paradigms happen. And that is something that we can do so easily. You just list multiple responses on the same trial. That's it, right? So it's--
NOAH NELSON: Yeah. So this atypical response one is over here. And you can see it's a choice-type response. They're choosing between a fish and a reptile, right? But in this study, they are passively tracking the mouse when participants go to make their choice. So if the stimulus-- say, typical target 2-- is really incongruent with the options, or it's misleading, makes you think it's a fish but it's really a reptile or whatever, they can track that little, whoops, I started going left, but I'm going to go right kind of thing.
And it's just so easy to do. And then, as we talked about earlier, you can set up this template with just stimuli and just your list of responses, and you can throw in further specifications and more details if you want, about how you might want to sample stimuli and responses for those trials, if there's more advanced stuff that you want to do with it and you don't just want to go through it linearly, right? So there's really a lot that you can achieve through this modular approach.
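To illustrate the point about multiple responses on one trial, here is a minimal sketch of a trial template that pairs an explicit fish-vs.-reptile choice with passive mouse tracking and an optional audio recording (which, as Ting notes, requires disclosure to participants). The type names and fields are assumptions for illustration, not the exact FindingFive grammar.

```typescript
// Illustrative sketch only -- field names are assumptions, not confirmed
// FindingFive grammar keywords.

const stimuli = {
  typical_target_2: { type: "image", content: "animals/typical_target_2.png" },
};

const responses = {
  // Explicit choice the participant makes on each trial.
  category_choice: { type: "choice", choices: ["fish", "reptile"] },
  // Passive mouse-position tracking recorded while the choice is being made.
  mouse_track: { type: "mouse_tracking" },
  // Optional audio recording on the same trial (requires participant disclosure).
  verbal_response: { type: "audio_recording" },
};

// One trial template that lists all three responses for each stimulus --
// the key point being that multiple responses can be collected on a single trial.
const trial_templates = {
  categorize_with_tracking: {
    stimuli: ["typical_target_2"],
    responses: ["category_choice", "mouse_track", "verbal_response"],
  },
};

console.log(JSON.stringify({ stimuli, responses, trial_templates }, null, 2));
```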
MODERATOR: Yeah. That looks great. Thank you. Any other questions?
AUDIENCE: [? A small ?] proportion of your users are people who are kind of forced to sign in for course credit. Do you find that any of those users wind up continuing to be participants, or do they just use it for the one-off course assignment?
TING QIAN: Sorry. So I think the majority of them are probably just one-off participants. But there are-- it's a Zipfian distribution, or exponential, however you want to view it. There are a couple of them who come back all the time, right? But most of them don't come back. And I think if it's a naturally occurring behavior, it's going to have that kind of distribution. So far, that's what we have observed.
NOAH NELSON: Yeah. And that is something we want to pursue-- ways to try to encourage participants to come back, through building a community and maybe giving them some sense of really being involved in the research process. And we've got different ideas about how that might be achieved. But at the end of the day, we're talking about college students, many of whom are trying to get extra credit in a course or trying to get the credit they need to pass a course and fulfill their requirements. There's not going to be a lot of motivation for many of them to come back. That's just the way it is. OK?
MODERATOR: OK. Then I'll say thank you very much for your time. This was super, super helpful. Yeah, I'm sure that several folks are going to be excited to check out the recording. And we'll send you a link when it's online.
NOAH NELSON: Excellent.
MODERATOR: Yeah, thanks so much.
NOAH NELSON: Yeah, thank you all for listening to us.
TING QIAN: Yeah, thank you. This has been a blast.
AUDIENCE: Thank you very much.
TING QIAN: Thank you.
MODERATOR: Yeah, [INAUDIBLE]. Have a good weekend.
NOAH NELSON: You too.
TING QIAN: Yeah, bye, have a good weekend.
MODERATOR: Bye.