Investing in Success: Leveraging Evidence to Secure EdTech Funding Webinar


In the competitive world of EdTech investments, understanding how to demonstrate and leverage evidence of impact is crucial for startups seeking to secure funding. This exclusive webinar brings together an expert panel from top VCs and foundations who are at the forefront of investing in transformative educational technologies.

Video Transcript
Hello, all who have joined, and welcome to this webinar on this lovely Wednesday. I believe it's Wednesday. They slip by me all the time. But Wednesday, October sixteenth, we'll get started in a minute or two. This will be recorded, so as we see people continue to trickle in, anyone who joins later will have access to the recording.

But bear with us for a minute or so, and we'll let a few more sneak in before we actually go live. Can we get a thumbs up from anyone if there's a title screen up? Alright. Perfect. Alright.

Fifteen more seconds, and we'll go live. I see a few more folks joining. So we'll give fifteen more seconds, and then we will officially kick this thing off. Great. I don't know if that was officially fifteen Mississippis, but we're gonna go ahead and get things kicked off.

Welcome to today's webinar, again on this lovely Wednesday, October sixteenth. We're gonna be talking about Investing in Success: Leveraging Evidence to Secure EdTech Funding. We have a wonderful panel, and I can't wait to get into the conversation. But first, I'll introduce myself.

I will be your moderator. My name is Danny Stanhope, academically trained as an IO psychologist. That's an industrial and organizational psychologist, and I was the founding researcher at LearnPlatform, where we helped build the R&D infrastructure for the modern education ecosystem. Through acquisition, we joined the Instructure family in twenty twenty two, where I now work globally to support partners as they build solutions, increase their adoption, and ultimately grow their impact. Today, I will be your host, your emcee, your moderator, if you will, and I'll do my best to choose my spots while ensuring the real stars of the show have the opportunity to own the stage.

Before I introduce those stars, I have a few housekeeping notes. One, engage, participate, converse, be part of the discussion. We wanna make this as participative as possible, and you can do so through the chat function. I believe if you switch the setting to speak with everyone, you can chat out your hellos, your location, whatever you wanna do there.

There's also, of course, a q and a function. So submit those questions at any time, throughout the whole panel or at the end, where we will dedicate a good ten, fifteen minutes to q and a. But definitely ask your questions. We have a ton of great insight and expertise on this panel, and we'd love to get your questions addressed. So enough about the rules.

Let's introduce our panelists. First, we have Malvika Bhagwat, who's the operating partner, head of outcomes at Owl Ventures. Malvika, anything else you wanna say about yourself? No. That sums it up. But I've been in this space a long time.

Been working in education and education assessment, learning sciences, outcomes, and accuracy for over a decade. Prior to this, I was at Emerson Collective, and before that, I was an early employee at Newsela building out their research and assessment teams. Fantastic. Next up, Gina Ricker, PhD, senior program officer at the Bill and Melinda Gates Foundation, and I think you have something in common with Malvika, actually. But go ahead if you wanna introduce yourself.

Hi, everybody. Yes. Malvika and I both worked at Newsela, not at the same time. But I've spent the last decade researching teaching and learning technologies before joining the foundation. At the foundation, I am focused on digital holistic student supports, where we're investing with partners and technologies to drive equitable outcomes for students.

In the higher ed space, there are some challenges with disparate technologies being used on campuses. And so we're hoping to address that issue so that students are able to receive the supports that they need and deserve to be successful. So I'm excited to be a part of the conversation today. Fantastic. And last but not least, Matthew Berry, who's a partner at Learn Capital.

Yeah. Hey, everyone. I'm maybe the oddball out. My background's actually in urban planning. I tell people I was a data scientist before it was cool, and I worked for a family office in Denver.

My work was really focused on impact evaluation across a lot of different social domains, all touching children in poverty, looking at everything from housing and recidivism to education. I joined Learn Capital about three years ago and now focus primarily on learning outcomes and how we drive learning outcomes through venture investing. Perfect. And with that, I will cut this slide and dig into the actual panel here. And I do wanna note, as I take this podium really quickly to set the stage, perhaps this analogy is a bit overused or overly simplistic, but I think it holds.

In medicine or the pharmaceutical industry, you wouldn't dare seek funding or try to raise capital for a product that had no scientific basis or foundational evidence at the very least. You also wouldn't be able to find shelf space, gain adoption, or scale your business without evidence. Yet in education, we've historically seen the market flooded with products that don't necessarily have research to support their efficacy or effectiveness, or even a theoretical rationale or scientific reasoning. Times have changed, and we've now seen increased demands for evidence based solutions, and those demands are coming from the federal level as well as from local and state education agencies. We're definitely in the age of evidence.

And with that, I'll stop pontificating and, like I said, let the stars take over. So let's get to the panel discussion. The first question I would love to ask, because I have that nerdy research background, is around metrics. I'd like to know what specific metrics you prioritize when assessing the effectiveness of an EdTech solution. So if you're trying to understand the effectiveness of an EdTech solution, what types of metrics are we prioritizing? And we can start with whomever.

I mean, I can jump in. I think from a venture perspective, it's looking at efficacy combined with reach. And so we're looking for, you know, not just outcomes, but outcomes that can be produced at scale for a massive population. And so those are the two dimensions that we primarily track against. Reach: to what extent can this product or service reach mass markets, and in particular, from an impact perspective, underserved markets in terms of geography, income, etcetera.

And then on the efficacy side, it really depends on the stage of venture that we're investing in. So if it's a really early stage, it's more about whether they have an underlying theory of change and an understanding of a research base that informs their product or service. As the ventures mature, we expect, and try to fundraise to support, third party research to validate that they are, in fact, creating outcomes. And so I think that's the way that we look at it. And, of course, for any one of our ventures that are selling to governments, school districts, workforce development entities, etcetera, outcomes are what they're selling.

So, you know, I think that's the beauty of being in this space. I'm sure my colleague panelists would agree: outcomes aren't secondary to the business model, which makes it really fun to be in the space. Yeah. I can jump in. I think for us, it's very similar at Owl as well.

We look at scale for sure, access being the second bucket, I think, that Matt touched on as well, which is just who you're serving. I think in k twelve, it's a little bit more straightforward, especially in the US context, because you already have national benchmarks around free or reduced lunch populations, English language learners, Title I schools. But as we think about that internationally, we also look at gender parity as well as income parity, and then affordability of the products as well. So not just entering tier one cities, but potentially also being accessible to learners in tier two and tier three cities. And then I think diversity is another component for us that's pretty big.

We look at diversity metrics for the leadership teams. So that includes founders, CEOs, board members, as well as senior leadership teams, and that comes from the belief that the diversity of the teams building the products has to reflect that of the learners that they're serving. Otherwise, there'll always be a disconnect there. And so we spend a lot of time looking at the senior leadership data and also proactively helping influence that where possible. And then outcomes is very similar to what Matt described, which is it's on a spectrum, and we expect it to be a journey.

We don't always expect people to come in with all the outcomes figured out, but we need some amount of intentionality, some metrics and indication around fidelity of implementation, and some realism around what they want to sell, for us to feel confident in what they're doing. Yeah. I love that. I wanna expand on that and potentially take more of the nerdy side of things that Danny was alluding to. I think it's often overlooked that there are really valuable proximal outcomes that can show early success.

I've seen all too often that products are so focused on those high stakes, long term outcomes, and it takes a long time to get there and a long time to measure those things as well. And we miss the sometimes more valuable learnings along the way. So whether it's looking at things like student engagement, how you're supporting educators in their role in the classroom, or attitudes of students. It could be confidence. It could be preparedness.

It could be a variety of things depending on what the product is. So when we're thinking about the metrics that we wanna prioritize when looking at effectiveness, that real world application, fidelity of implementation, is a big one that Malvika mentioned. But beyond just looking at reach, I also think about what those early indicators of success could be that are logically connected to usage of the product. Again, we don't wanna pick things that are not in line with what the product can reasonably do. Those are things that I tend to prioritize when I'm looking at effectiveness.

Yeah. A lot of important stuff to unpack in there. You know, I heard Matthew mention theory of change, and you just mentioned logical connections, which made me think of logic models. And part of that logic model, when you mention the more proximal outcomes, is that we have those proximal or short term outcomes as well as the more distal long term outcomes. And I think it's important to consider that entire outcome space.

But I appreciate that piece. And as kind of a follow-up, we also mentioned scalability. So I'm curious how you would assess the scalability of a product's impact on learning outcomes. So maybe you see some outcomes or some impact in early stage testing, but how do you assess the scalability and the potential for scaling that across more diverse audiences? Oh, go ahead, Malvika. I was gonna say in venture, it's pretty straightforward.

I think for us, when you think about venture backable businesses, it's hard to think about venture backable businesses that don't have scale and don't have outcomes at scale. Right? Like, if you're not selling to k twelve districts, if you're not selling to the public school market, you effectively are not gonna have scale. And so I think Matt made this point earlier as well, but they go pretty hand in hand when you're looking at business models. So you're evaluating for scalable business models, repeatable sales models, team, founder, fidelity of implementation, and just what kind of research they're gonna undertake. But you're also looking at things like the price points at which they're gonna sell the product.

You're looking at what the usage looks like in classrooms. And all of these together end up telling the story of, overall, is this gonna scale? Because if a product comes in designed for a very niche private school audience, it's gonna be much harder for that to translate to a public school audience than the other way around. And you see that even when companies are trying to expand internationally. Right? Like, you have to have certain things built up front in your product to be able to scale into certain markets and emerging markets as well. And so I think in venture, it's pretty straightforward.

It has to go hand in hand for the venture backed returns plus impact. The only thing I might add to that is there are, I think, cases where, when a product enters the market, it's more expensive than is accessible to all the users it aspires to reach. And as it reaches scale, that cost point comes down. So there is a potential scale curve that's driven by pricing, which starts more expensive and then, as the product reaches the masses, gets much cheaper.

So I think that's more of an impact dimension than a scale dimension. But I would agree with both of those things. And, certainly, infrastructure implications and cost of implementation are important. But something else I've noticed in the market is that oftentimes novelty wears off very quickly. And so having evidence that the product is impactful beyond the novelty wearing off, I think, is incredibly important to consider in how a product can have a lasting presence in the market.

Yeah. Good stuff there. We've talked about impact and demonstrations of impact through different metrics, and I'm sure some folks in the audience or others out there are wondering, how can I, as an EdTech company, demonstrate that my solutions do have an impact on student outcomes? So how can a product company demonstrate that their product has an impact on student outcomes, and demonstrate that to you, or to other potential funders, in a meaningful or compelling way? I can speak from being in that world myself not too long ago, evaluating products and the research that they have at hand. I tend to find multiple early, short cycle research studies more compelling than one or two randomized controlled trials that were done five or ten years ago. Building your evidence library along the continuum of evidence available, I think, is incredibly valuable.

And that's everything from starting with the logic model. What's the logical connection between the real world problem, this product that you're offering, and the outcomes that you hope to achieve? And then testing that logic model along the different hypotheses that you have with those short cycle research studies. These don't have to be expensive. They don't have to be months or years long. There's so much value in conducting several concurrent short cycle studies on those more proximal outcomes that have a logical connection to those distal outcomes.

We'll get there with time, money, and resources; we can work on longitudinal research. But I think it's incredibly compelling to have multiple short term research studies at the early stages. Yeah. I like that idea of the portfolio. I was gonna add, you know, I think from the Learn Capital perspective, it's probably similar to Owl.

Like, I don't know that we see ourselves as funders; we're investors. And so we as a firm, and Owl is extremely good at this, build technical assistance and capacity support along the journey that a venture has. And so when we're making an investment, we're diligencing, you know, market fit and what we think the company could do from a profitability standpoint, but also what type of impact they could drive. And so we look early into what theory of change or evidence they have on day one, and, hopefully, their model is informed by that. But then, post transaction, we're on the journey with them, trying to support them in every way we can.

And I differentiate between investors and funders because we're often trying to find a funder who wants to fund research, because we don't have resources to give grants or nondilutive funding for that type of work, but we think it's critical. So it's kind of an ecosystem comment, I guess: it's critical for an EdTech venture, if they're venture backed, to work with the partners at the funds that are invested in them, but also to find philanthropic partners and others who really are committed to building the field of evidence. And venture in particular is a very cool, if nascent, unique space to be thinking about those types of partnerships. Yeah. I think the last thing I might add is, at least for people who are joining here who might be really early in their journey, I'll double click on what Matt said a little bit, which is we're not expecting people to come in with outcomes.

Like, maybe a logic model would be good, or some idea of how it's gonna work in a classroom. But if you think of the current wave and everything that's happening with Gen AI, all of these products are so nascent. They're so early. Folks' ideas are often still on paper; they don't even have product market fit.

And so it's very hard to think of a portfolio of evidence at this stage. It can almost be really overwhelming, but that evidence can look bite sized. Right? Like, it could look like case studies that you've done with two pilot districts. It could look like testimonials. It could look like survey data that you've collected from pilots.

So I don't think the expectation is that you're coming in with a randomized controlled trial or even a quasi experimental trial. Because, honestly, if you came in with that, I probably wouldn't believe it, because your product is way too early. It doesn't have the users. Your product's gonna change five weeks from now. But I need something to know that the users are responding and you're solving a real problem in the market.

And so I would kind of reframe how we think about evidence, at least for the very, very early stage companies, which is a lot of what we're seeing right now in the market, given just what's happening within EdTech. And just super quick, I think Gina's point around implementation and fidelity is really interesting in this space: someone can actually come in with an evidence based intervention that they're trying to turn into a digital product and scale in an interesting way. And then the question isn't whether they have a research basis; it's whether they actually have a technology or solution that's gonna take that research and implement it with enough fidelity that it actually drives the outcome it produced in a former instantiation. So I think that's also a really interesting question in the venture space.

Yeah. Bridging that gap between research and practice, and having rigorous yet practical solutions, that's a key focus as we continue on here. So a quick question here, and it's kind of a follow-up, so there may be some overlap or some expounding to do. But can you share examples of evidence that particularly impressed you, with past investments or companies you've worked with, or however you wanna phrase that?

So I guess examples or exemplary approaches to demonstrating evidence that have impressed you in the past. Happy to start. I think one in our portfolio that continues to impress me, which is a little bit more later stage, is a company called Amira Learning. I think they're unique in that they were born out of twenty years of research at Carnegie Mellon. So there's already a DNA and history of evidence built into the product.

But then, even as early as series A and series B, they've gone on to do all tiers of evidence. Right? Like, they've approached the funders, working with the funders to start talking about research studies. They've tapped into government funding. And recently, the state of Utah did an independent study of all the EdTech products, unknown to any of the EdTech providers, and Amira was the only one that showed any efficacy, and that was equivalent to a human tutor. And for those that don't know, Amira focuses on teaching kids how to read, so reading fluency with an AI tutor.

And that was way before Gen AI was the buzzword it is today. So that one continues to grow and impress me, and they're very evidence driven. And I wanna differentiate between research backed and outcomes as well, because I think they do both. Their product is backed by research. They look at academic data and academic literature, so every intervention is grounded in data, research, and academic literature, but then they also focus on the outcomes piece, which is measuring the efficacy of the product.

So I'd say that is one that has been really interesting to watch. And then super early stage, I would say Kyron Learning is another one whose journey has been really fascinating. Kyron Learning brings instruction and interactivity to universities and colleges through video, but async, through AI generated videos. So, basically, a teacher could start teaching a course but then hand off to an AI assistant, and the AI assistant could jump in. But they've been very intentional and thoughtful about pilots, and that's the point I was making earlier: they have spent the time on pilots, bringing together pilot data to then inform the next steps.

And so they're thinking about research very iteratively, both from a product development cycle as well as an impact perspective. And so I think those two are the ones that I would call out for now. I could keep going, but I won't. Yeah. I guess the company that I'd feature is called NewGlobe.

They operate in Kenya, Nigeria, and India, and they now serve over two million users, or students. So a pretty large company in terms of scale. And what's cool about the company is they essentially give tech enabled solutions to teachers. It's a coaching and instruction tool for teachers in the classroom. And what's unique about it is, because of that, they can actually do real time A/B testing on the way that they build their product out.

So it's a really agile approach to implementing their product and solution. And from an evidence standpoint, Michael Kremer, a Nobel laureate, recently conducted research showing they were able to drive learning at twice the speed in English and three times in math; students achieved over an additional year of schooling, learning in one year what their peers would have in two. So I think, at least in Learn's portfolio, there aren't that many companies that have gone through full RCT level research with their existing product or service.

So that's the one that I would just point to. I'd love to see other ones do that. I mean, I think, especially for this conversation, something we've been thinking a lot about is companies that have direct learner outcomes versus companies that have kind of tangential impact. You know? So some are actually driving a learning outcome, like NewGlobe. A lot of others are doing things in and around education but aren't as easily able to point to a student gain from point A to point B.

So not all of our companies can easily do the same types of RCT or quasi experimental research to prove outcomes. Something I'd love to elaborate on is this idea of product iteration that's informed by evidence. Again, sometimes we just get so hung up on the high stakes outcomes, what's most impressive, what's the biggest number we can put in front. But there's so much that underpins the product itself that's equally impressive: how quickly you're able to react to the research that you're doing, and how that's data informed. And on this idea of research backed design, I think there's also an element of co design, or collaborative design, that sets a product up to be successful in the early stages, when the product is designed with end users.

Not just with end users in mind, but with end users. I think that speaks incredibly highly of a product that's thinking about impact from the jump. Yeah. Lots of good stuff there. You know, agile, iterative evaluation and implementation, the rapid cycle evaluation approach.

All of this is really important stuff, especially with the iterative nature of EdTech. Oftentimes, the models or the statistical analytical approaches used to evaluate these products include quantitative measures or quantitative metrics. So I'm curious to hear what role qualitative data play in your decision making process, whether those come through surveys, open ended questionnaires, classroom observations, focus groups, interviews, however you collect those qualitative data. What role does qualitative data play in your decision making? I love this question. I'll speak from kind of the funder perspective versus the investor perspective.

Qualitative data is very complementary to quantitative data. I think neither one can stand fully on its own. Qualitative data is kind of the color to the picture. It provides that real world, on the ground understanding of what the data is telling us. I think it's incredibly helpful when you're looking at implementation fidelity.

I know we've mentioned it many times. I think it also helps us better understand the equity orientation of a product, especially when it's a diverse collection of qualitative data: how well the product or solution is actually resonating with different users in different contexts. But I would love to hear from my colleagues on the investment side. I think that's spot on.

I think we think about it the same way. You need both to tell the complete picture, and I kind of always think about this in any other context. Like, think about yourself buying anything else. You look at the back of a label of a product and you'll get all the data. It'll tell you, oh, it has no sugar, or it has this many calories.

Whatever it is. I'm thinking about buying a pack of chips right now, but it's whatever else you wanna buy. You still wanna ask the questions of, but will I like it? How does it taste? And that's not necessarily gonna come from the quantitative. Right? And I think evidence is the same. You get all the numbers and the data on effectiveness, but you still wanna ask questions.

Like, how many students were in there? What was the sample size? What did they really think of the product? Was the issue a usability problem, or was it that there were no outcomes? There are just a ton of questions that you wanna unearth, and I think a lot of that comes from the qualitative piece, whether that's focus groups, case studies, or even testimonials. So we really value both pieces of information. And especially in the absence of quantitative, I strongly recommend at least having the qualitative. Yeah. The only things I'd add: from a decision making standpoint, as a VC, when we talk about qualitative and impact, it's usually around the problem that they're trying to solve in the market and the context that they're looking at.

And that gets into kind of scalability and market fit questions. It's less necessarily about evidence; it's more about what surrounds the product or service they're trying to build. And adding to what the others said, these companies are selling evidence, and it becomes a kind of marketing function. You can sell quantitative numbers to some buyers, but other people, like in the bag of chips example, especially for companies that are selling direct to consumer, wanna see themselves in the product. They wanna know that that product has worked for someone like them, and a child or a parent is gonna buy that product based on that.

And so, yes, they wanna know it works, but they also want, like, that emotional connection to that. And I think that braiding of the qualitative impact and story, relative to, you know, what is the outcome this should or could produce, is what companies are doing in terms of how they're selling. So it becomes like a very and this is where I think as as VCs and probably foundations alike, we need to be careful that they're actually selling what's true, not selling, you know, too much fiction in terms of the outcome that they say they're gonna produce. Because that's one of the challenges in the market, which you pointed to at the beginning, Daniel. Awesome.

Yeah. And we had a question actually come in from the audience that I think relates to some of what we've already addressed here, and it's around defining fidelity. So the question is: the challenge we have as a curriculum based EdTech program is defining fidelity, given that resources are accessible in both digital and print formats. So one of the beauties of EdTech is we have data coming from the digital, but we also, of course, still have a lot of print out there. So to continue the question: they have those resources in both digital and print, which limits the ability to know what is actually being used in the classroom.

So do you have any examples of organizations who have thoughtfully tackled this sort of fidelity ambiguity problem? Been there many times. I think there's two parts to this question. The first is just defining fidelity. I think this gets back to where we started at the top of this conversation: the difference between effectiveness and efficacy. Efficacy is what we consider lab conditions.

What is the ideal implementation, from that theory of change, that we think is gonna drive impact? And then having a sense, through an implementation study or studies, of whether that's happening. What are the real world effectiveness conditions that are actually taking place? Is this product that's supposed to be a comprehensive solution actually just being used in a supplementary way? I think that's the first part. The second part, about print versus digital: we've tried to use proxies before. So if they're downloading the print resource, we consider that as engagement. But who knows if they're actually using it in the classroom? That's where doing surveys and classroom observations really helps color in that hard data side of things that doesn't tell the full story.

The only thing I might add is, I think, at a micro level, all of that is very true, and we've seen similar examples where people track things like download features, or, if the product has an offline mode, when it comes back to the Internet, what the syncing looks like and how much data you can collect in that process.

So there are definitely ways to do some of that. And then I think surveys are another really good way to fill it out. Like, you talk to the partners. I mean, one of the things I dislike, hate's a strong word, about the way we talk about districts and EdTech providers is that we call them vendors.

And vendors makes the relationship so transactional. It's like, I've done my job of giving you the product, and now I'm walking away. But I think partnership is the right way to think about it. And part of that partnership is that fidelity of implementation through professional development, talking to your schools and teachers and understanding what's happening on the ground. And one of the movements that I've been following a little more closely, and it's not new, but I think it's picking up steam now, is the outcomes based contracting work.

And I think that really gets to the heart of a lot of this. Right? You come to the table as partners, and you agree together what those outcomes are going to be. What is realistic, not just what happens in lab settings. And then let's talk about how we move the needle and how we're actually gonna see some sort of outcomes; not just the outcomes the tech company wants or the school district wants, but, together, the right outcome we think we're gonna achieve through implementation. And so I think that's a movement worth keeping an eye on as you're thinking about what might be realistic, because I think more examples will come out of school districts on how they're thinking about implementation as well.

Yeah. Makes a lot of sense. And I think, from a research methods standpoint, methodologically, I would do everything you can to collect all the data that you can. We've talked about quantitative and qualitative, and Malvika mentioned the ability to add different types of surveying methodologies and whatnot. But, yeah, different design elements, different ways to measure and collect the data you're trying to collect, and to do so in an iterative way.

I think that helps you get at defining fidelity there. Great. So speaking of print versus digital, not that there needs to be a versus there, but I think that's a good segue to this question, and that is: how do you find the right balance between innovative ideas and proven effectiveness in your investment strategy, or just in your decision making process? Because, as we've already spoken to, requiring a super rigorous, long randomized controlled trial or experimental design may stifle innovation. So how do we find the right balance between innovation and requiring evidence? I'm happy to jump in if you want. I think it's important to think about the hats that all of us wear and what we're funding sometimes.

I think people think of VCs, foundations, anybody that is giving money as being in the same bucket, but I think we actually wear slightly different hats. And the way I think about VC is that one of the core jobs of VCs is to take bets on innovation. Right? Like, you're seeing what's coming into the market and you're making bets on whether or not it's gonna work. And then over time, some of that will and some of that won't, and that's inherent and built into the nature of this model. And so I would say we're not doing our jobs if we aren't taking bets on innovative ideas upfront, because if the VCs are not funding them,

I don't know who is really funding those ideas upfront. And I feel fortunate for the role that I have specifically at Owl. My role exists because the assumption we're making is that not every company is going to have an inbuilt research person. Right? Like, it's just not feasible early on. And so I can go in and help them shape that story, but it's a step by step journey to evidence that we're gonna walk with the companies. And it all goes back, and I think we've all said this, to that logic model, those initial metrics of how you're actually gonna build your product.

Is your vision something we wanna support? Those are the innovative ideas and bets that we're making investments in, along with the willingness for research. Like, upfront in the first meeting, we're always talking to portfolio companies about, hey, when you work with us, outcomes are gonna be a very big part of the conversation. We're going to work with you. We need to see evidence, but we know that, in some cases, it can take as long as eight years to get to your first randomized controlled trial study.

So I think part of that is also being patient. But I think that's how we balance innovation versus effectiveness. And then over time, obviously, even the market needs effectiveness. Right? So I think effectiveness, or efficacy, kicks in especially for products that are directly serving the student population. I think it's less true maybe in direct to consumer markets, enterprise, or even non student facing solutions.

Yeah. I love that answer. I would echo all of it. Maybe the point I'd emphasize, from a venture perspective, is that the best thing we can do is set them on the course from day one, premarket, preproduct: get the venture thinking about their theory of impact and how they're gonna use data, and have them build that into the original core product.

That puts them on the path to do things like RCTs downstream. One of the things that I've found so interesting is that early stage ventures are collecting business metrics, but they're not necessarily collecting the metrics that are gonna inform research. And so getting that set out of the gate really creates the conditions for downstream understanding of impact. So I'd say that innovation and evidence nexus is a really cool spot for ventures to play in. Philanthropy, on the flip side, has much more expansive potential roles to play and really cool partnership opportunities with groups like market rate investors.

Yeah. From the funding and research side, once they get to me, they've already demonstrated a commitment to building evidence and measurement and learning. But I think, especially for technologies more on the innovative, nascent side, there's this distinction between exploratory and confirmatory, and moving along that continuum: what do we need to explore, because this is such an innovative and new area where we might not have as clear a foundation of theoretical or empirical research, and what do we need to confirm that builds up to that more rigorous efficacy research over time. That's usually how I view things when I'm working on building research proposals. Awesome.

And I think part of what was just spoken about plays nicely into this next question, actually. We have a member of the audience who is bootstrapping a company that's built on existing research as primary evidence. So there's extant research out there. However, the sales cycle and the go to market funding limit developing the long term funding narrative to raise. So what advice do we have? And I can repeat that if needed.

Yeah. Could you? Yeah. There's existing evidence there; existing research is the primary evidence, but the sales cycle and the go to market funding limit developing the long term funding narrative to raise. So it sounds like there are some short term needs around the sales cycle and go to market functions.

And how does that interplay with the long term need to get funding for additional research? How do you weigh the competing priorities of the short term versus the long term, is how I'm interpreting this. And there's a quick follow-up: idea rich, cash poor. No, that's not an easy one there.

I think the reason I'm not responding right away is because it needs a little bit more context around what the sales cycle and go to market funding limits are, because in many ways, we see companies all the time that just have ideas on paper, and we will talk through the long term vision and fund them. So that really isn't the challenge for VC models. I think it's a little bit of: is the bottleneck not being able to focus on sales and go to market motions because there is no funding? That's slightly different from having some evidence but not being able to ramp up, which is, again, different from, hey, we've scaled to a certain point, but now we can't meet demand. And so I think those are the nuances that I'm missing.

And so maybe it's something that we can chat about more offline, or, if there's more context, I think that might help. But I don't know if Matt or Gina have other things to add as well. Yeah. I had the same reaction. My only initial thought was, like, pick a direction.

You know, if you're idea rich, pick the one you like the most. Go down that path. I don't know if that helps the timing question, but I would echo that without a little more context, it's hard to know how to answer. We got the context piece, and I know we've been talking about not worrying about that massive research endeavor with the large sample and the long timescale and all that. The quick iterative approach to rapid cycle evaluation, just collecting the data you have, making inferences and interpretations from them, and continuing to let that feed back into the cycle of what you're doing from a business standpoint, I think, is critical.

Another piece potentially is aligning your research and evidence building with adoption and implementation. Maybe you can get that funded. I don't know what the product or the solution is, and I don't know the context. But if you can get that funded through partnerships with a school or a district and help cover the cost of goods through that, then I think that can also support your effort to continuously build evidence and push toward that ultimate goal. So just some quick thoughts from me.

Great. So there's another question here, also about bootstrapping. How do you feel about bootstrapped opportunities that have developed over time, offering evidence and proven outcomes, but that have taken a longer time to get there? So, less rapid from an innovation standpoint, but having taken a longer time to get to the point of building out their evidence base. How do you feel about those types of opportunities that have developed over time?

I mean, I think from a venture perspective, if we were looking at investing in that today, we're kind of agnostic about how long it took to get there. In some ways, the longer the journey, probably the more commitment to the quest for evidence. So I don't know that we would factor length in one way or another, but I would say it's only a good thing. Yeah. And I think it's built into the assumption.

Like, if you're bootstrapped, it probably is gonna take longer. I mean, part of VC money is that it's gonna help fuel growth much faster. It comes with the sort of commitments that you're making to an investor as well. But, yeah, we've definitely had companies that have been bootstrapped for a long time and then decided to finally go to market after, say, seven or eight years. And we've looked at them equally, as we would something that's new to the market and looking for funding right off the bat.

And I can think of several examples, actually, that were born out of university research and academic settings, that built a really strong basis of evidence from the background they came from, and that went to market and became more commercialized later on. That may even apply, Malvika, to Amira, which you mentioned previously. But awesome. So another question here that I think will be applicable to the audience: what are three actionable steps? It doesn't have to be three exactly. But what are three actionable steps that EdTech startups can take today to better position themselves for funding, for investments, for adoption, or whatever it may be? And just to clarify, do you mean broadly or from an impact perspective? I think broadly, but perhaps with kind of a research or impact lens.

I'll throw one out, and then I'll let others chime in. But I think for me, the one thing that I think about most and is really important is thinking about internal capacity for research and setting up the organization early, from a data infrastructure perspective, for research and impact. And I think that affects funding in all ways. When people are starting to build product, they often don't think of product development as linked with impact data. But if you think about it, a lot of the proxy metrics that you're collecting actually come from the product metrics that you're building.

And so, early on, having those conversations about what engagement versus usage versus outcomes looks like, what your North Star metrics should look like, and where some of these functions eventually will sit. Like, I know they start off under marketing or data or product, but eventually, how do you want the org structure to look? To me, that has always been one piece of conversation that I find myself having often with our portfolio companies, because they get too big and they're trying to retrofit what they then want to do from an impact perspective, and it's just harder to go back and make those changes once you've hit, like, a series A or B. So I think that's one that I'll throw out, but I'll let Gina and Matt chime in as well. Yeah.

Happy to. And this might feel like a plug or redundant, but logic models. And it speaks directly to what you were referring to, of how so many times organizations try to retrofit their logic models, because we have places out there that are awarding badges, and now logic models are sometimes in RFPs at districts. But what logic models do, from a purist perspective, is orient your measurement framework and your measurement planning. They kind of provide a road map or a blueprint for what your research agenda could and should look like.

And internally, because you should have product people involved in the logic model process, it orients the organization towards a commitment to evidence building and a commitment to research and measurement in general, to making data informed decisions. So I can't express the importance of logic models enough. Yeah. Building on both of those, I think very few founders actually build the product they thought they were gonna build when they got invested in. You know? I think that's just the reality of being in such an early space.

I mean, it's not necessarily a ninety degree turn, but it's rare that a founder builds exactly the thing they said they were gonna build and brings that to scale. And so, building on top of logic models, building an evidence or research capacity on day one is also, from an investor lens, about showing agility and market responsiveness: as you start to test the concept, even premarket, with users or potential users, how are you showing your ability to pivot to meet the actual market demand? And then how does that relate to the way that you're thinking about impact? So I'd say it's almost a dynamic logic model that you would need to have in place, because the logic model that you set on day one needs to be revisited pretty frequently, especially early stage, because you may learn that your customer is wildly different than what you thought and what they're asking for. Exactly. Yeah. Our research team has built many logic models, and we always say they're formal, not final, because it is a living document, and I agree about this need to pivot.

Pivots are inevitable, and having an evidence based pivot is a much better way to go about it. And when I say evidence, I don't mean evidence from an RCT or some sort of experimental design; as Matt kind of alluded to, evidence from your consumers and feedback from your users is just as valuable. Great. One more question before we go into the q and a, and I know we've kind of blended the q and a with the actual questions that we had in advance. And this is a tough one.

In your opinion, what will be the most significant shifts in EdTech funding over the next, call it, five years? So, the most significant shifts in EdTech funding over the next five to ten years. Broad question. I mean, I'll start with one. I guess I had kind of thought of the question a little differently than EdTech funding, more as what's gonna work in EdTech. And I think it's, particularly in post secondary, agility of skills development for our rapidly changing workforce. That's where we're spending a lot of time thinking: the traditional four year degree is still relevant, but the changing dynamic of how people are gonna work over the course of their lives needs to accommodate rapid reskilling.

And so I think that's only gonna grow in terms of the types of solutions that are out there and the learner needs, as people move from high school into the early days of participating in the workforce. That's, I think, a significant shift that isn't just AI driven; global employment dynamics are driving a very different way that people are gonna expect to be skilled. Yeah. I didn't wanna cop out and just say AI, because I think we're all saturated with that narrative at the moment. From the foundation perspective, I may be biased here, but there are persistent achievement gaps.

And if I were to be hopeful and aspirational, I think that funding is going to focus even more heavily than it already does on equity focused solutions, or solutions that are especially targeting those gaps that just don't seem to want to budge. I'll throw out a hope and a wish, I guess. I don't know if it's gonna happen, but what I wanna focus on, from a research and impact perspective, is that I really, really hope we find more innovative and more agile ways for foundations, philanthropies, governments, and VCs to partner together to fund research at the scale of innovation. Because right now, innovation is moving, like, five x faster than anything that we're funding, and then we keep talking about the R&D and procurement problem where things are entering schools that don't have enough evidence. We're putting things in classrooms.

But I think a big way, maybe not the only way, to tackle that problem is really to rethink these partnerships early on, for rapid cycle evaluation and all forms of evaluation. Right? There is actually more funding for late stage research methodologies, like randomized controlled trials and quasi experimental studies. But we don't see a lot of that for surveys or focus groups or interviews. And we're all saying that's equally important, especially at the early stage, and yet it's really hard to build that into budgets for founders. So I think that's my one big hope and dream as we move forward: that we just find more innovative, agile ways of funding research.

I love it. Yeah. That would be very powerful. We had a chat come in, but it came only to the hosts and panelists, so not everyone saw it.

But it was in relation to one of the previous questions, and I thought it was quite good. So I'm gonna read it out loud verbatim: I've seen some early stage companies leverage relationships with high poverty schools to secure funding from foundations to both deploy their innovation and also obtain resources for small scale evaluation, which is often a requirement by foundations. So that's another strategy or tactic when thinking about securing funding and incorporating research into the work you're already planning to do. Good comment there.

Keep sending questions our way. I do have one here that I want to read, but I don't know where it went. I have it up if you'd like me to take over that role. How frequently do you, as investors in EdTech platforms, products, or content, have conversations with the adoption decision makers in both k twelve and higher ed, the ones who determine which products will be purchased by districts or actual students, and how does that factor into your investment decisions? I think the short answer is all the time. We talk to them as part of the diligence process.

We talk to them later. Sometimes we'll have superintendents or higher ed leaders actually sit on the boards of our companies so that they have an understanding of what's actually happening on the ground, and we'll consistently do webinars and roundtables for our portfolio with industry leaders as well as district leaders and school stakeholders. So, yeah, I think the short answer is all the time, and it plays a huge role. I mean, if a superintendent is not gonna find value in a k twelve product, it's very hard for us to make the sale and say this is what's gonna win the market. Right? They're gonna be the ones purchasing it, so it has to fit into the different priorities that they have in place as well.

And I just wanna go back to something I mentioned earlier, that idea of co design, whether it's from the product design perspective or even when you're doing the research and evaluation. You can co design research studies with the end users as well, around the impact that they find most meaningful, not just what funders or investors or the market might find meaningful. So, yeah, I just wanted to add that color as well. Awesome. And I know we have a few minutes left.

So one more question here. As a startup, or any stage company quite frankly, there are multiple competing priorities. So how should we think about prioritizing research versus product versus engineering versus marketing, etcetera? And while you all are processing that question, I'll say, you know, we've spoken from an advisory or consulting standpoint with many, many orgs, and we're always trying to figure out where research fits on the org chart. My response is always everywhere. It needs to be interwoven, and it's kind of a false dilemma to think about one versus the other.

And research and evidence building, again, doesn't have to be an RCT or some rigorous methodological approach; it's incorporating evidence into your marketing efforts, into your engineering and product efforts through the logic model to help define that path, and equipping your sales team to speak about impact. I think it all should be interwoven into the fabric, but that's my personal bias. So any other thoughts on how you think through competing priorities? Speaking as someone who has been in several different research functions at these EdTech provider companies: sometimes research is seen as a luxury, something external.

So, just doubling down on what you said, Danny, research is a potentially underutilized tool that is not just a good in and of itself, not just science for the sake of science; it can actually inform product design. It can inform UX. It can supply material for marketing, marketing tools that drive adoption and usage. There's so much potential for research to be a support and a tool beyond the traditional rigorous efficacy studies, and even for those efficacy studies to get used beyond just being numbers on a page in the evidence library.

So, yeah, I'm just gonna double down on what you said. Awesome. Yeah. I agree entirely with what you said, Danny. And, not a shameless plug for Instructure, but there are third party partners that can augment a venture so that, if they feel like they're trying to do too much, there are groups that can provide that third party support, which may be strategic early on.

And at some point, third party objective research is best practice anyway. So, just adding to that: you don't have to go it alone. With Malvika's comments in mind, lean on your venture partners too. That's something that we're eager to lean into and be a part of the problem solving around. So, yeah, I'd just say leverage the network that cares.

And maybe, to the partner conversation on the district side, are there resources that the districts or the buyers actually have to contribute to that too? Because they care, or should care, a lot about the outcome. Yeah. The last thing I might add is, just take baby steps, to be honest. I don't think the goal is to jump to whatever is the highest point first. Like, I don't think the goal is for you to go from nothing to a randomized controlled trial. To be honest, that's probably gonna be really overwhelming.

It might backfire and might not actually serve you. You'll have so many open questions going into that process: what's my fidelity of implementation? What is the ideal dosage I wanna recommend? Who's the right audience for this? So take the steps: go from the logic model to the interviews and the case studies and the quasi experimental studies. I think the structure that ESSA, the Every Student Succeeds Act, lays out is there for a reason if you're in k twelve. I'm not saying follow it by the book, but at least take a look. I think the guidelines are helpful. And don't ignore sales and marketing for research.

I don't think that's very advisable either. I think it's a fine balance. And so making sure you're prioritizing all the different functions at the right stage of company continues to be important. Yes. Love all that, and I know we are at time.

So I just wanna thank all those who tuned in live. I know times are crazy and everyone's got busy schedules. So thank you for tuning in live, and thanks to those who will tune in later, as we share out this recording. And thanks to the panelists. A lot of compelling insights.

I've been taking notes mentally this entire time. Good stuff. I do wanna share quickly: you see a QR code here. This will take you to the Education Policy Atlas that we've built and shared as a resource for the market.

It was created for education institutions, LEAs, SEAs, and EdTech companies, and it outlines different funding guidelines as they pertain to the Every Student Succeeds Act, or ESSA, as Malvika just mentioned. It was really a collaboration across departments at Instructure, spearheaded by the inimitable Caitlin Summer, and informed by our research team, which is led by doctor Mary Stiers. Also, become a partner and join the EdTech Collective community. We would love to see you as a partner and to help share support as we go on this journey together. And finally, I encourage you to reach out to myself and to other members of our team, as we are here to offer guidance and consultation and to connect you to or with the people, products, and resources that'll ultimately provide support on your journey.

So thank you all for tuning in. A lot of compelling stuff here, and we'd love to continue the conversation with you offline. Thank you all. Thank you. Thanks, everybody.