Evidence is a word you hear a lot in education these days.
With so many edtech tools used by teachers and students, what evidence is needed to tell if those tools are actually working? How can districts take evidence and turn it into action to improve student and teacher outcomes and experiences? And, importantly, what are the first steps administrators can take to get started?
We recently hosted a Q&A session with Mary Styers, Ph.D., LearnPlatform by Instructure’s Director of Research, and Amanda Cadran, Program Director for Rapid-cycle Evaluation for LEAs and SEAs, about all things related to edtech evidence, rapid-cycle evaluations and how to get started with this work.
What is our team at LearnPlatform really talking about when we say evidence?
MARY: We all are familiar with evidence, especially educators, who work with vast amounts of data and utilize evidence-based decision making as common practice. When we talk about evidence, we are talking about these same things many people already know about, but with a specific lens on collecting a certain type of data to understand how products are helping students (or not).
In our world, we talk a lot about rapid-cycle evaluation as a way to get this evidence. Rapid-cycle evaluations (RCEs) are run by administrators using our own technology, IMPACT™. IMPACT helps education leaders generate practical, timely and relevant evidence to make data-informed decisions and adjustments around edtech tools to ensure that the tools are meeting the needs of all students.
RCEs are:
- Rapid (yet rigorous) edtech evaluations.
- Practical and relevant, providing evidence on what is working and what is not and helping pinpoint areas for improvement.
- Actionable, on a timely cadence.
- Iterative, with a formative focus.
- Informative about the quality of implementation.
Are there any other types of evidence we are talking about, in addition to what districts or providers may get from rapid-cycle evaluations?
AMANDA: Yes! When we talk about evidence, we are also talking about a few other things, including results from an edtech audit and teacher feedback. An edtech audit helps identify immediate opportunities for savings and provides recommendations for tech teams around tool procurement. Teacher feedback is of course super important, as it gives teachers a voice to share with administrators how tools are and aren’t working for their specific groups of students.
Oh, and it’s worth mentioning that when we talk about evidence, we’re talking about evidence that can align with the Every Student Succeeds Act (ESSA). This can range from evidence that demonstrates a rationale all the way up to strong evidence.
All of this evidence together allows education leaders to build a body of knowledge of what works and what doesn’t work!
There is a lot of edtech evidence out there, and we get a lot of questions from districts as they work through their data to see if edtech is really working. Is there a “right way” to evaluate products? What should or shouldn’t administrators be doing?
AMANDA: There is not one perfect recipe to understand if or how edtech products are working for students. Some products may work better than others across grade levels or at different usage (dosage) amounts. It’s usually not a binary answer (yes or no), but one with more context around implementation goals and expectations. What matters is understanding what the product was intended to do, and working with providers to get the best, most accurate and detailed data that would show student progress in a particular product. This can be done at the teacher level, too.
MARY: You’re right about that, Amanda. Getting a sense of product use across different student groups, like different schools, grade levels, students with IEPs, etc., alongside an understanding of what achievement looks like after using the product, is so helpful. When education leaders also look at the true cost of that product, including licenses, professional development and more, they can begin to answer the question: Is this tool actually working, and is it being used to the fullest extent?
Connecting all of this back to first-person educator feedback completes that picture, because we know teachers have that first-hand experience with the product in the classroom. And then, parent and student feedback can also be part of this as they can offer additional insights into product use and impacts, so that districts can get a well-rounded perspective on how the product is working for students in their local district.
What types of evidence can administrators get out of a rapid-cycle evaluation?
MARY: That’s a great question – there are so many answers! That’s the great part about this type of evaluation. There are three key areas that I want to hit on. You have:
- Usage Analyses: where you look to understand how an edtech product is being used. Within this analysis, you can also explore whether users are following recommended usage guidelines. If your district expects students to use the product 30 minutes a day, for example, you can explore what percentage of students are hitting that usage goal (see the illustrative sketch after this list).
- Outcomes Analyses: These are aligned with ESSA evidence levels. Outcomes analyses focus on understanding product impacts on student learning outcomes. Within outcomes analyses, you can explore whether greater use of a product relates to higher learning outcomes for your students (or greater SEL skills, or any other type of outcome measure), and you can also examine whether students who use a product outperform students who don’t.
- Cost Analyses: Cost analysis allows you to understand potential cost savings for a product by exploring costs associated with edtech licenses that are underutilized.
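To make those three analyses a bit more concrete, here is a minimal sketch in Python (pandas) of the kinds of calculations they involve. It is purely illustrative and is not how IMPACT works under the hood; the data, column names, 30-minute goal and per-seat license cost are all assumptions made up for the example.

```python
# Minimal, purely illustrative sketch (not LearnPlatform's IMPACT tooling).
# All data, column names and thresholds below are made up for the example.
import pandas as pd

data = pd.DataFrame({
    "student_id":   [1, 2, 3, 4, 5, 6],
    "avg_minutes":  [35, 0, 12, 40, 28, 0],   # average daily minutes in the product
    "score_growth": [8, 1, 3, 9, 6, 2],       # hypothetical assessment growth
    "has_license":  [True, True, True, True, True, True],
})

# Usage: what share of students hit a 30-minutes-per-day goal?
pct_meeting_goal = (data["avg_minutes"] >= 30).mean() * 100

# Outcomes: does greater use relate to higher growth?
# (a simple correlation, not an ESSA-aligned design on its own)
usage_outcome_corr = data["avg_minutes"].corr(data["score_growth"])

# Cost: potential savings from licenses that were never used
license_cost = 25  # assumed per-seat cost
unused_licenses = (data["has_license"] & (data["avg_minutes"] == 0)).sum()
potential_savings = unused_licenses * license_cost

print(f"{pct_meeting_goal:.0f}% of students met the usage goal")
print(f"Usage/outcome correlation: {usage_outcome_corr:.2f}")
print(f"Potential savings from unused licenses: ${potential_savings}")
```

In practice, an ESSA-aligned outcomes analysis would use a comparison group and a more rigorous design than a simple correlation like the one above.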
By the way, administrators have run more than 1,000 of these RCEs to date on LearnPlatform, and have already run twice as many this year as in all of last year. We’re also working with edtech providers, enabling them to conduct ESSA-aligned, rapid-cycle evaluations (our Outcomes Analyses) so they can get evidence in their hands much sooner and in a much more cost-effective manner.
It really is amazing how many different questions districts can ask and answer as they work to build an edtech ecosystem that is more effective for all students.
Data doesn’t really matter if administrators aren’t able to put it into action for teachers and students. How can they do that? Do you have any stories about actions that a curriculum and instructional coach or technology director took based on evidence?
AMANDA: Absolutely – I mean, that’s the most important piece, right? That they can put it into action!
Here’s an example: A district suspects there are major differences in how students from different racial and ethnic groups are able to access and use its edtech tools. This matters because the district needs to ensure equitable access to educational programs, and product dashboards typically don’t break usage data down this way. An RCE lets the district disaggregate usage and outcomes by student group. In some cases, they may learn that different student groups don’t need the same number of minutes or lessons on an edtech product to see positive outcomes, and they can continue to monitor that at intervals over time.
Another example: A district knows that the grade levels using a tool may have different needs, and that teacher teams are using products in their own ways. How can the district better facilitate conversations and even determine whether it needs more or less professional learning (or more or fewer licenses) for a particular product? Often some grade levels are clearly implementing the product with fidelity and experiencing success with it; the district can then go to those teachers, gather their suggested best practices and share those out. Districts can also work with grade-level teams that may be struggling with their implementation to determine whether more supports are needed, and why.
In these situations, a district’s priorities and implementation plans are at the forefront, and edtech providers can work with districts to determine how to measure that with the data they have collected.
One of the biggest hurdles we see districts face (because they’ve told us!) is that they don’t know where to get started. What do you think keeps administrators from jumping right into rapid-cycle evaluations or any other kind of evidence gathering?
MARY: This is so great, because, as a researcher and program evaluator, I want to demystify research and program evaluation and really make this whole RCE process accessible to all educators.
In my experience, I think we need to do a better job as a research community of avoiding technical language and jargon, because that language makes research feel inaccessible. Our goal here at LearnPlatform is to empower districts to understand research foundations, to equip them to run RCEs that answer questions of interest to them (not to researchers or providers), and to enable them to analyze and report on RCEs in LearnPlatform so they can take that data and make decisions. We are here to support districts every step of the way.
And if anyone wants to dig a little deeper, that’s what EdTech Evidence Fellows is for: a four-month, asynchronous online professional development course we offer that builds confidence and knowledge around RCEs and lets participants directly apply what they learn using their district’s own data.
Okay, getting back to hurdles. I’ve had districts tell me that they feel that if their data isn’t complete or perfect, then they cannot do the work… which is not true! The point is for RCEs to be practical and timely to inform decision-making, and there are many ways to construct evaluations that will result in more knowledge than before. I tell people: forget what you think you know about research and evaluation. We’re here to empower districts, and that’s really the most exciting thing for me about this work.
AMANDA: I can totally relate to this. When I first began engaging with research, it was in a more traditional setting, and I thought I had to work within those structures, which are often far more rigid, in order to get any kind of valid result. What I have learned and experienced so many times now is that you can still get usable evidence from rapid-cycle evaluation, often in a time frame that lets you help students while they are still in that year’s classroom. That is powerful, and it really is more attainable than it seems.
Empowering district teams to know what data they can ask their providers for, and how to use it, is another piece of this work that is so important. Without that data, districts can’t fully understand how their priority edtech tools or other programs are being implemented. One of the most meaningful conversations I can have with a district or provider is making sure we know what the data means and how best to use it. That isn’t something most of us learn in education or administrator preparation programs. Edtech providers are partners in this work, and they know their platforms and data best.
The last thing I wanted to mention is the importance of collaboration and teamwork. I think people get stuck trying to do this alone – don’t do that! A team effort will include curriculum and instruction, budget/finance, school leaders, tech directors and district executive leadership. Having a team that understands the purpose of this type of evaluation and how to message or share results is also important.
Ready to chat about evidence and how to get started in your education organization? Schedule a demo.