A Practical Pathway to Building Evidence: One Education Researcher’s Perspective

    When I first started using instructional technologies in my own classroom well over a decade ago, I was drawn to dynamic geometry software environments. My students loved learning geometry visually and were thoroughly engaged. I had a nagging thought, though, that my tech-based lessons were not quite translating into “success” on our end-of-unit tests or standardized assessments. Perhaps our typical assessments were not assessing the skills and concepts embodied in my geometry lessons, but, at the end of the day, those were the instruments schools used to measure proficiency… that’s a conversation for another blog.

    After a year of using my favorite software, my head of department asked me, “Does the software work?” I didn’t have a straight answer. In some ways, that question launched my research career. I started a journey studying teachers’ perspectives on using dynamic geometry software, later working as a doctoral student on a team that developed and commercialized diagnostic assessment software for middle school math. More recently (and serendipitously), that journey has landed me in a role where I work with edtech providers to ask and answer the same (but refined) question about their own learning solutions: “Does it work, for whom, and under what conditions?”

    Educational evaluation studies have typically been multi-year, costly undertakings. There is a place for such studies in university-affiliated research programs, because they yield much more than effectiveness findings – they also help grow knowledge around content domains, student learning, and professional learning. For most commercially developed edtech learning solutions, however, and especially for those in the early stages, this is not a viable approach to effectiveness research. Both the form and function of edtech learning solutions often evolve within a year, as most providers adopt agile software development methodologies to meet the needs of end users (Confrey, 2019). On the K-12 administration side, it is neither cost-effective nor viable to make purchasing decisions based on obsolete evidence that is likely irrelevant to a district’s situational context. So, while various stakeholders in education continue to rally around and demand evidence-based edtech learning solutions, the means of building this rigorous evidence base remains an open challenge.

    At LearnPlatform, we launched Evidence-as-a-Service, a suite of third-party research services that follows this continuum of evidence to meet market demands, adopting a responsive and agile approach to effectiveness research that aligns with the nature of edtech development.

    The base subscription centers on a collaboration with the provider to review the foundational research behind their learning solution and articulate its logic model, or theory of change. Think of a logic model as a product roadmap or a high-level program overview – it identifies how the learning solution aims to reach its final destination, translating inputs or investments into measurable activities that lead to expected results.

    Providers are expected to have a clearly defined logic model and a planned research design in order to meet the Every Student Succeeds Act (ESSA) Level IV evidence standard (Demonstrates a Rationale). The logic model defines inputs, core components or activities, and expected improvements in outcomes, allowing researchers to design an aligned research study that meets ESSA Level I, II, or III standards. Getting clear on usage metrics and targeted learning outcomes is an important first step toward ensuring that any effectiveness study built upon the logic model is well defined and set up to succeed.
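    To make that concrete, the sketch below (a purely illustrative Python snippet – the product details, inputs, activities, and metrics are hypothetical rather than drawn from any real provider) shows the kind of structure a logic model articulates and why clear usage metrics matter:

        # A minimal, purely illustrative sketch of what a logic model articulates.
        # Every input, activity, and metric below is hypothetical; a real logic model
        # is co-developed with the provider from their foundational research.
        logic_model = {
            "inputs": ["teacher onboarding", "student licenses", "classroom devices"],
            "activities": ["three practice sessions per week", "weekly teacher dashboard check-ins"],
            "outputs": ["minutes of practice logged", "lessons completed"],   # usage metrics
            "outcomes": ["growth on end-of-unit geometry assessments"],       # targeted learning outcomes
        }

        # A well-defined logic model lets researchers confirm, before a study is designed,
        # that every expected outcome can be traced back to measurable activity and usage data.
        for outcome in logic_model["outcomes"]:
            print(f"'{outcome}' should be traceable to usage metrics: {logic_model['outputs']}")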

    The next two subscription levels, Evidence and Replication, are designed to establish and grow a body of evidence for a learning solution through full research study design, execution, and reporting for promising or moderate evidence (ESSA Levels III and II) in different settings.

    For providers that have not had the opportunity to conduct any research on their learning solution, we first recommend a Level III study. In our team’s combined experience, we have seen numerous examples of why use of a product does not necessarily translate to the intended outcome(s) right off the bat; more often than not, unforeseen implementation issues arise, rendering the tool less effective than expected. At other times, we uncover challenges with the learning solution itself that providers need to address before the intended outcomes can be realized.

    It is important to address these challenges early on, before conducting, for example, a quasi-experimental study or a randomized controlled trial. Level III studies in two local contexts lay the necessary groundwork for more rigorous studies down the line. For providers with a more established research base, the Evidence and Replication packages offer opportunities to conduct research in demographic settings dissimilar to those of prior studies and to demonstrate the replicability of results.

    Ready to learn more about our services for solution providers? Join our next information session or contact a member of our team here.
