![mastery connect data](/sites/default/files/styles/medium/public/image/2025-02/blogs_7.png?itok=_HyJ7Unz)
To make broad claims about Mastery Predictive Assessments and Mastery Connect, we must establish that the research studies' findings are generalizable to other contexts. Generalizability means that the conclusions from research conducted on a sample can be applied to the population at large (Polit & Beck, 2010).
Generalizability also allows us to predict the likelihood that something will recur based on the frequency of past occurrences. Our research team has strategically developed a body of studies with generalizability in mind to ensure the findings from our extant research apply to other contexts.
There are three requirements for establishing generalizability that we will explore together (Runkel & McGrath, 1972):
- Consistency in Different Places: The same results should be seen when the study is repeated in different locations.
- Consistency with Different Tools: The same results should be seen when using different methods or tools.
- Strong Sample Group: The study should be done with a diverse and representative group of people.
Let’s take a look at each of these.
## Consistency in Different Places
The first requirement is that the treatment (in our case, using Mastery Predictive Assessments or Mastery Connect) will produce the same results in other situations or contexts. To establish this, studies must be replicated in different settings with unique characteristics.
Evidence of replication:
- We replicated predictive validity studies showing strong correlations and high classification accuracy between Predictive Assessments and state standardized assessments across eight districts in four states over two years.
- We replicated efficacy studies (i.e., ESSA Level III) demonstrating that greater use of Mastery assessments (i.e., Predictive, formative, and individual items) was associated with increased state test scores across six districts in four states.
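As an illustrative sketch of how predictive validity of this kind is typically quantified, the snippet below computes a Pearson correlation between benchmark and state-test scores, plus classification accuracy against proficiency cut scores. The data, cut scores, and function names here are hypothetical examples, not figures from the studies above.

```python
from statistics import mean, pstdev

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

def classification_accuracy(pred_scores, state_scores, pred_cut, state_cut):
    """Share of students whose predicted proficiency status (at/above
    pred_cut) matches their state-test status (at/above state_cut)."""
    matches = sum(
        (p >= pred_cut) == (s >= state_cut)
        for p, s in zip(pred_scores, state_scores)
    )
    return matches / len(pred_scores)

# Hypothetical benchmark and state-test scores for eight students
benchmark = [62, 71, 55, 80, 68, 90, 45, 77]
state     = [60, 75, 50, 82, 63, 88, 48, 74]

r = pearson_r(benchmark, state)
acc = classification_accuracy(benchmark, state, pred_cut=65, state_cut=65)
# One student (benchmark 68, state 63) is misclassified, so acc = 7/8
```

In a real validity study, the cut scores would come from the state's published proficiency standards rather than an arbitrary threshold.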
## Consistency with Different Tools
The next requirement for generalizability is that the same results are found using different instruments. Our studies meet this requirement in two ways.
First, districts use Mastery assessments based on their local curriculum and instructional pacing. Because of this, the predictor measures (the Mastery assessments themselves) varied across the studies but still showed the same validity.
Second, the outcome measures, state standardized tests, varied since the studies were conducted across four states.
## Strong Sample Group
Finally, it is important to establish that the sample is relatively large and representative of the population (Firestone, 1993). A minimum sample size of 400 is needed for robust estimations of generalizability (Atilgan, 2013).
Sample sizes:
- Predictive validity studies included samples of 79,446 students for studies conducted in 2022-23 and 81,697 students for studies conducted in 2023-24.
- Efficacy studies included a sample of 74,906 students.
- The student samples came from urban, suburban, and rural districts, ranging from 500 to 78,000 students.
Furthermore, the samples represented diverse populations with respect to race/ethnicity, socioeconomic status, and special education status. Importantly, the study results showed no notable differences by students' demographic characteristics.
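One common way to check a claim like this is to compare classification accuracy across demographic subgroups and confirm the rates are comparable. The sketch below is a minimal illustration with made-up records and subgroup labels, not data from the studies described here.

```python
from collections import defaultdict

# Hypothetical records: (subgroup label, predicted proficient?, actually proficient?)
records = [
    ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, False), ("group_a", True, True),
    ("group_b", True, True), ("group_b", False, False),
    ("group_b", False, True), ("group_b", True, True),
]

# Collect, per subgroup, whether each prediction matched the outcome
by_group = defaultdict(list)
for group, predicted, actual in records:
    by_group[group].append(predicted == actual)

# Classification accuracy per subgroup
accuracy = {g: sum(hits) / len(hits) for g, hits in by_group.items()}
# Similar accuracy across subgroups is one sign the predictions are
# not systematically biased toward or against any group.
```

Real subgroup analyses would also test whether any differences are statistically meaningful rather than comparing raw rates alone.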
Because our research studies meet all three criteria, the findings from both the predictive validity and efficacy studies can be generalized to all populations of students. We can confidently state that:
1. Mastery Predictive Assessments are an accurate predictor of student performance for all subject areas, grade levels, and student populations regardless of race, ethnicity, age, or individual ability.
2. More use of Mastery assessments via Mastery Connect is associated with increased student performance.