MATH 180: Hardin County Schools
At a glance
  • Moderate Evidence
  • Program: MATH 180®
  • Subjects: Intervention, Math
  • Report Type: Efficacy Study, Study Conducted by Third Party
  • Grade Level: Middle
  • Region: Southeast
  • District Urbanicity: Suburban
  • District Size: Large
  • District: Hardin County Schools, KY
  • Outcome Measure: MATH 180 course software use, Math Inventory, NWEA MAP
  • Evaluation Period: 2016–2017 school year
  • Study Conducted by: RMC Research
The Challenge
Hardin County Schools recognized that students struggling in mathematics needed additional support. It was clear that this support would be effective only if it addressed the needs of both the students and their teachers.

Hardin County Schools in Kentucky was implementing the MATH 180 program with 212 students in 16 of its schools. MATH 180 was being used to assist students in Grades 5 and up who were struggling with multiplication, division, fractions, and decimal operations. The district was interested in understanding the effects of the program on student growth in mathematics. The study involved the 212 students who participated in MATH 180 in the 2016–2017 school year and 212 matched comparison students from the same schools. The study sought to answer the following research questions:

1. What are the effects of MATH 180 on student mathematics achievement?

2. How does MATH 180 differentially affect subgroups of students?

3. What is the association between student mathematics achievement and program implementation—are changes in MATH 180 participants’ mathematics test scores associated with variation in program implementation?

The Solution

MATH 180 is an intervention program for struggling students in Grades 5 and up. MATH 180 Course I focuses on rebuilding students’ understanding of multiplicative thinking, division, fractions, and decimals as students progress toward algebra readiness. Built around nine blocks of instruction, each covering three topics, MATH 180 uses a blended learning model of instruction to build reasoning and elicit student thinking. In this blended model, students rotate between teacher-facilitated instruction and personalized software that adapts to their needs.

The Study

The analytic sample included a total of 212 MATH 180 students in Grades 5–8 who received MATH 180 in 2016–2017 and 212 matched comparison students. RMC Research used propensity score matching to create a matched set of comparison students from a pool of 3,639 possible comparison students who did not receive MATH 180 but were eligible for the intervention. The purpose of using propensity score matching was to identify a sample of non-MATH 180 students that most closely resembled the MATH 180 students. Students were stratified by grade and matched on baseline NWEA® Measures of Academic Progress (MAP®) scores (averaged across the 2014–2015 school year) and four demographic variables:

• gender,
• race (White/non-White),
• special education status, and
• eligibility for free or reduced-price meals.
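The matching procedure described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not RMC Research's actual code): propensity scores are estimated with a simple logistic regression, and each MATH 180 student is greedily paired with the nearest-scoring unmatched comparison student.

```python
import numpy as np

rng = np.random.default_rng(0)

def propensity_scores(X, y, steps=2000, lr=0.1):
    """Fit a logistic regression by gradient descent; return P(treated | X)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return 1 / (1 + np.exp(-Xb @ w))

def nearest_neighbor_match(ps, treated):
    """Greedy 1:1 nearest-neighbor matching on the propensity score."""
    t_idx = np.flatnonzero(treated)
    c_idx = list(np.flatnonzero(~treated))
    pairs = []
    for i in t_idx:
        j = min(c_idx, key=lambda k: abs(ps[i] - ps[k]))
        pairs.append((i, j))
        c_idx.remove(j)  # matching without replacement
    return pairs

# Toy data: 5 treated students, 20 eligible comparisons; columns stand in
# for the baseline MAP score and demographic covariates (standardized).
X = rng.normal(size=(25, 4))
treated = np.arange(25) < 5
ps = propensity_scores(X, treated.astype(float))
pairs = nearest_neighbor_match(ps, treated)
print(len(pairs))  # one matched comparison per MATH 180 student -> 5
```

In the study itself, matching was also stratified by grade; the sketch omits that step for brevity.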

The majority of participating students were White (75% of MATH 180 and 73% of comparison students) and eligible for free or reduced-price meals (71% of MATH 180 and 76% of comparison students). Almost one-quarter (23% of MATH 180 and 24% of comparison students) were special education students.

Baseline Equivalence

A baseline equivalence test conducted on the final analytic sample revealed baseline equivalence between the MATH 180 and comparison groups. Specifically, in the final analytic sample, no significant differences between groups existed on baseline assessment scores, gender, race, special education status, or eligibility for free or reduced-price meals. These results support the sampling goal of establishing a comparison student sample that resembles the MATH 180 student sample. Table 1 presents the baseline equivalence results.
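A common way to quantify baseline equivalence is the standardized mean difference (SMD) between groups on each matching variable. The sketch below uses invented baseline scores, not the study's data; the 0.25 threshold is one widely used convention (e.g., the What Works Clearinghouse).

```python
import math

def standardized_mean_difference(a, b):
    """SMD between two groups, using the pooled standard deviation."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    pooled_sd = math.sqrt((va + vb) / 2)
    return (ma - mb) / pooled_sd

# Hypothetical baseline MAP scores for the two matched groups.
math180 = [218.0, 221.0, 215.0, 224.0, 219.0]
matched = [217.0, 222.0, 216.0, 223.0, 220.0]
smd = standardized_mean_difference(math180, matched)
print(abs(smd) < 0.25)  # True: a small |SMD| supports baseline equivalence
```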

Students using MATH 180 demonstrated significantly greater gains on the NWEA MAP assessment in relation to a matched comparison group; this finding was more pronounced for students designated as SPED.
Results

What are the effects of MATH 180 on student mathematics achievement?

NWEA MAP Scores: Fall 2016 to Spring 2017

Overall effects
An analysis of covariance (ANCOVA) model was conducted to test whether MATH 180 had an effect on student gains on NWEA MAP scores. Exhibit 1 shows that Hardin County Schools’ MATH 180 students’ gains on NWEA MAP scores from fall 2016 to spring 2017 were significantly greater than comparison group gains (F = 13.08, p < .001).

EXHIBIT 1. NWEA MAP Scores

Note. MATH 180 n = 208; Comparison n = 207.
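The ANCOVA above tests the group effect on spring scores after adjusting for fall (pretest) scores. A minimal sketch, using simulated scores rather than the study's data: the group effect is tested by comparing the residual sum of squares of a model with and without the group indicator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated pre/post scores; an 8-point program effect is assumed for the toy.
n = 200
pre = rng.normal(220, 15, 2 * n)
group = np.repeat([1, 0], n)                       # 1 = MATH 180, 0 = comparison
post = pre + rng.normal(8, 10, 2 * n) + 8 * group

def ancova_F(post, pre, group):
    """F statistic for the group effect on post scores, adjusting for pretest."""
    def ssr(X):
        beta, *_ = np.linalg.lstsq(X, post, rcond=None)
        resid = post - X @ beta
        return resid @ resid
    ones = np.ones_like(pre)
    full = np.column_stack([ones, pre, group])
    reduced = np.column_stack([ones, pre])
    ssr_full, ssr_red = ssr(full), ssr(reduced)
    df = len(post) - full.shape[1]
    return (ssr_red - ssr_full) / (ssr_full / df)

F = ancova_F(post, pre, group)
print(f"F = {F:.1f}")  # well above the ~3.9 critical value at alpha = .05
```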

How does MATH 180 differentially affect demographic subgroups of students?

Demographic subgroup effects
Additionally, moderating effects were examined to determine whether MATH 180 was equally effective across student demographic groups. ANCOVA results revealed no interaction effects for race (White versus non-White), gender, or socioeconomic status. However, findings revealed a significant interaction of MATH 180 and special education status on NWEA MAP. Specifically, the magnitude of the effect of MATH 180 was significantly greater for students in special education than those who were not in special education (F = 4.94, p < .05).

Further analysis examined effects of MATH 180 for the following subgroups: special education students and non-special education students. Exhibit 2 shows that Hardin County Schools’ MATH 180 students’ gains on NWEA MAP scores from fall 2016 to spring 2017 were significantly greater than comparison group gains for students who were in special education (F = 12.45, p < .001) and for those who were not in special education (F = 4.01, p = .046).

EXHIBIT 2. NWEA MAP Scores by Special Education Status

Note. SPED MATH 180 n = 45; SPED comparison n = 49; non-SPED MATH 180 n = 161; non-SPED comparison n = 158.

What is the association between student mathematics achievement and program implementation? Are changes in MATH 180 participants’ mathematics test scores associated with variation in program implementation?

High versus low implementation subgroup effects
Subgroup analyses were conducted to examine differences between high MATH 180 implementers (i.e., students who were more engaged with the software) and low MATH 180 implementers (i.e., those who engaged less with the software). RMC Research created an implementation variable based on students’ number of sessions on the MATH 180 software. MATH 180 students who had 50 or more sessions during the 2016–2017 school year were classified as “high implementers,” and those with fewer than 50 sessions were classified as “low implementers.” Comparison students were assigned the same high or low implementation designation as their matched MATH 180 counterpart. Results showed that MATH 180 students who were high implementers exhibited significantly greater gains between fall 2016 and spring 2017 on NWEA MAP math scores than their comparison student counterparts (p < .001). However, MATH 180 students who participated in fewer than 50 sessions did not differ from their comparison group counterparts on NWEA MAP math score gains (p = .939). Exhibit 3 presents the fall 2016 and spring 2017 NWEA MAP math scores for each of these groups.

EXHIBIT 3. NWEA MAP Scores by Implementation Level

Note. Low implementer MATH 180 n = 42; low implementer comparison n = 42; high implementer MATH 180 n = 170; high implementer comparison n = 170.
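The classification rule described above is a simple threshold on session counts. A sketch with hypothetical student IDs and counts:

```python
# Students with 50 or more software sessions in the school year are
# "high implementers"; fewer than 50 are "low" (cutoff from the study).
SESSION_CUTOFF = 50

def implementation_level(sessions: int) -> str:
    return "high" if sessions >= SESSION_CUTOFF else "low"

sessions = {"s01": 72, "s02": 49, "s03": 50, "s04": 18}  # hypothetical counts
levels = {sid: implementation_level(n) for sid, n in sessions.items()}
print(levels)  # {'s01': 'high', 's02': 'low', 's03': 'high', 's04': 'low'}
```

Note that a student with exactly 50 sessions falls in the high group, since the study's rule is "50 or more."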

Further analyses examined whether an interaction effect between MATH 180 and implementation existed; that is, whether the difference in NWEA MAP math score gains between MATH 180 students and their comparison counterparts (the MATH 180 effect) was significantly greater for the high implementation group than for the low implementation group. Though the subgroup analyses showed a significant effect of MATH 180 on high implementation students but not on low implementation students, ANCOVA results indicated that the interaction (the difference in effect between the two groups) was not significant.

Math Inventory Scores: Fall 2016 to Spring 2017

Math Inventory gains
RMC Research conducted a paired t-test to evaluate the extent to which students’ scores on the Math Inventory® improved after participating in MATH 180. Exhibit 4 presents Math Inventory scores for MATH 180 students from fall 2016 to spring 2017.

On average, MATH 180 students who had a fall 2016 and spring 2017 Math Inventory score (n = 143) experienced statistically significant improvement on the Math Inventory between fall 2016 and spring 2017 assessments, t(142) = 13.80, p < .001, d = 1.15. After participating in MATH 180, students scored an average of 218.71 points higher on the Math Inventory, a large improvement.
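The paired t-test and effect size reported above can be computed as follows. The scores below are toy values, not the study's data; for paired scores, Cohen's d is taken here as the mean gain divided by the standard deviation of the gains.

```python
import math

def paired_t_and_d(fall, spring):
    """Paired t statistic and standardized mean gain (Cohen's d) for matched scores."""
    diffs = [s - f for f, s in zip(fall, spring)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in diffs) / (n - 1))
    t = mean / (sd / math.sqrt(n))
    d = mean / sd
    return t, d

# Hypothetical fall and spring Math Inventory scores for five students.
fall = [450.0, 480.0, 500.0, 520.0, 470.0]
spring = [600.0, 710.0, 690.0, 720.0, 700.0]
t, d = paired_t_and_d(fall, spring)
print(f"t = {t:.2f}, d = {d:.2f}")  # both large for these toy gains
```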

Conclusion

A sample of students in Grades 5–8 who received MATH 180 during the 2016–2017 school year in Hardin County Schools was compared to a statistically equivalent sample of students who did not participate in MATH 180 in the same district. The findings showed that MATH 180 students made significantly greater gains than the comparison students on NWEA MAP scores. Additional analyses of subgroup differences on NWEA MAP scores also revealed significant findings. Specifically, MATH 180 special education students made significantly greater gains on NWEA MAP than comparison special education students, and MATH 180 non-special education students made significantly greater gains on NWEA MAP than comparison non-special education students. Further, these MATH 180 effects were significantly greater for special education students than for non-special education students.

A similar subgroup analysis conducted on high versus low MATH 180 implementers, as defined by number of MATH 180 sessions, revealed a MATH 180 effect (i.e., a significant difference between MATH 180 and comparison students) on NWEA MAP score gains for high implementers but not for low implementers. A test of the interaction (the difference in MATH 180 effects between high and low MATH 180 implementers) was not significant. Analyses of Math Inventory scores were also conducted for MATH 180 students only and revealed that those students made significant gains on the Math Inventory between fall 2016 and spring 2017.

Note. Because comparison students did not complete a Math Inventory assessment, only MATH 180 students are included in the Math Inventory analysis.

Note. The average amount of improvement can be classified as large to very large based on the calculated effect size (d = 1.15; Cohen, 1988; Sawilowsky, 2009).

References
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

  • Sawilowsky, S. S. (2009). New effect size rules of thumb. Journal of Modern Applied Statistical Methods, 8(2), 597–599.