Congress has spent billions to give schools access to technology and online learning opportunities. But research on the effectiveness of using technology in the classroom has lagged behind technology's growth.
Mathematica’s National Study of the Effectiveness of Educational Technology Interventions, funded by the U.S. Department of Education's Institute of Education Sciences, was a rigorous evaluation of the efficacy of technology applications designed to improve student learning in math and reading in grades K-12. The study, which found few impacts on achievement, assessed the effects of four reading products—Destination Reading, Headsprout, Plato Focus, and Waterford Early Reading Program—on reading achievement in first grade, and of two products—Academy of Reading and LeapTrack—on reading achievement in fourth grade. It also examined the effects of two math products—Achieve Now and Larson Pre-Algebra—on math achievement in sixth grade, and of two high school algebra products—Cognitive Tutor Algebra I and Larson Algebra I—used mostly in ninth grade.
A classroom-level random assignment design was used to estimate impacts. The study recruited a geographically diverse set of 36 districts and 132 schools in both urban and rural settings, focusing on schools that served low-income students, were interested in implementing one of the interventions, had the technology infrastructure to support it, and were able to implement random assignment at the relevant grade level.
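To make the design concrete, the sketch below shows how an impact estimate under classroom-level random assignment might be computed; the data, group sizes, and score scale are hypothetical and are not drawn from the study.

```python
import numpy as np

# Hypothetical illustration of a classroom-level random assignment design:
# classrooms in a grade are randomly assigned either to use a software
# product (treatment) or to continue regular instruction (control), and the
# impact is the difference in average outcomes between the two groups.
rng = np.random.default_rng(0)

treated_classroom_means = rng.normal(loc=50.0, scale=5.0, size=30)  # 30 treated classrooms
control_classroom_means = rng.normal(loc=50.0, scale=5.0, size=30)  # 30 control classrooms

# Impact estimate: treatment-control difference in mean classroom test scores.
impact = treated_classroom_means.mean() - control_classroom_means.mean()

# Standard error treating the classroom as the unit of analysis, which is
# consistent with assignment (and hence independence) at the classroom level.
se = np.sqrt(
    treated_classroom_means.var(ddof=1) / treated_classroom_means.size
    + control_classroom_means.var(ddof=1) / control_classroom_means.size
)

print(f"Estimated impact: {impact:.2f} score points (SE {se:.2f})")
print(f"Approximate 95% confidence interval: "
      f"[{impact - 1.96 * se:.2f}, {impact + 1.96 * se:.2f}]")
```

In practice the study's analysis would work with student-level data and adjust for clustering and covariates; the point of the sketch is only that random assignment allows a simple treatment-control contrast to serve as the impact estimate.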
Two types of student outcome measures were used. The first assessed reading and math achievement. The second drew on school records and included outcomes such as student attendance and promotion to the next grade. The study also examined the conditions and practices under which educational technology is effective in the participating schools. The Year 1 sample included 526 teachers and slightly fewer than 12,000 students.
Findings
Key findings about one-year effects showed that:
- On average, after one year, products did not increase or decrease test scores by amounts that were statistically different from zero.
- For reading products, effects on overall test scores were correlated with the student-teacher ratio in first-grade classrooms and with the amount of time that products were used in fourth-grade classrooms.
- For math products, effects were uncorrelated with classroom and school characteristics.
Findings that compared first- and second-year effects provided mixed support for the hypothesis that an additional year of experience using the software products improves their effects on test scores (a sketch of how such year-to-year differences can be tested follows the list):
- For the grades 1 and 4 reading products, the first- and second-year effects did not differ by an amount that was statistically significant.
- For the grade 6 math products, the effect was negative in both years and more negative in the second year than in the first, and the difference between the two years' effects was statistically significant.
- For the algebra products, the effect in the second year was positive whereas the effect in the first year was negative, and the difference between these effects was statistically significant.
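The year-to-year comparisons above rest on testing whether two effect estimates differ by more than chance alone would explain. A minimal sketch of such a test, assuming the two yearly estimates are independent and approximately normal (the study's actual procedure is not described here), is shown below; the effect sizes and standard errors are illustrative only.

```python
import math

def difference_test(effect_y1, se_y1, effect_y2, se_y2):
    """Test whether two independently estimated effects differ.

    Hypothetical sketch: treats the two yearly estimates as independent
    and normally distributed, which may not match the study's actual
    samples or estimation methods.
    """
    diff = effect_y2 - effect_y1
    se_diff = math.sqrt(se_y1 ** 2 + se_y2 ** 2)
    z = diff / se_diff
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return diff, se_diff, z, p_value

# Illustrative numbers only, not the study's estimates.
diff, se_diff, z, p = difference_test(effect_y1=-0.05, se_y1=0.03,
                                      effect_y2=0.06, se_y2=0.04)
print(f"Year 2 minus Year 1: {diff:.2f} (SE {se_diff:.2f}), z = {z:.2f}, p = {p:.3f}")
```

A statistically significant difference of this kind is what distinguishes the math and algebra findings from the reading findings, where the first- and second-year effects were statistically indistinguishable.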