Can More Teachers Be Covered? The Accuracy, Credibility, and Precision of Value-Added Estimates with Proxy Pre-Tests

Working Paper 64
Published: Aug 16, 2018
Publisher: Princeton, NJ: Mathematica Policy Research
Associated Project

Value-Added Analysis Services for the State of Oklahoma

Time frame: 2013-2018

Prepared for:

State of Oklahoma


Dallas Dotter

Albert Y. Liu

Value-added models used for evaluating teachers typically rely on controls for previous-grade student achievement to isolate teachers’ contributions to students’ current achievement. As a consequence, these models are most commonly used for subjects in which students are tested in consecutive grades. However, states and districts have little information about how value-added models perform in grades when tests in the same subject are not available from the previous year. In those grades, proxy pre-tests—prior-grade test scores from other subjects—are often used as controls. Using proxy pre-tests could allow states and districts to increase the number of teachers for whom value-added models can be used (for example, by including science teachers, rather than only teachers of math and English language arts). In this paper, we use statewide data from Oklahoma to investigate whether value-added models that rely on proxy pre-tests can credibly, accurately (with limited bias), and precisely measure teachers’ contributions to student achievement. We find that not incorporating same-subject pre-tests affects value-added estimates much more than does excluding other student background characteristics. This difference appears to be a result of more bias in the proxy pre-test estimates rather than less precision. Despite evidence of bias, we discuss how these estimates may still reflect important information about a teacher’s effectiveness. We also note that empirical Bayes shrinkage, an approach typically used to address precision, might also be used to address bias so that value-added estimates that rely on proxy pre-tests can be given more appropriate weights in teachers’ evaluations.
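To make the shrinkage idea concrete, here is a minimal illustrative sketch of empirical Bayes shrinkage applied to noisy teacher value-added estimates. This is not the paper's actual estimator; it is a standard method-of-moments version in which each raw estimate is pulled toward the overall mean by a "reliability" factor that is smaller when that teacher's sampling error is larger. The function name and inputs are hypothetical.

```python
import numpy as np

def eb_shrink(estimates, std_errors):
    """Empirical Bayes shrinkage of noisy value-added estimates.

    estimates   : raw value-added point estimates, one per teacher
    std_errors  : their estimated standard errors

    Returns the shrunken estimates and each teacher's reliability,
    tau^2 / (tau^2 + se_j^2), where tau^2 is the estimated variance
    of true teacher effects.
    """
    estimates = np.asarray(estimates, dtype=float)
    se2 = np.asarray(std_errors, dtype=float) ** 2
    mu = estimates.mean()
    # Method-of-moments estimate of the variance of true effects:
    # total dispersion of the raw estimates minus the average
    # sampling variance (floored at zero).
    tau2 = max((estimates - mu).var(ddof=1) - se2.mean(), 0.0)
    reliability = tau2 / (tau2 + se2)
    return mu + reliability * (estimates - mu), reliability
```

A noisier estimate gets a lower reliability and is pulled further toward the mean, which is one way such estimates could be given more appropriate weight in an evaluation; the paper's point is that a similar down-weighting logic might also compensate for the extra bias introduced by proxy pre-tests.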
