Measures of school performance—whether used for high-stakes accountability or for lower-stakes diagnostic purposes—have become increasingly complex and technical over the past three decades. State accountability systems that initially examined only student proficiency rates in reading and math now incorporate a wide range of additional measures, including achievement in other subjects, graduation rates, chronic absenteeism, school climate, postsecondary readiness and enrollment, and student achievement growth (or school value-added) alongside proficiency. Meanwhile, many local school districts and charter school authorizers have their own approaches to measuring school performance, which are often equally complex and multidimensional.
There are good reasons for the increasing complexity of school performance frameworks. Schools aim to promote student knowledge in more than just reading and math. Growth and value-added metrics capture a school's contribution to student achievement, helping to level the playing field among schools serving widely varied student populations. Graduation and postsecondary enrollment rates are important signals of students' preparation for work and life. Students' and teachers' perceptions of school climate provide a valuable window into the learning environment. In short, all of these additions to accountability and performance management systems seek to make the systems fairer and more comprehensive.
Even so, the added complexity of these systems increases the risk that they will produce unintended consequences that go unnoticed. When the District of Columbia's Public Charter School Board (PCSB) sought to revise its framework of performance measures used to evaluate DC's 135 public charter schools, PCSB recognized this risk and consulted REL Mid-Atlantic for analytic support.
Over the past three years, PCSB has been working, in collaboration with charter school leaders and other constituents, to develop its new Annual School Performance Index Report and Evaluation (ASPIRE) framework. REL staff reviewed literature on school performance measures, participated in meetings, and developed presentations to help ensure that the component measures included in the system would be valid, reliable, and robust.
- Valid measures can support the inferences drawn from them: They actually measure what they claim to measure.
- Reliable measures are stable: They do not bounce unpredictably due to random error.
- Robust measures remain valid even after they become consequential: They are not easily manipulated by gaming strategies.
The REL's advice to PCSB was informed by years of work with state and local agencies under the aegis of its community of practice on accountability and school performance measures. That work led to the development of a comprehensive framework for understanding school performance, focused on student outcomes, school impacts on student outcomes, and processes inside schools.
As PCSB considered options for the ASPIRE framework, REL staff provided input on validity, reliability, and robustness.
For example, the REL encouraged PCSB to gauge the reliability of each proposed school performance measure by applying it to historical data and examining the correlation within schools from year to year, as sketched below.
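To make that check concrete, here is a minimal sketch of one way to compute year-to-year correlations for a school-level measure using pandas. The data layout, column names, and function are illustrative assumptions, not PCSB's or the REL's actual tooling.

```python
# Hypothetical sketch of the year-to-year reliability check described above.
# Assumes a DataFrame with one row per school per year and a column holding
# the proposed performance measure; all names here are illustrative.
import pandas as pd

def year_to_year_correlation(df: pd.DataFrame,
                             school_col: str = "school_id",
                             year_col: str = "year",
                             measure_col: str = "measure") -> pd.Series:
    """Correlate each school's measure with its own value the following year.

    Returns one Pearson correlation per adjacent pair of years. Values near 1
    suggest a stable (reliable) measure; low values suggest the measure
    bounces from year to year due to random error.
    """
    # Reshape to one row per school and one column per year.
    wide = df.pivot(index=school_col, columns=year_col, values=measure_col)
    years = sorted(wide.columns)
    corrs = {}
    for y1, y2 in zip(years, years[1:]):
        # Correlation across schools for adjacent years, skipping schools
        # that are missing a value in either year.
        corrs[f"{y1}-{y2}"] = wide[y1].corr(wide[y2])
    return pd.Series(corrs, name="year_to_year_r")

# Usage with fabricated data (illustration only):
# df = pd.DataFrame({"school_id": [...], "year": [...], "measure": [...]})
# print(year_to_year_correlation(df))
```

A measure whose correlations hover well below, say, 0.5 would be a candidate for smoothing (for example, multi-year averaging) before being given weight in an accountability index.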
Continue reading on the REL Mid-Atlantic blog.