Thursday, August 22, 2013

What Do We Know About Using Value-Added to Compare Teachers Who Work in Different Schools?

by Stephen Raudenbush, Chair, Committee on Education, University of Chicago, Carnegie Knowledge Network: http://www.carnegieknowledgenetwork.org

Highlights
  • Bias may arise when comparing the value-added scores of teachers who work in different schools
  • Some schools are more effective than others by virtue of their favorable resources, leadership, or organization; we can expect that teachers of similar skill will perform better in these more effective schools
  • Some schools have better contextual conditions than others, providing students with more positive influences - peers who benefit from safe neighborhoods and strong community support. These conditions may facilitate instruction, thus tending to increase a teacher’s value-added score
  • Value-added models control statistically for student background and previously demonstrated student ability. But these controls tend to be ineffective, and possibly even misleading, when we compare teachers whose classrooms vary greatly by these factors
  • There are several methods for checking the sensitivity of value-added scores to school variation in contextual conditions and student backgrounds
  • If value-added scores are sensitive to these factors, we can revise the analysis to ensure that the classrooms being compared are similar on measures of student background and school composition, thus reducing the risk of bias

Introduction

This brief considers the problem of using value-added scores to compare teachers who work in different schools. My focus is on whether such comparisons can be regarded as fair, or, in statistical language, “unbiased.”

An unbiased measure does not systematically favor teachers because of the backgrounds of the students they are assigned to teach, nor does it favor teachers working in resource-rich classrooms or schools. A key caveat: an unbiased measure is not necessarily an accurate one. An unbiased measure could still be imprecise - and thus inaccurate - if, for example, it is based on a small sample of students or on a test with too few items. I will not consider the issue of statistical precision here, having considered it in a previous brief.[1]

This brief focuses strictly on the bias that may arise when comparing the value-added scores of teachers who work in different schools.

Challenges That Arise in Comparing Teachers Who Work in Different Schools

In a previous brief, Goldhaber and Theobald showed that how teachers rank on value-added can depend strongly on whether they are compared to colleagues working in the same school or to teachers working in different schools.[2]

This discrepancy by itself does not mean that between-school comparisons are biased. However, previous literature identifies three unique challenges that arise in comparing teachers who work in different schools, and each brings a risk of bias.

First, some schools are more effective than others by virtue of their favorable resources, leadership, or organization. We can expect that teachers of similar skill will perform better in these more effective schools.

Second, some schools have more favorable contextual conditions than others, providing students with more favorable peers - those who benefit from strong community support and neighborhood safety. These contextual conditions may facilitate instruction, thus tending to increase a teacher’s value-added score.

Third, value-added models use statistical controls to make allowances for students’ backgrounds and abilities. These controls tend to be ineffective, and possibly misleading, when we compare teachers whose classrooms vary greatly in the prior ability or other characteristics of their students.

This problem can be particularly acute when we compare teachers in different schools serving very different populations of students.

It can also arise when we compare teachers who work in the same school but who serve very different sub-populations, as with teachers in high schools in which students are tracked by ability.[3]
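To make the third challenge concrete, here is a minimal sketch of a covariate-adjusted value-added regression in Python. It is only an illustration of the general approach, not the specific model discussed in this brief, and the data file and column names (score, prior_score, frpl, teacher) are hypothetical.

# Minimal sketch of a covariate-adjusted value-added model (illustrative only;
# the file and column names are hypothetical).
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("student_records.csv")  # hypothetical student-level file

# Regress the current-year score on the prior-year score, a background measure
# (free or reduced-price lunch status), and teacher indicators.
model = smf.ols("score ~ prior_score + frpl + C(teacher)", data=data).fit()

# The teacher coefficients serve as value-added scores, measured relative to
# the omitted reference teacher.
teacher_effects = model.params.filter(like="C(teacher)")
print(teacher_effects.sort_values())

Controls of this kind adjust for differences among individual students, but, as noted above, they can fail when the classrooms being compared differ greatly on the very characteristics being adjusted for.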

After describing each of these challenges, I consider ways to check the sensitivity of value-added scores to variations in schools’ contextual conditions and students’ background.
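One simple check, sketched below with the same hypothetical data, is to estimate value-added twice - once with student-level controls only and once adding a school composition measure - and see how much teacher rankings move. The sketch uses a simplified two-step version of value-added (adjust for covariates, then average residuals by teacher), since a school-level measure cannot be separated from teacher indicators when each teacher works in a single school. A low rank correlation would signal sensitivity to school context.

# One possible sensitivity check (illustrative only; variable names hypothetical).
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import spearmanr

data = pd.read_csv("student_records.csv")  # hypothetical student-level file
data["school_mean_prior"] = data.groupby("school")["prior_score"].transform("mean")

# Two-step value-added: adjust scores for covariates, then average the
# residuals for each teacher.
def value_added(formula):
    residuals = smf.ols(formula, data=data).fit().resid
    return residuals.groupby(data["teacher"]).mean()

va_student_only = value_added("score ~ prior_score + frpl")
va_with_context = value_added("score ~ prior_score + frpl + school_mean_prior")

# A low rank correlation signals that teacher scores shift with school context.
rho, _ = spearmanr(va_student_only, va_with_context)
print(f"Rank correlation across specifications: {rho:.2f}")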

If value-added scores are sensitive to these factors, we can revise the analysis to ensure that the classrooms being compared are similar on measures of student background and school composition, thus reducing the risk of bias.

In this revised analysis, the aim is to compare teachers who work with similar students in similar schools.
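As one illustration of what such a revision might look like, the sketch below (again with hypothetical variable names, and not a procedure prescribed by this brief) groups schools into bands on a composition measure and estimates value-added separately within each band, so that each teacher is compared only to colleagues serving similar student populations.

# Sketch of a restricted comparison (illustrative only; names hypothetical).
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("student_records.csv")  # hypothetical student-level file
data["school_mean_prior"] = data.groupby("school")["prior_score"].transform("mean")

# Band schools into quartiles of mean prior achievement, then estimate
# value-added separately within each band.
data["school_band"] = pd.qcut(data["school_mean_prior"], q=4, labels=False)

within_band_scores = {}
for band, subset in data.groupby("school_band"):
    fit = smf.ols("score ~ prior_score + frpl", data=subset).fit()
    within_band_scores[band] = fit.resid.groupby(subset["teacher"]).mean()

The choice of four bands is arbitrary: narrower bands make the comparison groups more homogeneous, but they leave fewer teachers in each group and so make the resulting scores less precise.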

Policymakers may debate the utility of such comparisons, but they are better supported by the available data than comparisons between teachers who work with very different subsets of students under different conditions, and they are therefore less vulnerable to bias.

To read further, go to: http://www.carnegieknowledgenetwork.org/briefs/comparing-teaching/