This week The New York Times published rankings of 18,000 New York City teachers.
The ratings, known as teacher data reports, covered three school years ending in 2010, and are intended to show how much value individual teachers add by measuring how much their students’ test scores exceeded or fell short of expectations based on demographics and prior performance. Such “value-added assessments” are increasingly being used in teacher-evaluation systems, but they are an imprecise science. For example, the margin of error is so wide that the average confidence interval around each rating spanned 35 percentiles in math and 53 in English, the city said. Some teachers were judged on as few as 10 students.
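To see why judging a teacher on as few as 10 students produces such wide intervals, here is a toy simulation — not the city's actual model, just a minimal sketch assuming the score is a mean of per-student residuals (actual minus predicted test score) with a hypothetical noise level of 15 scale-score points:

```python
import random
import statistics

random.seed(1)

# Toy illustration: a teacher's "value-added" estimate is the mean of
# per-student residuals. The standard error of a mean shrinks with the
# square root of the sample size, so a 10-student class yields an
# interval roughly three times wider than a 100-student one.
def value_added_ci(residuals, z=1.96):
    n = len(residuals)
    mean = statistics.mean(residuals)
    se = statistics.stdev(residuals) / n ** 0.5
    return mean - z * se, mean + z * se

# Simulate a perfectly average teacher (true effect = 0) with
# student-level noise (sd = 15 points, an assumed figure).
small_class = [random.gauss(0, 15) for _ in range(10)]
large_class = [random.gauss(0, 15) for _ in range(100)]

lo_s, hi_s = value_added_ci(small_class)
lo_l, hi_l = value_added_ci(large_class)

# The same average teacher can look strong or weak purely by chance
# when the interval is this wide.
print(round(hi_s - lo_s, 1), round(hi_l - lo_l, 1))
```

The point is not the exact numbers but the scaling: small samples make the estimate mostly noise, which is exactly why the city's own confidence intervals spanned dozens of percentiles.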
Sound fair to you?
Evaluators deserve a failing grade on their "value-added" system, but it seems only teachers must be held accountable for the work they do. How do these geniuses rank teachers? And remember, this is a ranking system, not a scoring system. In ranking teachers, someone has to be at the bottom and someone at the top; everyone else falls in between. Ranking does not give teachers an A, B, C, D, or failing grade based on desired criteria. The late Gerald Bracey explained the difference in Some Common Errors in Interpreting Test Scores. Of course, that article discusses only the evaluation of students, schools, districts, and states. At the time it was written, Bracey could not have conceived of government officials ranking teachers with a system as draconian as value-added evaluations.
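Bracey's distinction can be shown with a hypothetical four-teacher example (the scores and the passing bar are made up for illustration): grading against a criterion can pass everyone, but percentile ranking must put someone last no matter how well the group performs.

```python
# Hypothetical scores, all above an assumed passing bar of 80.
scores = [88, 90, 91, 93]

# Criterion-referenced grading: judge each score against the bar.
grades = ["pass" if s >= 80 else "fail" for s in scores]

# Norm-referenced ranking: judge each score only against the others.
ranks = [sorted(scores).index(s) + 1 for s in scores]

print(grades)  # every teacher clears the bar
print(ranks)   # yet someone is still ranked last
```

Ranking is zero-sum by construction: even if every teacher in the city improved, a quarter of them would still land in the "bottom quartile."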
Accumulated here are articles explaining the ridiculous value-added system that has, for some inexplicable reason, gained legitimacy. Also included are responses from teachers and parents.
The teacher and the consultant
"Value-added measures" to judge a teacher's worth — what's that all about? If we would only listen to teachers.
Merit Pay, Teacher Pay, and Value Added Measures
Value-added measures sound fair, but they are not. In this video, Prof. Daniel Willingham describes six problems (some conceptual, some statistical) with evaluating teachers by comparing student achievement in the fall and in the spring.
Measuring student growth
Pearson wants to control the world's curriculum and testing. (Gag factor -- ipecac)
Using Teacher Evaluation to Improve School Performance
Within the first moments of the presentation, the presenter says, "We have no idea what good teaching looks like. We're not educators, we're economists." Then he goes on and on for nearly an hour to explain how to evaluate a good teacher. Jonah Rockoff, the Sidney Taurel Associate Professor of Business at Columbia Business School, and Douglas Staiger, the John French Professor in Economics at Dartmouth College, discuss research used to identify the effectiveness of teachers in achieving student outcomes by using a value-added approach, and the use of these measures for teacher evaluation and screening, in their presentations at the Social Enterprise Program's first annual Nonprofit Leadership Forum, Measuring and Creating Excellence in Schools.
Evaluating teacher evaluation... Critique of 'Value-Added' assessments (VAMs) shows that they are based on 'beliefs' rather than evidence
Illinois is rushing headlong into VAM (value-added modeling) for teacher assessment, behind the cheerleading of many of the AstroTurf "school reform" groups, ranging from Stand for Children to Advance Illinois. Under PERA, Chicago teachers will begin being evaluated using so-called "value-added" methods during the 2012-2013 school year. But as virtually all the credible research shows, VAM simply doesn't work!
NYC to Release Teachers' 'Value-Added' Ratings: Why It's Not Fair
Normally, I respect The Nation for its forward thinking, but it has taken a giant step backward in quoting Bill Gates and Wendy Kopp for their opinions on teacher evaluation. "For what it’s worth, I agree with Gates and Kopp: value-added is a promising tool, but must be further refined and deployed with extreme caution." Neither Gates nor Kopp has any education expertise. Period. What they do have is a huge stake in keeping the rankings from being disclosed. What if many Teach for America recruits are in the bottom percentiles? What if teachers at the charter schools, generously funded by Bill Gates, are in the lower percentiles? Of course, they don't want the rankings published.
In Teacher Ratings, Good Test Scores Are Sometimes Not Good Enough
The New York City Education Department on Friday released the ratings of some 18,000 teachers in elementary and middle schools based on how much they helped their students succeed on standardized tests. The ratings have high margins of error, are now nearly two years out of date and are based on tests that the state has acknowledged became too predictable and easy to pass over time.
You can’t principal-proof a school: Why top down evaluation systems are doomed to fail
"As everyone in the education world already knows, the New York Times won a lawsuit that forced the New York City Department of Education to publish the teacher-level value-added data it has been collecting as part of its accountability system. The result? The public unveiling of the work product of an expensive system that is confusing, unreliable—and apparently—error-riddled." Don't be fooled by the introduction: the Fordham Institute has a schizophrenic moment as it tries to rationalize value-added teacher evaluation.
Shame is not the solution
The height of hypocrisy. Bill Gates, and Bill Gates's billions alone, is responsible for the farcical evaluation system now being used to publicly persecute teachers. I suppose he figures that by pooh-poohing the publication of the scores, he will gain favor in the eyes of the public and perhaps even in the eyes of educators. Who says you can't have your cake and eat it, too?
City Teacher Data Reports Are Released
The reports are now available on SchoolBook, posted on the individual pages for the elementary and middle schools whose teachers’ ratings were released. You can search for a school by using the search module on the left.
Evaluating Value-Added Models for Teacher Accountability
Evidence that the idea has been around for a while: this 191-page document, published by the oh-so-conservative RAND Corporation in 2003, presents analyses of early value-added models.
Originally posted at Great Schools for America