Edujournalism and Eduresearch Too Often Lack Merit

Paul Thomas
3 min read · Apr 25, 2017


What do Marta W. Aldrich's "Teacher merit pay has merit when it comes to student scores, analysis shows" and Matthew G. Springer's "Teacher Merit Pay and Student Test Scores: A Meta-Analysis" have in common?

Irony, in that they both lack merit.

Let’s be brief but focus on the nonsense.

Well, as Aldrich reports on Springer's research, a meta-analysis (research-speak that is supposed to strike fear into everyone, since it is an analysis of much if not all of the existing research on a topic; thus, research about research), we have now discovered that merit pay in fact works! You see, it causes [insert throat clearing] "academic increase … roughly equivalent to adding three weeks of learning to the school year, based on studies conducted in U.S. schools, and four weeks based on studies across the globe."

Wow! Three to four weeks of learning. That is … nonsense.

So here are the problems with our obsession with the hokum that is merit pay.

First, for merit pay to create greater student learning, we have to have a metric for student learning that is quantifiable and thus manageable. Herein is the foundational problem, since all of these studies use high-stakes test scores as proof of student learning.

This is a problem since standardized testing is at best reductive, asking very little of students, and is far more efficient than credible.

Next, very few people ever question this whole “weeks (or months) of learning” hokum — which is a cult-of-proficiency cousin of the reading grade level charade.

Researchers should explain to everyone that "weeks of learning" can often amount to a difference of a question or two on any given test. In short, it is a conversion that can be done statistically but that means almost nothing in reality. Three to four weeks out of a 36-week academic year.
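To show how thin that conversion is, here is a minimal sketch of the arithmetic typically behind "weeks of learning" claims: divide an effect size (in standard deviations) by an assumed average annual gain, then scale by the length of the school year. The 0.035 effect is the U.S. figure Mark Weber cites below; the 0.4 SD annual gain is an illustrative assumption, not a number from the meta-analysis.

```python
# A rough sketch of how "weeks of learning" claims are typically produced:
# divide an effect size (in standard deviations) by an assumed average
# annual gain, then scale by the length of the school year.
# The 0.4 SD annual-gain figure is an illustrative assumption,
# NOT a number taken from Springer's meta-analysis.

effect_size_sd = 0.035        # reported U.S. effect (see Weber's tweet below)
assumed_annual_gain_sd = 0.4  # hypothetical average yearly growth on the test
school_year_weeks = 36        # length of the academic year

weeks_of_learning = (effect_size_sd / assumed_annual_gain_sd) * school_year_weeks
print(f"{weeks_of_learning:.1f} 'weeks of learning'")  # roughly three weeks
```

A couple of hidden assumptions, and a statistically tiny effect becomes a headline about "weeks of learning."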

Finally, and this is hugely important, merit pay linked to standardized test scores that are codified as proof of student learning necessarily reduces all teaching and learning to test prep, and it fails due to Campbell's Law:

The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.

Notice here “corruption” and “corrupt.” Merit pay is guaranteed to corrupt the evidence and the entire teaching/learning process.

Similar to the obsession with choice and competition, the media and research fetish for merit pay is mostly about ideology — some believe outcomes are mostly about effort (thus, teachers are lazy) and are committed to merit pay regardless of the evidence or the unintended consequences.

As Mark Weber tweeted about the claims of the study:

Jersey Jazzman @jerseyjazzman

@plthomasEdD Effect is 0.035 in the US: moves students from 50 to 51 percentile. Absurd to portray as meaningful.

6:59 AM — 24 Apr 2017

“Absurd” here seems to be an understatement, but, yes, this reporting and this meta-analysis are themselves without merit, yet another example of the folly that is edujournalism and edureform in the U.S.
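Weber's arithmetic is easy to check. A minimal sketch, assuming test scores are approximately normally distributed: a student at the 50th percentile who gains 0.035 standard deviations lands at roughly the 51st percentile.

```python
# Converting the reported U.S. effect size (0.035 standard deviations) into
# percentile terms, assuming an approximately normal score distribution.
from scipy.stats import norm

effect_size_sd = 0.035
new_percentile = 100 * norm.cdf(effect_size_sd)  # student who started at the 50th
print(f"A 50th-percentile student moves to about the {new_percentile:.1f}th percentile")
# Prints roughly 51.4 -- consistent with Weber's "50 to 51 percentile"
```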


Written by Paul Thomas

P. L. Thomas, Professor of Education, Furman University, taught high school English before moving to teacher education. https://radicalscholarship.wordpress.com/
