Dive Brief:
- In a new essay published in The Washington Post, Maja Wilson, author of “Rethinking Rubrics in Writing Assessment,” argues that automated essay-scoring programs miss the nuances of language, such as sarcasm, and are therefore an ineffective and unfair way to grade high-stakes exams.
- Wilson argues that emotional intelligence is necessary to accurately assess an essay, and that machines lack this capacity because it comes from "awareness of our own feelings."
- In March, testing consortium PARCC gave a practice writing exam to a million public school students, intending to use their responses to "train" an automated essay-scoring tool that will be deployed next year when the scored tests are rolled out.
Dive Insight:
For Wilson, PARCC's decision to use an automated essay-scoring tool is ridiculous. She writes, "In other words, programs that cannot read or understand text, sarcastic or not, will be used by the school reform movement to make high-stakes decisions about the futures of millions of students, teachers, and administrators."

The teacher educator and writing assessment expert is wary of PARCC's scope (14 states will use its assessments) and of how much student writing will be subjected to these automated scoring tools as a result.
The tools are not unique to PARCC. Many schools can already purchase writing programs that score students' work online, and Wilson's essay is a must-read for school leaders considering such services.
Recommended Reading:
The Washington Post: Sarcasm, Scarlett Johansson and why machines should never grade student writing