Study: Computers nearly as good as teachers in grading English composition

Yet when it comes to English composition, the question of whether computer programs can reliably assess student work remains sticky. Sure, an automaton can figure out whether a student has correctly worked a math or science problem by reading symbols and ticking off a checklist, writing instructors say. But can a machine that cannot draw out meaning, and cares nothing for creativity or truth, really match the work of a human reader?

In the quantitative sense: yes, according to a study released Wednesday by researchers at the University of Akron. The study, funded by the William and Flora Hewlett Foundation, compared the software-generated ratings given to more than 22,000 short essays, written by junior high school students and high school sophomores, to the ratings given to the same essays by trained human readers.

The differences, across a number of brands of automated essay scoring (AES) software and a variety of essay types, were minute. “The results demonstrated that over all, automated essay scoring was capable of producing scores similar to human scores for extended-response writing items,” the Akron researchers write, “with equal performance for both source-based and traditional writing genre.”
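
For readers curious how “scores similar to human scores” gets quantified in this line of research, the sketch below is a rough illustration, not a reproduction of the Akron study’s analysis. It computes two agreement statistics commonly used when comparing a machine rater to a human rater: the exact-agreement rate and quadratic weighted kappa. The 1-to-6 rubric and the score data are made up for the example.

```python
# Illustrative only: common ways to quantify human-machine scoring agreement.
# The rubric range (1-6) and the scores below are placeholder assumptions,
# not figures from the Akron study.
import numpy as np

def exact_agreement(a, b):
    """Fraction of essays where the two raters assign the identical score."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.mean(a == b))

def quadratic_weighted_kappa(a, b, min_score=1, max_score=6):
    """Quadratic weighted kappa: 1.0 = perfect agreement, 0.0 = chance level."""
    a = np.asarray(a) - min_score
    b = np.asarray(b) - min_score
    n = max_score - min_score + 1
    # Observed joint distribution of the two raters' scores.
    observed = np.zeros((n, n))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= observed.sum()
    # Expected joint distribution if the raters scored independently.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Quadratic penalty: larger gaps between the two scores count more.
    idx = np.arange(n)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Placeholder data: a human rater vs. a machine rater on ten essays.
human = [4, 3, 5, 2, 4, 6, 3, 4, 5, 2]
machine = [4, 3, 4, 2, 5, 6, 3, 4, 5, 3]
print(f"exact agreement: {exact_agreement(human, machine):.2f}")
print(f"quadratic weighted kappa: {quadratic_weighted_kappa(human, machine):.2f}")
```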
