Bored out of my brains marking a large pile (84) of exams, I decided to spice things up a little bit by timing how long it took me to mark each question. I happen to favour the style of marking where you mark every question 1, followed by every question 2 etc.
It seemed clear to me that the speed of marking should depend on the order the paper was marked in: as I become more and more familiar with the kind of weird stuff students write in response to my questions, and as I solidify in my mind how many marks I feel particular answers are worth, it makes sense that I should speed up.
I also wondered whether bad answers are more difficult to mark than good answers.
Looking at this, I can see I’m right about my first assumption:
The duration clearly decreases on average as the marking exercise goes on.
But what about the crucial question: is marking a bad exam answer harder than marking a good one?
The answer is a fairly resounding “no” (Specification 1). The low R² also suggests there’s something more important going on here, namely handwriting: I’d guess that’s the big factor.
But like a good social scientist, I wasn’t happy with leaving it there. Maybe there was some kind of nonlinear effect: very, very bad answers are easy to mark (since there’s usually little or nothing written), so are very good ones.
Specification (2) shows there’s not much evidence that this is the case. (In fact, p is less than 0.1 for both marks and squared marks in (2), so in some social science contexts I’d award myself a little star!)
You might think: this is the most boring result imaginable. Why is this worth a blog post?
Well, you’re right of course… but the struggle against publication bias just claimed one small victory!