Hacker News

I'm willing to take bets that in 15 years there will be a scandal about faked data by at least one of the researchers in this paper.

It smells just like every other interesting psychology result that is, at best, a fluke.



I think it’s maybe less likely since this is looking at actual grades and not some kind of survey or experiment. But certainly it’s always a concern in social sciences until we get reproduction.


Unlikely. If you talk with anyone who's done grading, this will likely jibe with their experience and make them aware of these outcomes. Like anything, with grading you can get into a flow: the more you process an assignment, the more answers you've seen, and those can change how you grade future answers.


Not really taking a position on this one way or the other, but I would say that "this jibes with my experience" is close to being a prerequisite for junk science. Somebody saying something controversial is going to be challenged; confirming biases is precisely how you peddle junk.

For instance, the Journal of Personality and Social Psychology [1] is a terrible journal, with a replication success rate in the 20% range. Yet it's ironically well regarded. Both can probably be explained by the exact same phenomenon: go read their articles, and they read like a stream of bias confirmations for those of a certain ideological orientation, the same orientation that's clearly widely shared amongst social science researchers.

[1] - https://psycnet.apa.org/PsycARTICLES/journal/psp/126/2


I absolutely observed my own biases and created techniques to mitigate them. A few that come to mind:

1. Grade problem by problem. This actually makes grading so much easier on your own mind

2. Take a second pass to look for outliers in consistency

3. When possible, craft problems that can be automatically graded for correctness. This leaves more time for commentary on the quality of the solution

(I taught computer science, which lends itself to some of this)
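Technique 3 above can be sketched in code. This is a hypothetical illustration (the function names, test cases, and scoring scheme are all assumptions, not anything from the thread): run each submission against known test cases to score correctness automatically, leaving the human pass for commentary on solution quality.

```python
# Hypothetical sketch: auto-grade a submitted function for correctness
# against a list of (args, expected) test cases. A crash counts as a
# wrong answer rather than aborting the grading run.
def autograde(submission_fn, cases):
    """Return (points_earned, failures) for a submitted function."""
    failures = []
    for args, expected in cases:
        try:
            result = submission_fn(*args)
        except Exception as exc:
            failures.append((args, repr(exc)))
            continue
        if result != expected:
            failures.append((args, result))
    return len(cases) - len(failures), failures

# Example: grading a (correct) student absolute-value function
cases = [((3,), 3), ((-4,), 4), ((0,), 0)]
student_abs = lambda x: x if x > 0 else -x
points, failures = autograde(student_abs, cases)
print(points, failures)  # 3 []
```

Because every submission is scored against the same fixed cases, the score can't drift with the grader's mood or position in the stack, which is the point of the technique.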

The harder bias to handle is the one you develop toward students, one way or another, over the course of a semester. Perceived effort shifts grades.


I really doubt you can notice a 0.6% discrepancy anecdotally. They only detected it in the study because of the massive amount of data they used.

Classic confirmation bias.
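To make the "only detectable with massive data" point concrete, here is a rough sketch using a standard two-proportion sample-size formula. The baseline pass rate of 50% is an assumption for illustration, not a figure from the study:

```python
# Rough sample size needed to detect a 0.6 percentage-point shift in a
# pass rate, at alpha = 0.05 (two-sided) and 80% power, using the
# standard two-proportion formula. Baseline of 50% is assumed.
import math

def n_per_group(p1, p2, z_alpha=1.95996, z_beta=0.84162):
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * var / (p1 - p2) ** 2)

print(n_per_group(0.500, 0.506))  # over 100,000 observations per group
```

A graders' intuition built on a few hundred papers has no hope of resolving an effect that needs six-figure sample sizes to distinguish from noise.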


Anecdotally, I would go back and adjust grades on individual problems from earlier in the stack.

I can very easily notice my own over strictness from early in the stack.


For sure. I also find I have to update my rubric to give more/less part marks, which also requires going back. It takes about 10-15 graded papers before things settle down.


The result seems pretty intuitive to me. The test is easy to re-run, unless the data have been "lost," which is not mentioned.

Most importantly, none of the researchers is a psychologist or behavioral economist or any kind of "social scientist."



