A new academic hoax story has broken, and it's bigger than ever before.
Three scholars wrote twenty papers, none of which contained arguments the authors actually believed (and all of which contained arguments the authors considered ridiculous), and sent them off to journals in the "grievance studies" set. By the time they had to pull the plug on the hoax, seven had been accepted, six were either still under review or in some stage of "revise and resubmit", and six had been rejected outright.
This project was an expansion of an earlier hoax in which a gibberish paper called "The Conceptual Penis" was published in a pay-to-publish journal. The present effort distinguished itself both in the number of papers written and in the decision to submit to what the authors called highly-ranked journals in their disciplines. On that latter note, it's hard to assess -- once you start getting into the sub(-sub-sub)field weeds, what really counts as "highly-ranked"? -- but at least a couple of the journals they scored with are recognizable names (Hypatia, in particular, was a good get).
It's also true, as any observer of peer-reviewed scholarly literature can tell you, that a lot of that literature, even in top journals, is dreck. So the fact that the authors were able to get arguments that are (or that they, at any rate, viewed as) dreck published is not itself surprising; though perhaps it gives insight into exactly what sort of dreck gets through the process, and how.
Nonetheless, I think there are some important limitations on what one can draw from this "study", including significant ethical problems with how the "data" was presented. One might say I'm being overly credulous in even treating this project as one that earnestly seeks to improve the quality of academic publishing standards (hint: if that's your goal, sneeringly referring to your targeted disciplines as "grievance studies" is a bad way to start). But, since we should be self-reflective about the quality of writing and reasoning in academia, I'll take it on its own terms.
First, the limits on the conclusion. The authors' methodology was to take arguments that they figured would be appealing to the editorial staff of a given journal but which they, personally, found to be outlandish, and see if they could get them published.
They say that proves a serious malfunction in the peer-review process. I say "haven't they just passed an ideological Turing test?"
An ideological Turing test measures one's ability to mimic the beliefs of the "other side". You "pass" if you successfully convince members of that side that you really are one of them. So let's say I, a liberal, adopt a pseudonym and submit an article to Breitbart. I do my best to make it look, feel, and sound like a Breitbart-style conservative article. Now clearly, I wouldn't believe what I was writing about. The key question is whether they'd recognize the sham: would they say "this sounds like what a liberal thinks a conservative sounds like" (which is, indeed, what it actually is) or would they believe that this is an actual conservative writing? If the latter, then I've passed the test.
The thing is, nothing about passing the Turing test, on its own, demonstrates the falsity of the beliefs or arguments successfully mimicked. Someone on Twitter (I can't remember who) suggested the case of a young-Earth Creationist who submits an article to a biology journal that "mimics" the tenets and presumptions of mainstream biological science. If his "hoax" succeeds, would we say "aha! The biological sciences are hopelessly corrupted, to be taken in by this prankster!"? No -- we'd say that the author has, albeit disingenuously, written an actually good argument (one that he happens not to believe). Likewise, if I successfully spoof Breitbart, I doubt they'd take that as decisive evidence that they've gone off the rails.
The case for why these papers are different, then, can't simply turn on the fact that (a) the authors don't believe the arguments they made and (b) they were nonetheless accepted. There has to be something else about the papers that makes them objectively bad, such that accepting them represents a failure (beyond the fact of the authors' disingenuous motives) on the journals' part. So what might that something be?
This is hard to assess, both because the authors don't link to the full papers (the accepted versions have, unsurprisingly, now all been retracted) and because their summaries are, by design, written to make their claims sound as outlandish as possible. But at least in some cases the wrongness isn't facially self-evident.
Take the paper they got into Hypatia. Its thesis is "That academic hoaxes or other forms of satirical or ironic critique of social justice scholarship are unethical, characterized by ignorance and rooted in a desire to preserve privilege." One certainly understands the extra dose of gleeful "gotcha-ness" the hoaxers enjoyed in getting this paper into Hypatia. But it's hardly the sort of article whose "wrongness" transparently stands out such that reviewers should have known, on its face, that it was ridiculous. After all, one could absolutely believe that satirical critiques of this sort of scholarship are unethical and rooted in a desire to preserve privilege (the "characterized by ignorance" prong is, I concede, at least arguably performatively contradicted by the authors' ability to mimic these arguments effectively enough to get their papers published -- but even then, that would show only that one prong of the claim turned out, after the fact, to be false).
Ditto their Fat Studies paper on fat bodybuilding. Again, the article isn't accessible anymore, but if the basic thesis is that there could be various ways to present "fat" bodies as (in the authors' words) "legitimately-built bodies" worthy of attention and praise, even now I won't say that is a transparently ridiculous assertion. Think of what the ESPN Body Issue has done on this score, for example -- quite a few of its models, at the very least, undermine the notion that "fat" and "athletic" (or even "muscular") are mutually exclusive categories (quoth one of the athletes, a Major League pitcher: "As a baseball player, if I'm pitching 35 times a season, seven innings a pop, 100 pitches a game, I need some fat. I need some extra meat on my body."). And to the extent the "obvious wrongness" is based on the thesis being "positively dangerous to health", I call foul, both because that oversimplifies what the research actually shows about the linkage between health and what is deemed "fat" in contemporary American society, and because "mainstream" bodybuilding very obviously doesn't represent the apogee of healthy living either.
Again, I'm not saying that either of these claims is clearly right. But neither is, at least as presented, so transparently wrong that nobody (as opposed to merely not-the-authors) could find it believable or worth engaging with.
Another potential reason we could say that reviewers "failed" in not recognizing the wrongness of an article is outright falsification of data. This is something they (apparently) did in the "Portland dog park" paper (they claim to have "tactfully inspected the genitals of slightly fewer than 10,000 dogs whilst interrogating owners as to their sexuality"). Maybe a good reviewer should have recognized that this seemed suspicious. But here I'd say that peer review is actually quite bad at catching this sort of outright fabrication (political science had its own scandal on this score not too long ago). Perhaps unsurprisingly, peer review works best on the presumption that the author is earnestly presenting genuine arguments obtained by honest means. Our peer-review system would be even more dysfunctional than it already is if the first question reviewers asked were "is this paper lying to me?"
And that brings me to the ethical qualms I have about how the hoax authors have presented their findings -- most notably, in how they treat the peer reviewer comments. Each of the papers they submitted -- including those that were rejected -- comes with a selection of peer review comments, all of which are positive. The idea, presumably, is to demonstrate that even their worst papers, the ones that didn't get accepted, nonetheless were not treated with the sneering dismissal they deserved.
There are two problems with this presentation. First, I think it actually captures a trend in peer review toward being more constructive, charitable, and supportive of the papers under consideration -- all good things. One of the reviewers quoted (who had recommended rejecting the article) explained his more positive feedback as an attempt to "buy in" to the paper and provide constructive comments explaining why the article didn't work, without discouraging the author from the field entirely. It is, I think, a good thing to read articles in their strongest possible light -- to try to think of the best interpretation of the claims the authors are trying to make rather than the most nefarious. This is a practice that hoaxes, in particular, exploit -- they gain their force precisely from the knowledge that their readers will commit the terrible sin of trying to take them seriously.
The second problem with the way the reviewer comments were presented is simultaneously more and less serious. Simply put: if the hoaxers' goal really was to provide a pathway for identifying what is and isn't "working" in these academic disciplines (and I concede that may be far too optimistic), then there is no justification for excluding the negative or critical reviewer comments that (presumably) explained why papers were not accepted. Partly, this is simply a matter of misrepresentation -- giving only the positive comments and not the negative ones oversells how receptive reviewers were to these pieces.
But more importantly, the part of me that wanted to take this hoax seriously as a genuine effort to constructively critique certain academic disciplines was the thirstiest for the content of the negative reviews. What is it that gets a paper rejected in Sociology of Race and Ethnicity (or what have you)? Clearly, it isn't a wild west where anything goes so long as you mimic the right politics (the authors -- somewhat begrudgingly -- admit that their project conclusively falsified that hypothesis). Consequently, figuring out where the borders are, what raises flags and what doesn't, is incredibly important to the extent the project is meant to have any sort of constructive edge. That these comments weren't included is powerful evidence about what the ambitions of the hoaxers really were. So it's a more serious critique to the extent I take the hoaxers seriously as trying to have a constructive impact on academic publishing, and a less serious critique to the extent that ascribing such seriousness of purpose is absurd.
In any event: I don't need persuading that academic publishing includes a lot of terrible work, and I don't need persuading that there are certain common markers for what gets terrible work published. But this hoax overshoots the mark -- mostly because its goal isn't really to build a better scholarly mousetrap, but rather to grind certain ideological axes.
My recommendation, therefore, is revise and resubmit.