The prominent academic pre-print repository arXiv has reportedly announced stiff new penalties for authors who submit papers with AI-generated hallucinations (e.g., fake citations). Violators will be subject to an outright one-year ban on submissions, plus an indefinite requirement that any future uploads have been accepted by a "reputable peer-reviewed venue".
This is as good a prompt as any to explain why I am slightly -- slightly -- more optimistic about academia's ability to fend off the tsunami of AI slop compared to other entities in the business of generating text. One problem with AI slop in, say, the news space is that it's essentially impossible to impose meaningful sanctions on violators. The perpetrators operate like spam bots: if one site gets delisted, another springs up in its place. The spammers don't care about the reputation of this website or that (usually fake) author. The main goal is to get their text out into the world; it doesn't matter much who it's attributed to (except insofar as attribution can help the text reach more readers or otherwise embed itself in the algorithm).
But academics are differently situated. True, an academic might have an incentive to look super-productive, and so an unscrupulous version of me might be tempted by the prospect of being able to produce dozens of (low-quality, but cross-cited) papers in a short period of time. But crucially, it matters that I be the one credited for all this productivity and all these citations. If I'm blacklisted from a bunch of journals, that's a genuine deterrent in a way that banning a spam bot is not for your typical spammer. Penalties like those arXiv has announced exact meaningful costs precisely because they draw (ironically enough) on the self-interested nature of academics: if the only thing we cared about was getting our research out into the world, without worrying about the credit, this deterrent wouldn't work. We academics need to put our own names on articles to get credit for them, and that means that when we are caught misbehaving, the punishment sticks to us. For my part, I am generally a staunch proponent of strong punishments -- including blacklists -- for academic authors who submit AI slop to journals.
This isn't to say that no abusive use of AI could circumvent these reputational deterrents. I can think of two in particular that might.
The first is papers with fake authors that over-cite other articles by a real academic. Banning the fake authors would not exact costs on the real-world wrongdoer: the real academic who's presumably using some paper mill to generate fake articles to goose his or her own citation count. That said, where one can credibly ascertain that the over-cited scholar is the "real" author and has created a Potemkin article as a means of running a citation racket, that scholar can still be subject to meaningful sanctions.
The second possible problem is articles that falsely claim to be authored by a real academic (who actually had no involvement with the piece), hoping to trade on his or her genuine reputation to boost the slop article's reach. This practice is especially dangerous because -- consistent with my endorsement above of punishing authors for bad AI practices -- it risks engendering false accusations. It appears that John Smith wrote a bogus AI-generated slop piece, so blacklist John Smith -- except John Smith actually had nothing to do with the piece; some scammers slapped his name on it. This could be a significant problem, though I'll note that its scope is again limited by the fact that the benefits of publishing a "bad" AI-generated article have to accrue, at some point, to a "real" author, and so whichever co-author is the actual malign actor behind the charade can eventually be sussed out.