As you may have heard, Elon Musk's AI chatbot Grok went full-blast Nazi today, culminating in it calling itself "MechaHitler" and praising its namesake as someone who would have "crushed" leftist "anti-white hate." (Ironically, or not, the "leftist" account it was referring to was itself almost certainly a neo-Nazi account pretending to be Jewish.)
What caused this, er, "malfunction"? Well, according to Grok, Musk "built me this way from the start." But the more immediate answer appears to be an update Musk pushed urging the bot to be less "politically correct" -- an instruction Grok interpreted as, well, a mandate to indulge in Nazism.
This has an interesting implication. Many legal scholars (particularly textualists and originalists) have recently become enamored with "corpus linguistics" as an analytical tool for understanding the meaning of legal texts. Corpus linguistics tries to discern what words or phrases mean by taking a large body of relevant works (the corpus) and figuring out how the words were actually used in context. If originalism is about the "ordinary public meaning" of the words in legal texts at the time they were enacted, corpus linguistics offers an alternative to cherry-picking usages from a few high-profile sources (such as the Federalist Papers) -- sources which are likely polemical, may not be representative of common usage, and are highly prone to selection bias. Instead, we can identify patterns across large bodies of text to figure out how the relevant public generally used the term (which may be quite different from how a particular politician deployed it in a speech).
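To make the method concrete, here is a minimal sketch of the two workhorse tools of corpus linguistics -- keyword-in-context (KWIC) concordances and collocation statistics -- using Python's NLTK library. The corpus directory and the search term are hypothetical placeholders, not a reference to any actual legal corpus.

```python
import nltk
from nltk.corpus import PlaintextCorpusReader
from nltk.text import Text
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# Hypothetical corpus: a directory of plain-text documents drawn from
# the relevant era and speech community (e.g., founding-era newspapers).
corpus = PlaintextCorpusReader("founding_era_texts/", r".*\.txt")
words = corpus.words()

# Keyword-in-context (KWIC): print each occurrence of a term with its
# surrounding words, so usage patterns can be read off directly.
Text(words).concordance("commerce", width=80, lines=25)

# Collocations: which word pairs co-occur far more often than chance
# predicts -- a rough statistical picture of how the corpus talks.
finder = BigramCollocationFinder.from_words(words)
finder.apply_freq_filter(5)  # ignore rare, noisy pairs
print(finder.nbest(BigramAssocMeasures.pmi, 20))
```

In actual corpus-linguistics scholarship (e.g., with COFEA, the Corpus of Founding Era American English), researchers would then code the concordance lines by sense to estimate which usage predominated.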
Now take that insight and apply it to the term "politically correct." This is, of course, a contested term, and critics often contend that it (or more accurately, opposition to it) is a dog whistle for far-right racist, antisemitic, and otherwise bigoted ideologies. Those who label themselves "not-PC" typically contest that reading, at least in circumstances where owning up to it would risk significant consequences. So is calling oneself "not-PC" a signifier of bigotry or not? This could have significant legal stakes -- imagine a piece of legislation that had a disparate impact on a racial minority community and that its proponents justified as a stand against "political correctness." When seeking to determine whether the law was motivated by discriminatory intent, a judge might need to ask whether opposition to political correctness should be understood as a confession of racial animus.
Under normal circumstances, one suspects that inquiry would resolve along ideological lines -- those hostile to the law and suspicious of "anti-PC" talk inferring racial animus, those sympathetic to the law or to anti-PC politics rejecting the notion. And no doubt both sides could muster examples where "PC" was used in a manner that supports their priors.
But corpus linguistics suggests shifting away from an individual speaker's idiosyncratic and self-serving disavowals and instead asking, "What is the ordinary public meaning of 'not politically correct'?" It would answer that question by taking a large body of texts and seeing how, in practice, terms like "politically correct" or "not PC" are actually used.
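In practice, that looks something like the following sketch: count which words cluster within a few tokens of the target phrase across the corpus. (Assumptions flagged: the corpus filename is a placeholder, the whitespace tokenization is deliberately crude, and the five-token window is an arbitrary illustrative choice.)

```python
from collections import Counter

def collocates(tokens, target=("politically", "correct"), window=5):
    """Count words appearing within `window` tokens of each occurrence
    of the target phrase (the phrase itself is excluded)."""
    counts = Counter()
    n = len(target)
    for i in range(len(tokens) - n + 1):
        if tuple(t.lower() for t in tokens[i:i + n]) == target:
            lo, hi = max(0, i - window), min(len(tokens), i + n + window)
            counts.update(w.lower() for w in tokens[lo:i] + tokens[i + n:hi])
    return counts

# Hypothetical usage, with a pre-tokenized plain-text corpus:
# tokens = open("corpus.txt").read().split()
# print(collocates(tokens).most_common(30))
```

If the top collocates turn out to be slurs and extremist vocabulary, that is the corpus-linguistics answer to what the phrase "ordinarily means"; if they are anodyne complaints about campus speech codes, it is the opposite.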
Returning to Grok: what its journey from "don't be PC" to "MechaHitler" just demonstrated is that, at least with respect to the corpus it was trained on, the ordinary usage of "not PC" is exactly what critics say it is -- a correlate of raging bigotry and ethnic hatred.
I don't want to overstate the case -- a lot depends on what exact corpus Grok was trained on and whether it properly corresponds to the relevant public. Nonetheless, I do think this inadvertent experiment provides substantial evidence that, when you hear someone describe themselves as "not-PC," it is reasonable to hear that as meaning they're a racist -- because that's what "not-PC" ordinarily means. And if your conservative/originalist friends object, tell them that corpus linguistics backs you up.