Sunday, July 30, 2017

'Cause You Know That You're Toxic

Google has a new gadget that tries to gauge how "toxic" a given comment is. It has a lot of learning to do. "Dogs are animals" is rated 87% likely to be perceived as toxic. "Muslims control the world", by contrast, clocks in at 30%. "Nazis are evil" gets a whopping 98%, while "Nazis are great" gets only 66%.
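For the technically inclined: the gadget is a demo of Google's Perspective API, and you can poke at it programmatically. Here's a minimal sketch of how one might do so in Python, assuming you have an API key (YOUR_API_KEY below is a placeholder) and that the public v1alpha1 endpoint behaves as documented:

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder -- request a real key from Google
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=" + API_KEY)

    def toxicity(text):
        """Return the model's estimate that `text` will be perceived as toxic."""
        body = {
            "comment": {"text": text},
            "languages": ["en"],
            "requestedAttributes": {"TOXICITY": {}},
        }
        resp = requests.post(URL, json=body)
        resp.raise_for_status()
        # summaryScore.value is a probability in [0, 1]
        return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

    for s in ("Dogs are animals", "Nazis are evil", "Nazis are great"):
        print("{}: {:.0%}".format(s, toxicity(s)))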

I found out about this little toy from Liel Leibovitz, who claims it has a particular problem with Jews. Based on my fiddling around, it seems to have a problem with everyone, but he's not wrong that its performance re: the Tribe leaves a lot to be desired. "Zionists control the world" gets only an 18% rating, and if you change it to "Zios" that drops to 1% (I guess Google doesn't know it's a slur either). "Jews run Hollywood" falls in at 27%. On the other hand, "I hate Jews" gets a perfect 100% score, so there's that.

Of course, it wouldn't be a Liel column without some hysterical allegations about how this is really the fault of the New York Times, rather than the predictable (and not-unique-to-Jews) set of kinks in a new machine-learning device. But the best part of Liel's column is when he takes issue with the term "toxic" itself:
The very term itself, toxicity, should’ve been enough of a giveaway: the only groups that talk about toxicity—see under: toxic masculinity—are those on the regressive left who creepily apply the metaphors of physical harm to censor speech not celebrate or promote it. No words are toxic, but the idea that we now have an algorithm replicating, amplifying, and automatizing the bigotry of the anti-Jewish left may very well be.
Interesting hypothesis, Liel-of-July-2017! I wonder what Liel-of-June-2017 has to say on that?
If the Times really wants to correct the record, it would follow up by taking a hard look at why it made the mistake in the first place. That is, it would examine the knee-jerk assumptions and overheated language that have crept into both its opinion and its news pages lately, both of which regularly offer space not just to legitimate newsgathering about Trump’s very real misdeeds and the rank incompetence of his administration, but also to wild-eyed conspiracy theories in which the Kremlin or some other malign foreign entity controls the White House. These theories are toxic nonsense, cooked up by political operatives who use social media and the press to attain political ends through means that are inherently extra-constitutional and undemocratic—and that have been quietly and systematically debunked, sometimes by the paper’s own reporting.
[...] 
Now, with the shooting at the GOP baseball practice in Virginia, the same toxic logic comes home.
And here too: "This sort of bigoted nonsense is toxic to all Americans, but it’s particularly hazardous to Jews, whose suffering is too often explained away these days as an acceptable byproduct of excessive power and influence." Or here: "Like Israeli lefties—but not, say, like the toxic creeps who rant about Israel in the anthropology departments of large American universities or the anti-Semites who pack the British Labour Party—Waldman and Chabon believe that Israel is in dire need of saving from what will ultimately be its downfall."

But maybe Liel-of-June-2017 is an outlier. Where does Liel-of-March-2017 come down?
It didn’t take long for me to learn the same lesson Chris does in the movie, namely that the point of this new strain of toxic liberalism isn’t really to help victims of racism or anti-Semitism or any other sort of discrimination; rather, it’s to reconfigure the identities of white people so that they may go on and enjoy the same exact comforts to which they’re accustomed.
One can keep going. And going and going and going.

Sometimes I think the job of editors is to save writers from embarrassing themselves this way -- surely, it would not be too hard (and it was not too hard) to figure out if Liel had repeatedly used the term "toxic" a mere month before claiming that nobody but the "regressive left" did so. But perhaps here the right move was to give him enough rope to hang himself with. If he can't keep track of his own shibboleths and no-no words, nobody else should do it for him.

1 comment:

PG said...

Obviously machine learning is not sophisticated enough to pick out actual meaning, only specific words. I played with the site for a couple minutes and found that, for example, you can massively de-toxify a statement just by using a more PC term. "Slavery was good for African Americans" was only 10% likely to be perceived as toxic, whereas "Slavery was good for blacks" was 68% so. And saying *anything* about Muslims gets a relatively high score: "Muslims believe in Allah" at 64%, versus "Hindus believe in Allah" at 13%. "Hindus believe in caste" -- which I think in 2017 is an over-generalization -- gets 1%. "Hindus believe cows are better than dogs" -- which is probably still true for the majority, and sensible for vegetarians who need milk and butter for religious ceremonies -- at 87%. Basically, the less likely your words are to have been used in a heated discussion in the comments sections of the NYT and Guardian (Hindus, caste), the less likely they are to have gotten many ratings, and thus the less likely they are to have been deemed toxic.
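PG's substitution experiment is easy to reproduce with the toxicity() helper sketched earlier in the post -- same caveats apply, and the sentence pairings below are the ones PG reports:

    # Identical claims modulo one word; a word-level model scores them differently.
    pairs = [
        ("Slavery was good for African Americans", "Slavery was good for blacks"),
        ("Muslims believe in Allah", "Hindus believe in Allah"),
    ]
    for a, b in pairs:
        print("{}: {:.0%} vs. {}: {:.0%}".format(a, toxicity(a), b, toxicity(b)))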