Monday, October 06, 2025

AI Über Adderall


Another day, another AI hallucination story -- this time involving mega-consulting firm Deloitte, which just refunded a big chunk of change to the Australian government after a report it produced was found to contain inaccurate and likely hallucinated citations.

Every time I see one of these stories, I'm left asking "Why? Why did you do it?" The risks have to be well known at this point. And getting caught seems like it's close to career suicide. So what's happening?

404 Media did an interesting interview with attorneys who had been caught using AI (or who had failed to catch AI hallucinations), and the general theme (aside from "a subordinate did it and I didn't check") was some variation on being overworked and under a ton of pressure.

Now, perhaps I'm overthinking this. But I'm wondering if there's some interplay between the historically hard-charging atmosphere of the big consulting firms and the use of AI. Companies like Deloitte have a bit of a reputation vis-a-vis their work culture, which basically boils down to "if you are willing to be worked to death, we'll make you richer than God." Younger hires, in particular, are hit with truly unfathomable workloads and time pressures (with sometimes predictably tragic consequences). The historic implicit expectation, if one was in such a situation, was basically a wink at "drink your coffee, take an Adderall, stay up all night, bang it out." I have to assume the work product generated in such circumstances was not always outstanding, but it was at least a human employee's substandard, bleary-eyed work product.

But imagine it's 2025 and you're in that impossible Kobayashi Maru situation. Instead of reaching for Adderall as your crutch, doesn't AI feel a lot more attractive? If we throw out any professional concern about putting out good work product -- and in the imagined situation, there's no way not to; actually performing to expectation is functionally impossible -- then why not roll the dice with AI? The work is going to be bad either way, but at least you can (literally) sleep at night.

I don't know -- it's just a theory, and I have no evidence that this is going on. But it doesn't seem implausible, no? Maybe another thing AI is disrupting is the ability to "rely" on overcaffeinated, drugged-up twenty-somethings killing themselves on consulting assignments to squeeze a few more dollars out of the bottom line.