Another day, another AI hallucination story -- this time involving mega-consulting firm Deloitte, which just refunded a big chunk of change to the Australian government after a report it produced was found to contain inaccurate and likely hallucinated citations.
Every time I see one of these stories, I'm left asking "Why? Why did you do it?" The risks have to be well-known at this point. And getting caught seems like it's close to career suicide. What's happening?
404 Media did an interesting interview with attorneys who had been caught using AI (and who had failed to catch AI hallucinations), and the general theme (aside from "a subordinate did it and I didn't check") was some variation on being overworked and under a ton of pressure.
Now, perhaps I'm overthinking this. But I am wondering if there's some interplay between the historically hard-charging atmosphere of the big consulting firms and the use of AI. Companies like Deloitte have a bit of a reputation vis-a-vis their work culture, which basically boils down to "if you are willing to be worked to death, we'll make you richer than God." Younger hires, in particular, are hit with truly unfathomable workloads and time pressures (with sometimes predictably tragic consequences). Historically, the implicit expectation for anyone in such a situation was "drink your coffee, take an Adderall, stay up all night, bang it out" -- with the firm winking at the whole arrangement. I have to assume the work product generated in such circumstances was not always outstanding, but it was at least a human employee's substandard, bleary-eyed work product.
But imagine it's 2025 and you're in that impossible Kobayashi Maru situation. Instead of reaching for Adderall as your crutch, doesn't AI feel a lot more attractive? If we throw out any professional concern about putting out good work product -- and in the imagined situation, you more or less have to, since actually performing to expectations is functionally impossible -- then why not roll the dice with AI? The work is going to be bad either way, but at least you can (literally) sleep at night.
I don't know -- it's just a theory, and I have no evidence that this is going on. But it doesn't seem implausible, no? Maybe another thing AI is disrupting is the ability to "rely" on overcaffeinated, drugged-up twenty-somethings to kill themselves on consulting assignments to squeeze a few more dollars out of the bottom line.
1 comment:
My theory is that it's competition that pushes this. AI screwups and hallucinations are super high profile and visible, but the question is how common they are. I suspect that AI hallucinations are somewhat like self-driving car wrecks -- they happen, and they're ugly, but on average self-driving cars may still be safer than human drivers. We don't really accept self-driving car wrecks as a cost of the technology, though, because those wrecks don't look like wrecks involving humans -- the cars behave like robots, so their wrecks are jarring to our sensibilities in a way that your typical wrecks, horrifying as they can be, aren't.
So the way that applies here is: if you're a Deloitte consultant and your coworkers are using AI, and your AI-using coworkers' work product is generally better than yours, your incentive is also to use AI. Because if you don't, you're underperforming, you won't progress, and sooner rather than later you'll be pushed out. Now, if the AI screws something up in your coworkers' work, they'll be unceremoniously fired, and their careers may be over in a way that yours won't be if your work product is merely mediocre in the way work product tends to be when you're sleep-deprived and under intense time pressure. But that's small comfort for you if it happens three years after you've been let go. So there's real pressure to use AI because, if I'm right, AI is less a shortcut than a way of staying afloat.