Tuesday, June 24, 2025

Calculated Deaths


One of the macabre realities of developing self-driving cars is that someone, somewhere, has to program them to kill people.

I don't mean that in a nefarious or conspiratorial way. What I mean is that the car's algorithm must have a decision tree governing how it will respond to unavoidable tragedies -- say, a person suddenly jumping into the road, where the only choice is for the car to strike the pedestrian or swerve into oncoming traffic. Someone is (likely) going to be seriously hurt, and the car's manufacturer has to decide who that will be.

Human drivers, of course, also periodically face these situations. But in most cases, they don't "decide" who they're going to strike -- at least, not in the same way. A human driver faced with a sudden and unavoidable calamity is likely to make a "decision" based on some mix of instinct, reflex, and random chance. Some will hit the pedestrian, some will hit oncoming traffic, but virtually none of it is based on any sort of real consideration or calculation.

In the abstract, the human approach seems worse, philosophically speaking. Philosophers might disagree on the right resolution to various trolley problems, but I can't imagine they think it'd be better if we didn't work out an answer at all. Yet in this case, my instinct is that knowing someone was killed by operation of a programmed algorithm feels worse, somehow, than knowing they were killed by what is essentially thoughtless chance. The former invites a sort of "who tasked you with playing God" response. The latter, by contrast, is clearly tragic, but the tragedy is a tempered one. We understand the driver could not reasonably have made a decision at all, so we can't hold him or her accountable for it. What happened, happened.

That non-intuitive intuition intrigues me. It suggests there are cases where it is better that decisions -- including critical life-or-death ones -- be made thoughtlessly and without advance consideration. Obviously, the first question to ask is whether I'm alone in holding this intuition in this case. But assuming I'm not, the next question is where else this intuition extends. Notably, I don't think I'd feel better if the self-driving car were programmed to essentially randomly choose who to kill or maim in one of these situations. But why not?

Anyway, that's my thought of the evening. Further thoughts welcome.