Two AI developments in rapid succession have me, I'll be honest, a bit rattled.
The first development was a study in which an AI player managed to perform very well in an online Diplomacy tournament. Diplomacy is a board game that prioritizes negotiation and manipulation to achieve one's ends (unlike, say, Chess, which is purely a matter of raw calculation). Leaving aside my general discomfort with "we've successfully trained an AI to lie to and manipulate humans in pursuit of world domination!", this represents a huge leap in AI's ability to use natural language and discourse effectively -- one that I think, until recently, would have seemed beyond the realm of possibility.
The second was using OpenAI's model to answer short-answer prompts of the sort that might be found on a law school exam. It did ... decently. Not perfect, and sometimes quite wrong, but not awful either, and sometimes more or less correct. For example, here's what it gave me when I asked, "How did Twombly and Iqbal change pleading rules?"
Twombly and Iqbal changed pleading rules by raising the standard for the sufficiency of pleadings in federal civil actions. The Supreme Court held in Ashcroft v. Iqbal that "a complaint must contain sufficient factual matter, accepted as true, to state a claim to relief that is plausible on its face." This ruling effectively raised the bar for pleadings in federal civil actions, meaning that plaintiffs must provide more detailed facts in their pleadings than they had to under the old standard. The Court also held in Bell Atlantic Corp. v. Twombly that a complaint must contain more than mere labels and legal conclusions, and must instead contain factual allegations that show that a claim is plausible. This ruling established a two-pronged standard for the sufficiency of pleadings in federal civil actions, requiring plaintiffs to provide more detailed facts in their pleadings than they had to under the old standard.
That's not perfect, but it's not terrible either. And since OpenAI's model is available to the general public, I assume there are proprietary AI models out there that can do an even better job.
My colleagues are worried about what this means for open-book law school exams (students can enter the prompt and have a decent answer spat back out). But I'm not worried about having to adjust my exams (I don't use short-answer prompts anyway). I'm worried about what this means for whether we'll need human lawyers at all. Not quite yet, but we're getting there.
All of this, in turn, brought to mind two articles by Kevin Drum on the issue of AI development. The first made the point that, once it comes into full bloom, AI will not just be better than humans at some jobs; it will be better than humans at all jobs. This is not a problem limited to "unskilled labor" or to jobs that require physical strength, fine precision, or even intense calculation. Everything -- art, storytelling, judging, stock trading, medicine -- will be done better by a robot. We're all expendable.
Article number two compared the pace of AI development to filling up Lake Michigan with water, where every 18 months you double the amount of water you can add (first one fluid ounce, then two fluid ounces eighteen months later, then four ounces eighteen months after that, and so on). Neither "Lake Michigan" nor "18 months" was chosen at random: the former's volume in fluid ounces is roughly akin to the computing power of the human brain (measured in calculations per second), and the latter reflects Moore's Law, the rule of thumb that computing power doubles roughly every 18 months.
What was striking about the Lake Michigan metaphor is that, if you added water at that pace, for a long time it would look as if nothing were happening ... and then, all of a sudden, you'd be finished. There's a wonderful GIF in the article that illustrates this vividly, but the text works too.
Suppose it’s 1940 and Lake Michigan has (somehow) been emptied. Your job is to fill it up using the following rule: To start off, you can add one fluid ounce of water to the lake bed. Eighteen months later, you can add two. In another 18 months, you can add four ounces. And so on. Obviously this is going to take a while.
By 1950, you have added around a gallon of water. But you keep soldiering on. By 1960, you have a bit more than 150 gallons. By 1970, you have 16,000 gallons, about as much as an average suburban swimming pool.
At this point it’s been 30 years, and even though 16,000 gallons is a fair amount of water, it’s nothing compared to the size of Lake Michigan. To the naked eye you’ve made no progress at all.
So let’s skip all the way ahead to 2000. Still nothing. You have—maybe—a slight sheen on the lake floor. How about 2010? You have a few inches of water here and there. This is ridiculous. It’s now been 70 years and you still don’t have enough water to float a goldfish. Surely this task is futile?
But wait. Just as you’re about to give up, things suddenly change. By 2020, you have about 40 feet of water. And by 2025 you’re done. After 70 years you had nothing. Fifteen years later, the job was finished.
If we set the start date at 1940 (when the first programmable computer was invented), we'd see virtually no material progress until 2010, but we'd be finished by 2025. It's now 2022. We're almost there!
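Out of curiosity, I ran the arithmetic myself. Below is a minimal Python sketch of the doubling math; the figure I use for Lake Michigan's volume (roughly 4,900 cubic kilometers, or about 1.7 x 10^17 US fluid ounces) is my own rough approximation rather than a number from Drum's article, but the milestones land in the same ballpark, and the lake does indeed fill in the mid-2020s.

```python
import math

# A rough sanity check of the doubling arithmetic in Drum's Lake Michigan
# metaphor. The lake volume used here (~4,900 km^3, about 1.7e17 US fluid
# ounces) is my own back-of-the-envelope figure, not a number from the article.

FLOZ_PER_KM3 = 1e9 * 1e3 * 1e3 / 29.5735     # km^3 -> m^3 -> L -> mL -> fl oz
LAKE_OZ = 4_900 * FLOZ_PER_KM3

def ounces_added_by(year):
    """Cumulative fluid ounces poured in by `year`, starting with 1 oz in 1940
    and doubling the allotment every 18 months."""
    pours = int((year - 1940) / 1.5) + 1      # pours completed by that year
    return 2 ** pours - 1                     # 1 + 2 + 4 + ... = 2^n - 1

for y in (1950, 1970, 2010, 2020):
    oz = ounces_added_by(y)
    print(f"{y}: {oz / 128:,.0f} gallons ({100 * oz / LAKE_OZ:.1e}% of the lake)")

# Number of pours needed to fill the lake, and the year the last one lands.
pours_needed = math.ceil(math.log2(LAKE_OZ + 1))
print(f"Lake full around {1940 + 1.5 * (pours_needed - 1):.0f}")  # mid-2020s
```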
That we might be in that transitional moment -- where "effectively no progress" gives way to "suddenly, we're almost done" -- means we have to start thinking now about what to do with this information. What does it mean for the legal profession if, for most positive legal questions, an AI fed a prompt can give a better answer than most lawyers? What does it mean if it can give a better answer than all lawyers? There's still some hope for humanity on the normative side -- perhaps AI can't make choices about values -- but still, that's a lot of jobs gone. And what about my job? What if an AI can give a better presentation on substantive due process than I can? That's not just me feeling inadequate -- remember the first article: AI won't just be better than humans at some things, it will be better at all things. We're all in the same boat here.
What does that mean for the concept of capital ownership? Once AI eclipses human capacity, do we enter an age of permanent class immobility? By definition, if AI can out-think humans, there is no way for a human to innovate or disrupt their way into the prevailing order. AIs might out-think each other, but our contribution won't be relevant anymore. If the value produced by AI remains privatized, then the future distribution of wealth will be governed entirely by who was fortunate enough to own the AIs.
More broadly: What does the world look like when there's no point in any human having a job? What does that mean for resource allocation? What does that mean for our identity as a species? These questions are of course timeless, but in this particular register they have always felt like science fiction -- the sorts of questions that have to be answered on Star Trek, but not in real life, because we were nowhere near that sort of society. Well, maybe now we are -- and the questions may have to be answered sooner rather than later.