
Friday, December 02, 2022

We Are Not Ready for AI

Two AI developments in rapid succession have me, I'll be honest, a bit rattled.

The first development was a study where an AI player managed to perform very well in an online Diplomacy tournament. Diplomacy is a board game that prioritizes negotiation and manipulation to achieve one's ends (unlike, say, Chess, which is purely a matter of raw calculation). Leaving aside my general discomfort with "we've successfully trained an AI to lie and manipulate humans to achieve a goal of world domination!", this represents a huge leap in AI's ability to effectively use natural language and discourse in a way that I think, until recently, would have seemed beyond the realm of possibility.

The second was using OpenAI's model to answer short-answer prompts of the sort that might be found on a law school exam. It did ... decently. Not perfect, and sometimes quite wrong, but not awful either, and sometimes more or less correct. For example, here's what it gave me when I asked "How did Twombly and Iqbal change pleading rules?"

Twombly and Iqbal changed pleading rules by raising the standard for the sufficiency of pleadings in federal civil actions. The Supreme Court held in Ashcroft v. Iqbal that "a complaint must contain sufficient factual matter, accepted as true, to state a claim to relief that is plausible on its face." This ruling effectively raised the bar for pleadings in federal civil actions, meaning that plaintiffs must provide more detailed facts in their pleadings than they had to under the old standard. The Court also held in Bell Atlantic Corp. v. Twombly that a complaint must contain more than mere labels and legal conclusions, and must instead contain factual allegations that show that a claim is plausible. This ruling established a two-pronged standard for the sufficiency of pleadings in federal civil actions, requiring plaintiffs to provide more detailed facts in their pleadings than they had to under the old standard.

That's not perfect, but it's not terrible either. And since OpenAI's model is the one available to the general public, I assume there are proprietary AI models out there that can do an even better job.

My colleagues are worried about what this means for open-book law school exams (students can enter the prompt and have a decent answer spat back out). But I'm not worried about having to adjust my exams (I don't use short-answer prompts anyway). I'm worried about what this means for the need for human lawyers. Not quite yet, but we're getting there.

All of this, in turn, brought to mind two articles by Kevin Drum on the issue of AI development. The first made the point that once it comes into full bloom, AI will not just be better than humans at some jobs; it will be better than humans at all jobs. This is not a problem limited to "unskilled labor" or jobs that require physical strength, fine precision, or even intense calculation. Everything -- art, storytelling, judging, stock trading, medicine -- will be done better by a robot. We're all expendable.

Article number two compared the pace of AI development to filling up Lake Michigan with water, where every 18 months you double the amount of water you can add (so first one fluid ounce, then eighteen months later two fluid ounces, then in eighteen more months four fluid ounces, and so on). Neither "Lake Michigan" nor "18 months" was chosen at random -- the former's volume in fluid ounces is roughly akin to the computing power of the human brain (measured in calculations per second), and the latter reflects Moore's Law, the popular rule of thumb that computing power doubles every 18 months.

What was striking about the Lake Michigan metaphor is that, if you added water at that pace, for a long time it would look as if nothing were happening ... and then all of a sudden, you'd be finished. There's a wonderful GIF in the article that illustrates this vividly, but the text works too.

Suppose it’s 1940 and Lake Michigan has (somehow) been emptied. Your job is to fill it up using the following rule: To start off, you can add one fluid ounce of water to the lake bed. Eighteen months later, you can add two. In another 18 months, you can add four ounces. And so on. Obviously this is going to take a while.

By 1950, you have added around a gallon of water. But you keep soldiering on. By 1960, you have a bit more than 150 gallons. By 1970, you have 16,000 gallons, about as much as an average suburban swimming pool.

At this point it’s been 30 years, and even though 16,000 gallons is a fair amount of water, it’s nothing compared to the size of Lake Michigan. To the naked eye you’ve made no progress at all.

So let’s skip all the way ahead to 2000. Still nothing. You have—maybe—a slight sheen on the lake floor. How about 2010? You have a few inches of water here and there. This is ridiculous. It’s now been 70 years and you still don’t have enough water to float a goldfish. Surely this task is futile?

But wait. Just as you’re about to give up, things suddenly change. By 2020, you have about 40 feet of water. And by 2025 you’re done. After 70 years you had nothing. Fifteen years later, the job was finished.

If we set the start date at 1940 (roughly when the first programmable computers were invented), we'd see virtually no material progress until 2010, but we'd be finished by 2025. It's now 2022. We're almost there!
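Drum's arithmetic is easy to check for yourself. Here's a minimal sketch in Python (my addition -- it appears in neither article) that runs the doubling schedule against Lake Michigan's volume. The figure of roughly 1,180 cubic miles is an assumption on my part, a commonly cited estimate; the exact constants only shift the milestones by a doubling period or so.

```python
# Pour 1 fl oz into the empty lake bed in 1940, double the pour every
# 18 months, and stop when the cumulative total fills the lake.
# ASSUMPTION: Lake Michigan holds ~1,180 cubic miles of water.

FL_OZ_PER_CUBIC_MILE = 4.168e12 * 33.814    # liters per cubic mile * fl oz per liter
LAKE_FL_OZ = 1180 * FL_OZ_PER_CUBIC_MILE    # ~1.7e17 fluid ounces
GALLON = 128.0                              # fluid ounces per US gallon

year, pour, total = 1940.0, 1.0, 0.0
while total < LAKE_FL_OZ:
    total += pour                 # this period's addition
    if year % 10 < 1.5:           # report roughly once per decade
        print(f"{year:6.1f}: {total / GALLON:>18,.0f} gallons "
              f"({total / LAKE_FL_OZ:8.3%} full)")
    pour *= 2                     # the Moore's Law doubling
    year += 1.5                   # 18 months per period
print(f"Lake full with the pour made in {year - 1.5:.1f}")
```

Run it and the shape of the story falls right out: a gallon or two by the early 1950s, a swimming pool's worth by 1970, still a fraction of a percent full in 2010, over a fifth full by 2021, and the final pour lands in the mid-2020s -- all within a doubling or so of Drum's own milestones, which is as much precision as the metaphor needs.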

That we might be in that transitional moment where "effectively no progress" gives way to "suddenly, we're almost done" means we have to start thinking now about what to do with this information. What does it mean for the legal profession if, for most positive legal questions, an AI fed a prompt can give a better answer than most lawyers? What does it mean if it can give a better answer than all lawyers? There's still some hope for humanity on the normative side -- perhaps AI can't make choices about value -- but still, that's a lot of jobs taken offline. And what about my job? What if an AI can give a better presentation on substantive due process than I can? That's not just me feeling inadequate -- remember article #1: AI won't just be better than humans at some things, it will be better at all things. We're all in the same boat here.

What does that mean for the concept of capital ownership? Once AI eclipses human capacity, do we enter an age of permanent class immobility? By definition, if AI can out-think humans, there is no way for a human to innovate or disrupt their way into the prevailing order. AIs might out-think each other, but our contribution won't be relevant anymore. If the value produced by AI remains privatized, then the prospective distribution of wealth will be entirely governed by who was fortunate enough to own the AIs.

More broadly: What does the world look like when there's no point to any human having a job? What does that mean for resource allocation? What does that mean for our identity as a species? These questions are of course timeless, but in this particular register they have also always felt very science-fiction -- the sorts of questions that have to be answered on Star Trek, but not in real life, because we were nowhere near that sort of society. Well, maybe now we are -- and the questions have to be answered sooner rather than later.

Thursday, December 01, 2022

The Judeo-Christian's Junior Partner

It's hardly a revelation at this point to observe how the "anti-CRT" style bills have quickly become tools to censor Jewish and Holocaust education. A recent story out of Florida -- where a school district cited the state's "don't say gay" bill to block a parent from giving an educational (but non-theological) presentation teaching students what Channukah is -- wouldn't, on its own, even be especially noteworthy (the district did eventually reverse itself). But there were some details in the story that I thought were illustrative of the location Jews are perceived to occupy in religious pluralism discourse versus the position we actually occupy.

The first thing to note about this district is that it is not some sentinel of secularism. The schools reportedly are replete with "holiday" decorations that are very much tied to Christmas. Nonetheless, when the parent tried to schedule her yearly Channukah presentation, the district demurred on the grounds that if the school allowed such an event, "they would have to teach Kwanza and Diwali."

To which the Jewish parent replied: "I think that would be awesome!"

What we see here is how "Judeo-Christian" renders Judaism the (very, very) junior partner. Christians won't actually give Jews equal standing with Christians in terms of holiday exposure; as the "junior" they're not entitled to such largesse. But Christians assume nonetheless that Jews remain partners in the desire to maintain "Judeo-Christian" hegemony against upstart interlopers like Hindus or African-Americans. The idea that Jews would not be horrified by, but would in fact welcome, greater inclusion for other minority faiths and creeds -- that Jews actually identify more with other minority faiths and creeds than they do with hegemonic Christianity -- is incomprehensible.

The reality is that this unequal partnership is a creature of the Christian, not Jewish, imagination. Even if "Judeo-Christian" ever actually were a relationship of equals -- and I can scarcely imagine it -- the fact is Jews do not see ourselves as part of this "Judeo-Christian" collective with a shared interest in standing against other minorities. That religious outsiders might be included is for us a feature, not a bug.

Tuesday, November 29, 2022

An Alum Reaches Out With a Question

It is the nature of being a law professor that one sometimes fields questions from alumni, who naturally turn back to their alma mater's faculty whenever they have a question about some burning issue of the day. I'm typically happy to get these emails, as it is nice to stay engaged with the broader university community and to provide what insight I can in the areas where I claim expertise.

For example, just today I received an email from a Lewis & Clark Law grad, class of '80, who had the following inquiry (reproduced below in full):

Question: Why is it considered ant-semitic [sic] to point out Jewish domination of the media, international finance, and Hollywood? 

It's so nice to be recognized as a subject-matter expert.

The email came from the guy's official law firm email address, which for some reason I find tickling. (I also find it tickling that he was given a stayed suspension from the practice of law in 2020, following a censure and a reprimand, all within the past five years.) Nonetheless, I think I'll decline to reply to this one.