by Kenneth Grady
My children were young at just the right time for the Harry Potter books and some of the Harry Potter movies to attract them. My wife did a heroic job of keeping pace with what our children were reading, so all four of them read all the books and as a family we went to the early movies (by the time the later movies came out, my children had moved on). Even if you or your children were not part of the Harry Potter mania, many memes from that world have entered pop culture. I am going to focus on the defense against the dark arts.
For those of you who missed the whole Harry Potter thing, this is what you need to know. Harry Potter and his friends attend Hogwarts School of Witchcraft and Wizardry. They must take several core courses, one of which is “Defence Against the Dark Arts” (DADA) [no typo, the books are by an English author, after all]. Professor Severus Snape (played by the late actor Alan Rickman) is one of the professors who teach DADA.
“Your defences must therefore be as flexible and inventive as the arts you seek to undo”
— Professor Snape discussing defence during a 1996 lesson
AI And The Dark Arts
Those who practice the dark arts today are the experts who use computer technology against society. They engage in cyberespionage, use bots to grab control of our computers, produce ransomware, and turn our friends the Internet of Things against us. For every way we find that computers can help us, the practitioners of the dark arts can find ways to harm us with computers.
Enter the ill-defined world of artificial intelligence. AI is, to abuse another term from the fantasy world, a shapeshifter (though here I must point out that AI doesn’t need to carry the negative baggage that shapeshifters in the fantasy world must carry). We can’t even say we know AI when we see it, since many argue that when we see it, it no longer is AI (the “AI effect”).
AI is a bag of methodologies, techniques, approaches, theories, and algorithms that come together to form tools. This vague description is part of what makes AI so attractive. Almost anyone can claim at any time that what they are doing with a computer involves AI. Unless you know specifically what they are doing and how they are doing it, the assertion becomes hard to refute. An AI-enabled coffee maker can mean a device that learns the pattern of when you make coffee in the morning and starts turning itself on in anticipation of your arrival. Or, it can mean a coffee maker connected to the internet, drawing data from hundreds or thousands of sources to “invent” new and improved ways to brew your coffee.
For lawyers, the world of AI can be reduced to one word: risk. AI has the power to affect risk. It can greatly increase or decrease risk for clients. It also has immense influence over the future of legal services, another type of risk for lawyers. The intersection of legal services risk and client risk creates a third form of risk: societal risk.
If AI takes over too much of, or the wrong aspects of, legal services, the risk that governance will become brittle, unable to handle the vagaries of the human condition, becomes high. A rules-based society works well when we can comprehend and follow the rules. It will have more challenges (risk) when those rules become embedded in software, where they are harder to find and follow.
But, if we fail to embrace AI advantages, we risk wasting resources on things that need doing, but do not need or benefit from humans doing them. Lawyers are just starting to glimpse this lesson. Litigation may require document review, but computers outperform humans at massive document review. Worse, document review (or due diligence) drives humans away from the profession. No one wants to spend seven years of post-high school education training to compete with AI on mind-numbing tasks.
The argument for AI-savvy lawyers is the same as the general argument for lawyers always has been: lawyers help manage risk in society. Better yet, lawyers “keep” risk at a manageable level. Our tools are, and have been, governance control mechanisms: statutory law, common law, constitutional law, and all the attendant rules and regulations. This broad system — what Gillian Hadfield calls the “legal ecosystem” — helps keep us from spinning out of control.
If you want to check this assertion, then look at what happens when the legal ecosystem is out of balance. People start poking their fingers through the protective legal ecosystem. Lack of respect for the ecosystem rises and we experience behavior inconsistent with a well-functioning system.
Steven Pinker, the noted Harvard psychologist, says violence is declining in the world. It would seem, however, that anger levels may be rising. If we use anger as a proxy for frustration with governing institutions, it is not a long walk to get to anger with the legal ecosystem.
Lawyers who are not AI savvy cannot effectively keep the legal ecosystem tuned. They can’t determine how to use the ecosystem to reduce the risk posed by lack of access to justice. They struggle to advise clients on how to reduce the risks (and capture the benefits) of AI. They also fail to use AI to improve legal services delivery, diverting too many resources from counseling and advising to spend time on rote tasks.
It is difficult to recognize a tipping point at the moment you are on the fulcrum. Before the fulcrum, all you see is a steep hill. Only looking back can you see when you were on the fulcrum. At this moment, I think we are on the fulcrum — at that point in time when being an AI-ignorant lawyer passes from point of pride to detriment. For society, we may be at the point where having AI-ignorant lawyers raises risk in the legal ecosystem beyond an acceptable level.
If Professor Snape was correct, then our defenses against those who practice the AI dark arts must be as flexible and inventive as the arts we seek to undo. Our greatest art against society veering out of control, and against those who would steer it in that direction, has been the law. We may have just reached the point where having AI-savvy lawyers moved from nice-to-have to need-to-have.
This post was also published on Medium.