AI in Law: Definition, Current Limitations and Future Potential

Artificial intelligence is on the rise and has already become a buzzword in the legal industry. So far, the discussion around the use of technology in the legal industry has focused on the battle between humans (lawyers) and machines (robots) – and the possibility of the latter taking over the jobs of lawyers. This short article focuses instead on the underlying technologies behind this debate.

What Is Artificial Intelligence?

Artificial Intelligence (AI) was famously defined by John McCarthy as “the science and engineering of making intelligent machines.” AI could also be defined as “cognitive technologies.” However labelled, the field has many branches, with significant connections and commonalities among them. The most important branches are currently machine learning (including deep learning and predictive analytics) and natural language processing (NLP), which comprises translation, classification and clustering, and information extraction.

There is a lot of buzz around the term AI. It is an idea that has oscillated through many hype cycles over many years. AI seems almost magical and a bit scary. In 2015, a group of high-profile scientists and entrepreneurs warned that AI might be the last invention of the human race. In his bestseller Superintelligence: Paths, Dangers, Strategies (2014), Nick Bostrom warns that an intelligence explosion through AI could lead to machines exceeding human intelligence. In Bostrom’s view, superintelligent AI systems would quickly dominate the human species.

Weak vs. Strong AI

Even though the discussion of “superintelligence” is extremely interesting and sometimes mind-boggling, it has nothing to do with AI in law (at least at the moment). When discussing the use of AI in law, it is important to bear in mind the distinction between “weak” and “strong” AI. AI used in the legal industry is commonly referred to as “weak” (or “shallow”) AI: it seems intelligent, but it performs only defined functions and has no self-awareness. Weak AI has to be distinguished from “strong” AI, also known as artificial general intelligence (AGI) or “deep” AI. Strong AI would match or exceed human intelligence, which is often defined as the ability “to reason, represent knowledge, plan, learn, communicate in natural language and integrate all these skills toward a common goal.” To qualify as strong AI, a system would have to master all of these abilities. Whether or when strong AI will emerge is highly contested in the scientific community.

AI In Legal

AI currently used in legal technology is far away from strong AI. Therefore, when we speak of AI in the context of legal, we mean technologies which seem intelligent but have defined functions, i.e. weak AI. Such a system uses models of its problem domain given to it by programmers. Weak AI cannot perform autonomous reduction, whereas strong AI would have a real understanding of a problem domain. Weak AI therefore requires an expert who performs all required reduction in advance and implements it into a system. As a result, a weak AI system can only solve a specific set of tasks. A chess computer, for instance, would not be able to solve legal problems; the problem statement is outside of its capabilities. A legal technology tool using weak AI is only able to “understand” the specific problem domain it was designed for. Modern AI algorithms are therefore only able to replicate some human intellectual abilities. This is also true for IBM’s cognitive computer Watson, which famously won the quiz show “Jeopardy!” against human competitors in 2011: Watson is a (very advanced) machine learning system, not a computer with human-level intelligence. Current AI tools are not able to mimic advanced cognitive processes, such as logical reasoning, comprehension, meta-cognition or contextual perception of abstract concepts, that are essential to legal thinking. We should bear these limitations in mind when we proceed to explore the techniques used to “produce” weak AI.
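The idea that an expert must perform all “reduction” in advance can be illustrated with a deliberately simple, hypothetical sketch: a keyword-based document classifier in which a human has hard-coded the categories and their indicative terms. The category names and keywords below are invented for illustration and do not describe any real legal technology product.

```python
import re

# Minimal sketch of "weak AI": all domain knowledge has been reduced
# in advance by a human expert and hard-coded as rules.
# Categories and keywords are invented for illustration.
EXPERT_RULES = {
    "contract": {"agreement", "party", "termination", "clause"},
    "litigation": {"plaintiff", "defendant", "court", "motion"},
    "ip": {"patent", "trademark", "copyright", "infringement"},
}

def classify(document: str) -> str:
    """Return the category whose keyword set overlaps the document most."""
    words = set(re.findall(r"[a-z]+", document.lower()))
    scores = {cat: len(words & keywords) for cat, keywords in EXPERT_RULES.items()}
    best = max(scores, key=scores.get)
    # Outside its hand-crafted problem domain, the system simply fails:
    return best if scores[best] > 0 else "unknown"

print(classify("The agreement allows either party early termination."))  # contract
print(classify("Checkmate in three moves."))  # unknown: chess is outside the domain
```

The system appears to “understand” legal documents, but every bit of that apparent understanding was put there by its programmer; confronted with a chess position, it has nothing to say.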

Today’s Limitations – Tomorrow’s Potential

Even though it is important to understand the current limitations of AI, it is equally important to understand the technological progress that is unfolding at rapid speed. Computational power is growing exponentially, and exponential growth is difficult for humans to comprehend, as we generally think in linear terms. The most famous expression of this growth is Moore’s Law, which states that CPU processing power increases by a factor of two every 18 to 24 months – in other words, it doubles approximately every two years. Assuming that computers continue to double in power, their hardware alone will be over two hundred times more powerful in 2030. Put differently, the next decade will bring more than thirty times as much increase in power as the previous one.
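The arithmetic behind these figures can be made explicit. A rough sketch, assuming a two-year doubling period and a 2014 baseline (both assumptions, chosen only to match the round numbers above):

```python
# Back-of-the-envelope Moore's Law arithmetic: processing power
# doubles once every `doubling_years` years.

def growth_factor(years: float, doubling_years: float = 2.0) -> float:
    """Factor by which processing power grows over `years`."""
    return 2 ** (years / doubling_years)

# From 2014 to 2030: 16 years -> 2**8 = 256, i.e. "over two hundred times".
print(growth_factor(2030 - 2014))  # 256.0

# Any single decade: 2**5 = 32, i.e. "more than thirty times" per decade.
print(growth_factor(10))  # 32.0
```

Because the growth compounds, each decade multiplies power by the same factor of thirty-two, which is why the absolute gains of the next decade dwarf those of the last.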

Regardless of whether this growth will continue, and whether growing computational power means that the abilities of AI systems will grow exponentially as well, people tend to underestimate the potential of tomorrow’s applications by evaluating them in terms of today’s enabling technologies. This tendency is sometimes referred to as “technological myopia”. It should be borne in mind when we discuss the application of technology in the legal realm. The techniques currently used in legal technology tools are machine learning (including deep learning and predictive analytics) and natural language processing (NLP).

Current Tools Are Far Away From Rendering Legal Advice… For Now

The work of lawyers is sometimes highly complex. Lawyers need to process complex sets of facts and circumstances, consider applicable legal rights and obligations and render reasoned opinions and guidance on the best course of action based on all of that information. A lawyer (ideally) has the ability to understand the background and context of events, general knowledge of how the world works and knowledge of the law and its application.

The work of lawyers also involves a lot of automatic filtering out of irrelevant noise and focusing on the signal. These tasks are generally highly challenging for computers. To completely replicate a human lawyer would mean re-engineering a process that produces creative, imaginative and innovative ideas and results whilst drawing on a comprehensive set of legal information and an “experience database” comparable to that of an experienced lawyer. As lawyers know, rendering legal advice can be an extremely complex task, and replicating it with computers using AI will be extremely difficult. Current tools are far away from achieving this.

Even though there are substantial limitations today, this does not mean that they will still exist in five or ten years. The abilities of technology might change more radically and sooner than we expect. Hence, although machines are just beginning to perform legal tasks, we can expect substantial progress in the coming years. Someday computers may mimic intelligent legal reasoning. Until then, the question is not whether they are replacing lawyers, but how they change the way lawyers work.