AI Impact: 5 Ways Artificial Intelligence May Impact the Legal System

Artificial intelligence is changing the legal landscape, and its impact is bound to grow in the coming years. From generating legal documents to giving sentencing recommendations, AI tools will affect everyone from business lawyers in Albury-Wodonga to paralegals in Albany, New York. 

Below are five likely impacts:

1. Document creation

Recently, large language models (LLMs) such as ChatGPT have been making quite a splash. With a simple prompt, these models can generate natural-sounding text to answer questions, write stories, or draft essays.

In the legal world, LLMs may help generate legal documents from simple prompts. This is especially true for basic forms with set structures that only require you to add a few details. That being said, LLMs aren’t perfect: they’re prone to mistakes, including inventing facts and citations outright. Human lawyers will still need to double-check the facts and polish up the writing.
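For readers curious what that workflow looks like in practice, below is a minimal sketch in Python using the openai package. The model name, the prompt, and the parties named in it are illustrative assumptions, not a recommendation of any particular provider or template.

# A toy sketch of prompting an LLM to draft a basic legal form.
# Assumes the openai Python package is installed and an OPENAI_API_KEY
# environment variable is set; model and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = (
    "Draft a simple non-disclosure agreement between Acme Pty Ltd "
    "and a freelance contractor, governed by the laws of New South Wales."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # a first draft, not legal advice

Whatever the tool, the output is only ever a first draft; the human review step described above is not optional.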

2. Document review

Alongside generating (some) legal documents, AI can also help lawyers review documents faster. More specifically, AI tools can help lawyers classify documents, flag any problematic clauses, point out any inaccuracies, and group relevant information together. 

This is particularly helpful during discovery, the process of identifying which documents are relevant to a lawsuit and must be shared with the opposing party. Be that as it may, caution is key. Since words don’t always have a fixed meaning, these AI tools are far from perfect. Just as with document generation, document review will still require legal professionals to check and edit any AI output.
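To make that caution concrete, here is a deliberately naive clause-flagging sketch in Python. Real review tools rely on trained language models rather than a hand-written watchlist; the terms and the sample contract below are invented for illustration.

# A naive "flag problematic clauses" sketch: scan a contract for phrases
# a reviewer wants to double-check. Keyword matching misses context,
# which is exactly why a human still has to read the flagged clauses.
WATCHLIST = ["indemnify", "unlimited liability", "perpetual", "non-compete"]

def flag_clauses(contract_text: str) -> list[tuple[int, str]]:
    """Return (clause number, clause text) pairs containing watchlist terms."""
    clauses = [c.strip() for c in contract_text.split(".") if c.strip()]
    return [
        (i, clause)
        for i, clause in enumerate(clauses, start=1)
        if any(term in clause.lower() for term in WATCHLIST)
    ]

sample = (
    "The Contractor shall indemnify the Client against all claims. "
    "This agreement is confidential. "
    "The licence granted here is perpetual and irrevocable."
)
for number, clause in flag_clauses(sample):
    print(f"Clause {number} needs review: {clause}")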

3. Intellectual property 

As AI tools continue to be adopted across a broad range of industries, questions of intellectual property are going to be raised and debated. This is particularly true in the realm of creativity. When people use generative AI to create works of art, do they own that art? Furthermore, if an artist’s work was used to train an AI without the artist’s permission, is that legal? Questions like these will fall to intellectual property lawyers and the courts to work out.

4. Sentencing recommendations

Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is an algorithmic risk-assessment tool used by some US courts to inform bail and sentencing decisions. More specifically, it scores how likely a defendant is to re-offend.

How it arrives at that score, however, is far from objective. A 2016 ProPublica investigation found that COMPAS falsely flagged Black defendants as likely to re-offend at nearly twice the rate of white defendants. This raises the issue of algorithmic bias: if the data used to train an AI is biased, the AI’s output will likely be biased as well. On top of that, because the algorithm is proprietary, the public doesn’t know how COMPAS generates its recommendations.
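The training-data point is easy to demonstrate with a toy simulation (the numbers below are invented, not COMPAS data): two groups re-offend at the same true rate, but one group’s past offences are recorded twice as often, so a naive risk score fitted to those records inherits the gap.

# Toy simulation of algorithmic bias. Both groups re-offend at the same
# true rate (30%), but group B's offences are recorded twice as often as
# group A's. A "risk score" estimated from the recorded labels then makes
# group B look twice as risky, even though behaviour is identical.
import random

random.seed(0)
TRUE_RATE = 0.3                      # genuine re-offence rate, both groups
RECORD_RATE = {"A": 0.5, "B": 1.0}   # how often an offence enters the data

def observed_label(group: str) -> int:
    reoffended = random.random() < TRUE_RATE
    recorded = random.random() < RECORD_RATE[group]
    return int(reoffended and recorded)  # the label a model would train on

for group in ("A", "B"):
    labels = [observed_label(group) for _ in range(100_000)]
    print(f"Group {group}: estimated risk = {sum(labels) / len(labels):.2f}")
# Prints roughly 0.15 for A and 0.30 for B: the gap comes from the records,
# not from any difference in behaviour.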

5. Deepfake evidence

Deepfake technology can create fake images and videos that look real. It, quite literally, puts (digital) words into someone else’s mouth. At the moment, most deepfake videos are relatively easy to spot, but that’s bound to change as the technology improves.

For the legal profession, deepfake images and videos present a huge risk because they can be used to fabricate evidence. To mitigate that risk, lawyers may need to undergo training, remain vigilant, and engage digital forensics experts to authenticate certain kinds of evidence.

While artificial intelligence promises real benefits, such as helping lawyers review documents quickly, it also comes with considerable risks, such as the fabrication of evidence. Balancing those benefits and risks will shape the legal industry for years to come.