The use of artificial intelligence is increasing in industries such as healthcare, finance, retail and manufacturing, but what would happen if AI were used by the legal system? In recent weeks, a case in which a plaintiff attempted to have an AI-generated avatar act as counsel sparked debate about whether AI should play a role in the courtroom.
AI may be beneficial for efficiency, but its limitations in perceiving reality, reasoning about ethics and understanding symbols make it ill-equipped to replace or significantly influence judgment in the courtroom.
AI has proven to be fundamentally unreliable, which is objectionable in an environment where truth is essential. Though it can mimic human characteristics, AI remains a technology limited to scanning data and recognizing patterns; it does not comprehend the underlying context or concepts.
This lack of understanding has already had real-world consequences. In June 2023, lawyers were sanctioned after citing fictitious legal cases, generated by a chatbot, in a court filing.
Because AI lacks the critical thinking needed to distinguish fact from fiction, a weakness compounded by the misinformation that pervades the internet, it will repeat whatever untruths it finds. The tech industry even has a name for AI's tendency to fabricate sources and manufacture facts: hallucinations.
AI's reliance on machine calculation also falls short of human perception. Humans can interpret emotional nuance, social context and abstract concepts like justice. Legal decisions are not reached through logic and facts alone; the nuances of morality and ethics are key.
AI manipulates symbols and patterns that carry no inherent meaning. That data alone is devoid of morality and blind to philosophical subtleties. Humans can make use of the technology's output, but the information gains significance only when people view it through the lens of human experience and ethics.
Furthermore, the judicial system largely deals with human perception. A judgment rests on perceptions of reality: a witness's perception of the crime, of who was involved, of whether weapons were used, or a prosecutor's perception of past cases. If people who cannot consistently perceive reality are deemed incapacitated in the courtroom, technology with the same limitation should not be an exception.
Not only do technology's calculations pale in comparison to human perception, but they also cannot replicate human experience. Because humans cannot be reduced to algorithms, computers cannot perfectly replicate the human mind. Human consciousness is composed of emotions, empathy and morality.
If AI cannot comprehend concepts that are foundational for justice, it is unfit to play a major role in the kind of decision-making that can determine whether someone deserves freedom.
Although AI's calculations cannot substitute for human judgment, it can still be an asset in the courtroom.
AI can transcribe speech into text and record depositions to create more thorough records. It can also provide interpretation services: California courts, for example, have seen a million interpretations a year, and many people are denied court services because of language barriers. AI can also reduce the time spent on manual labor, allowing lawyers to dedicate more time to their clients.
There is more to the legal system than just law. The law serves as a tool for obtaining justice, upholding morality, offering redemption and preserving human dignity.
AI may enhance administrative functions, but legal responsibility should remain in the hands of humans.