Can AI help ordinary people analyze real-world cases, including analysis based on information provided by the parties? For example, can it analyze claims, evidence, trial records, and judgments to produce guidance or a suggested outcome?
2 Answers
It can be helpful, but you should also be very wary of it.
AI is all about style over substance: the substance of an AI answer is a side effect of its mimicking the style it has seen in the material it was trained on.
AI doesn't have a concept of truth versus falsity. It isn't uncommon for it to literally make up cases and statutes that don't exist (something called AI hallucination), to cite cases that exist but have nothing to do with what the AI claims they say, or to make up quotations that sound like they could be in a mass of documents but aren't actually present there. It also isn't uncommon for an AI answer to advance a legal theory that sounds superficially good but is just plain wrong, especially if your question subtly suggests that this is the answer you want to hear.
Further, AI picks up on subtle aspects of the way that you phrase a question that can lead it in the wrong direction.
For example, a colleague of mine recently ran an AI search on a seemingly straightforward legal question. The question's keywords triggered the AI to look for cases containing them, and it reached the wrong conclusion because those keywords almost always appeared together in a fact pattern that wasn't analogous to our case; the opposite conclusion was reached in almost every case that didn't contain the two keywords in close proximity to each other. I didn't accept the AI answer at face value, however, because I understood the underlying theory and had read a couple of cases that were inconsistent with the conclusion the AI had reached.
AI has the virtue of being very fast and, generally, very articulate and clear. This is great in the maybe 80% of cases where it gives you a fairly reasonable answer. (For comparison's sake, AI gets the right answer to close calls about which sentence is more grammatically correct about 89% of the time.) But it is frequently wrong, too. So you need to double-check its assertions (never trust that a case cited by an AI, or a fact that it mentions, really exists), and you need some way to independently confirm that it isn't barking up the wrong tree. If the AI cites sources, you need to read the sources too. I've seen AI get things wrong more than once in the actual practice of law over the past few years. A link to 87 real-life examples from all over the world can be found here.
There are also some areas where AI performs better than others. If a question is about the meaning of a case that is taught in every law school textbook on the subject, say Marbury v. Madison, or it is summarizing a form contract that is used with small variations in countless transactions, the AI answer is usually going to be a pretty good one. On the other hand, the more obscure or specific a question is, and the less academic literature there is discussing it, the less likely it is that AI will give you a good answer.
When summarizing material, the better organized and written the original material is, the more likely it is that the AI summary of it will also be good.
Another problem for a non-lawyer is that the best legal AI applications are available only on a paid basis from legal publishing companies. Law is a topic where free, general-purpose AIs do less well than in other subject areas, especially when the legal question is not in the heartland of what is taught to first-year law students.
If you have really mastered the art of asking questions in a way that optimizes how well AI performs, however, you can get it to make a good first stab at an answer.
You might, for example, be able to ask a question in just the right way to get the AI to locate the parts of a many-day trial transcript where someone's capacity to make medical decisions is discussed. It might find 90%–95% of those passages, including the most important bits, and that might be a good place to start, or a good place to focus if you are very pressed for time. But it is no substitute for doing the document review the hard way.
There is a related discussion at Law Meta SE about ChatGPT answers and why they are banned on this site.
How to use AI in legal proceedings:
Exclusively as a guide on where to start your own research. That's it. Assume that everything it tells you is a blatant fabrication (because chances are it is).
So, what exactly is its use?
Well, if the AI "sees" discrepancies between statement A and statement B, you now know two documents to read through and what to search for. The discrepancy may be there. Or not. But if you can't make the time for a thorough analysis and you just need one "gotcha", then it's a great start. Or the AI may tell you that statute X doesn't apply here. Now you have a (potentially existent) statute that's worth looking at.
You may get (potentially even real) cases that sound similar to yours. That's also a great starting point. But treat these just as any other top-level Google Search result: it may be completely unrelated or inapplicable to your case (if they're even real).
Another valuable takeaway is what legal avenues exist in general for that rough set of circumstances, which can be a vague indicator of whether it's worth pursuing anything at all. Again: this could be completely wrong as well, anywhere from missing valuable options to making up valuable-sounding ones.
As a layman, you still need a lawyer. As a lawyer, you still need to do the work.
The best approach for generative AI for research of any kind is to treat it as an ancient trickster god. It may be genuinely helping you, but it could also be pulling a prank on you just for the giggles.