Description of the program, provided by the presenters:
Legal research vendors have taken new steps to integrate artificial intelligence and machine learning in their products. The latest development is brief analysis tools that read a document and formulate searches with little to no additional human input. This session will critically evaluate these new tools and compare them to existing search options. Participants will learn how brief analysis tools work, when they should and should not be used, and how to react to increasing researcher reliance on algorithms.
A recording of this program can be viewed at https://www.eventscribeapp.com/live/videoPlayer.asp?lsfp=NnhvN1dyQ3hQZERmOXBiU250VHRnbTJjQ3FqN0Q2Ti9DWlFlUGcxMkxXZz0= (AALL 2021 Annual Meeting registration required).
I offered the editors of this blog the following reason for wanting to recap this program:
“I have been getting questions from LL.M. students over the last few years about automation tools and have always told them that I don’t believe in them, or in any automation like that, for law students. I decided this was the year I would finally educate myself about these tools so I could give a better answer about why they should not be used when you are writing your first-ever brief or memo, because I really think they can compromise the learning process.”
After attending this program (which, in terms of the quality of both the content and the speakers, was one of the best I have ever attended at an AALL meeting), I am not 100% sure I have a perfect answer to the questions I get from LL.M. students about automated legal research tools, but I should at least have better answers than I did in the past.
Ben Keele, Katie Labonte, and Susan Nevelow Mart presented preliminary results from a research project they have been working on which involves evaluating four brief analyzer tools that are included in legal research databases:
- Casetext Cara (https://casetext.com/cara-ai/)
- Westlaw Quick Check (https://legal.thomsonreuters.com/en/products/westlaw-edge/quick-check)
- Bloomberg Brief Analyzer (https://pro.bloomberglaw.com/brief-analyzer/)
- Lexis Brief Analysis (https://www.lexisnexis.com/community/insights/legal/b/product-features/posts/feature-spotlight-experience-brief-analysis-on-lexis)
All four of these products have a common functionality: they analyze a brief the researcher submits to determine (a) if the cases cited in the document are still good law, and (b) if there are cases other than those cited that might be helpful for the researcher to consider in crafting their argument.
In his portion of the presentation, Ben made it very clear that brief analyzers are NOT front-line research tools. Instead, they require that the researcher submit a fully-drafted document. If a researcher just submitted their notes from torts, for example, the brief analyzer may produce something, but that something won’t be very useful.
The vendors advertise that their products include “artificial intelligence” (AI) tools like these. But what exactly is AI? According to Susan, it is a way computers can be trained to analyze data, although in the end it may be no more than “math at its heart that can get away from creators.” She thinks “machine learning” might be a better term for what this functionality is and does.
Whatever you call it, she argues, it has the following characteristics:
- It “cannot replace creative problem solving under uncertainty and complexity that is legal research and analysis,” which means that “human lawyers have to do heavy lifting in creative lawyering.”
- It “is not creative problem solving” and instead is “just a tool to augment [human] thinking.”
It also has some distinct drawbacks in legal research, including:
- Baked-in viewpoint bias, given that each platform has its own metadata/classification system
- Algorithmic fluidity, which causes search results to vary over time and means those results cannot substitute for deep reading and deep thinking
The presenters also found that analyzers are not designed to do all the research, but only answer very specific questions, and that their effective use depends on further human interaction after the analysis is complete.
Katie introduced the study that the three presenters had undertaken. They selected a brief from each of 10 cases (seven federal, one Indiana, one Colorado, and one California) and ran it through each of the four brief analyzer tools. From there, they analyzed the cases returned based on each database’s analysis of the brief.
To avoid playing the spoiler, I will leave a detailed reveal of the results to their article about the project. However, it is interesting that, as the presenters had expected, each of the four tools returned somewhat different results when analyzing the same brief, and that there was relatively little overlap between analyzers. This could be attributed to the various ways the analyzers broke the document down and the type of data from the brief each analyzer focused on and used in its analysis.
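The presenters’ overlap comparison can be sketched as a simple set calculation. The snippet below is purely illustrative: the tool names come from the program, but the case lists are invented placeholders (the actual results are reserved for the presenters’ article). It measures pairwise overlap between analyzers’ suggested cases using Jaccard similarity (shared cases divided by total distinct cases).

```python
from itertools import combinations

# Hypothetical results: the set of cases each analyzer suggested for one brief.
# These case names are invented for illustration only.
results = {
    "Cara":           {"Smith v. Jones", "Doe v. Roe", "Acme v. Beta"},
    "Quick Check":    {"Smith v. Jones", "Gamma v. Delta"},
    "Brief Analyzer": {"Doe v. Roe", "Epsilon v. Zeta"},
    "Brief Analysis": {"Smith v. Jones", "Doe v. Roe", "Eta v. Theta"},
}

def jaccard(a: set, b: set) -> float:
    """Share of cases two analyzers have in common (intersection over union)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Compare every pair of tools; low scores mean little overlap.
for (name_a, set_a), (name_b, set_b) in combinations(results.items(), 2):
    print(f"{name_a} vs {name_b}: {jaccard(set_a, set_b):.2f}")
```

With made-up data like this, most pairs score well below 0.5, which is the shape of the finding the presenters describe: the same brief produces substantially different suggestion lists depending on the tool.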
What were some of the lessons learned?
(1) What is the analyzer’s place in the legal research process, and are bots taking over? No, we are not anywhere near that place – it is more like an augmenting tool.
(2) A well-structured and supported (thoroughly cited) document is needed as input to optimize the analyzers’ abilities.
(3) The researcher should be expected to be well-versed in the issues and able to refine the results with filters, issue selection, and keywords. In fact, this seems to be the expectation of the tools’ developers.
One thing I took from this, which should be relatively easy to explain to students, is that these tools will not replace the researcher having to read the secondary sources, run the searches, read the cases, craft the arguments, and write the brief. According to the presenters, a brief analyzer basically won’t even work until you can plug a finished brief into it, and that is by design. This was a huge relief for me to hear. We are still clearly far away from a world of case law research in a common law legal system where a researcher can just click a button in Westlaw and get all the cases they need. Although your LL.M. students from civil law jurisdictions may expect the technology to be that sophisticated, it is not yet, and in my opinion it likely never will be.