
The Second Draft - Volume 37, No. 3
Three Blind Drafts: An AI-Generated Classroom Exercise
March 26, 2025

This article offers a potential tool for legal writing professors seeking to quickly orient students to the power, and the potential peril, of using generative artificial intelligence tools in the practice of law. It describes a classroom-tested exercise designed to engage students in the critical evaluation of memos or briefs generated by different AI systems. Through this exercise, students quickly grasp the pitfalls of these tools while also beginning to understand that different AI products suit different purposes.
Legal writing faculty have largely moved beyond the question of whether law students should be taught about generative artificial intelligence.[1] The more pressing questions now center on how to teach both students and lawyers to use these tools wisely. This article does not claim to answer those larger questions, nor does it attempt to explain how best to use AI to teach specific legal writing skills.[2]
Professor Alsbrook first conducted this exercise in her objective legal writing class at Mercer University School of Law in April 2024. Professor Chase repeated it, with slight variations, in her objective legal writing class at Stetson Law School in September 2024, and it yielded similar insights. The authors offer their experience in the hope that it will be helpful to colleagues looking for resources and activities for their students.
The Classroom Exercise
This exercise works best when students are already familiar with the legal questions and issues that apply to the sample factual scenario. If the students have a basic understanding of the underlying legal questions and the applicable case law, class time and energy are conserved, and the students can quickly focus on the quality of the analysis and writing in the three blind writing samples created by generative AI systems.
The professor generates the samples before class by prompting each AI system to answer the same questions, based on the same factual scenario the students were asked to analyze in the previous memo[3] assignment.
In the first version of this exercise, the professor simply copied and pasted the same prompt into each generative AI product. While this worked well, later experimentation has shown that better memo samples result when the professor alters the prompt slightly to account for the strengths and weaknesses of each AI product.[4] It may also be beneficial to expand or change the mix of AI tools used as AI capabilities and product offerings evolve.[5]
The students are then asked to critique the three writing samples. They are assured that the samples were not written by anyone at their law school and are encouraged to be honest in their critiques. They are not told that the memos were generated by three distinct generative AI systems. The systems used in the classrooms described in this article were ChatGPT, Lexis+ AI, and Claude, but the exercise could be adapted to any future generative AI platform.
Suggested Reflection Questions for the Classroom Exercise
An important part of this AI classroom exercise is the selection of evaluation and reflection questions. These questions can be tailored to each professor’s current pedagogical goals and adjusted to reflect the students’ familiarity with generative AI technology and its capabilities. Here is a potential “starter list” for professors interested in replicating a version of this exercise in their classrooms:
- What do you think of the writing style used in this writing sample? Thinking about the techniques we have studied in this class, list at least three things you think the sample does well and why. Then, list at least three things you think the sample does not do well and why.
- Based on your knowledge and understanding of the underlying law, does the writing sample reference the “best cases” for this jurisdiction on this issue?
- Look up each case referenced in the writing sample in a reliable legal research database such as Westlaw or Lexis. Do the writing sample’s descriptions of these cases and their holdings accurately reflect what the cases say?
- Is the writing sample complete? Has the creator left out any information that would give the reader a better understanding of the law, the legal question, or key facts? If so, what is missing?
- Would you want to work with the creator of this writing sample? Why or why not?
The Evaluation Process
In Professor Alsbrook’s class, the students had already completed their own “open universe” style assignment, so they had conducted the research and analysis related to the fact pattern. During Professor Alsbrook’s in-class exercise, the students were divided into pairs, and each pair was given one of the three AI-generated memos to analyze and evaluate. Each pair also received a set of guiding questions, similar to those above, asking the students to critique the quality of the legal writing, the quality of the analysis, and the accuracy and relevance of the cited authority in each memo. The students were also instructed to verify the sources and cases cited in the memos to ensure they were on point and represented the best available authority.
During Professor Chase’s out-of-class exercise, the students were asked to evaluate each of the three memos independently.[6] The guiding questions included specific inquiries about how the writer used facts to support the analysis and asked students to identify the applicable rule of law that guided the writer’s conclusions (a task that was impossible for some of the sample memos).
Both Professor Alsbrook and Professor Chase asked their students whether they would want to work with the author of the memo and whether, after reading the sample, they would trust the author’s work in the future. This question prompted thoughtful reflection on the ways in which legal writing reflects on an author’s integrity, professional reputation, and future opportunities.
The Students’ Findings
To the professors’ delight, the students’ critiques varied significantly depending on which AI system had generated the sample they were evaluating:
- ChatGPT-Generated Memo: The students assigned the ChatGPT-generated memo praised the quality of the writing, noting that the memo was articulate, well organized, and had very good “flow.” They were disappointed, however, by the shallowness of the analysis, noting that the memo said little of substance about the law or the facts. They also expressed frustration when they discovered that not all of the cases cited in the memo could be located, which led to a broader discussion of the importance of source verification and the pitfalls of relying on AI without cross-checking its outputs.
- Lexis+ AI-Generated Memo: The memo generated by Lexis+ AI received mixed reviews. The students found it adequate in structure and logic, but they critiqued the selection of cases: the cited authorities were not the leading cases in the area, and the writing was “clunky” and lacking in sophistication. This feedback highlighted the need for students to assess a written piece in its entirety, attending to both the content and the quality of the language used in legal writing. It also provided an opportunity to reiterate earlier lessons about choosing strong and relevant cases, particularly for issues where case law is abundant but much of it does not speak to the questions in the assignment.
- Claude-Generated Memo: The students evaluating the Claude-generated memo were particularly critical. They noted a lack of substance in the arguments presented and pointed out the absence of any authoritative sources. This group’s discussion emphasized the importance of depth in legal analysis and the dangers of superficial reasoning.
Revealing the AI Source
After a spirited classroom discussion in which the students shared their critiques, the professors revealed that the memos had been written by generative AI systems. This disclosure prompted further lively discussion about the role of AI in legal research and writing. The students recognized the potential of AI as a tool but also acknowledged its limitations. The consensus was clear: while AI can assist in generating drafts and organizing thoughts, it cannot replace the need for independent verification of sources and a deep understanding of the law.
The classroom discussion also led to a helpful conversation about the long-standing legal practice of adapting previous work to current purposes. AI is a relatively new tool, but for decades lawyers have used templates, previous briefs, and other prior work to save time and effort. Students gained a deeper understanding of their ethical and professional obligation to view all previous work with skepticism, regardless of whether its source is a colleague or a computer, and to evaluate its language critically to ensure it works for the current task and desired outcome.
Implications for Legal Education
This exercise underscores several key lessons for legal educators. First, it demonstrates the importance of teaching students to approach AI-generated content with a critical eye. As AI continues to evolve, law students must be equipped with the skills to assess the accuracy and reliability of AI-generated legal documents.
Second, the exercise highlights the value of experiential learning in the legal classroom. By engaging students in simulated real-world scenarios where they must apply their legal knowledge and analytical skills, we can better prepare them for the complexities of modern legal practice.
Finally, the exercise serves as a reminder that, while AI has the potential to revolutionize legal research and writing, it is, like previous legal technologies, a tool that must be used judiciously. Law students, and indeed all legal professionals, must remain vigilant in verifying AI-generated content and ensuring that their legal work is grounded in sound research and reasoning.
Conclusion
This exercise demonstrates to students that AI can be a powerful ally but cannot replace the rigorous analysis and deep legal understanding that are the hallmarks of effective legal practice. The authors hope this description will assist other professors looking for ways to help their students think critically about generative artificial intelligence systems.
The authors would love to hear from professors who try variations of the “three blind drafts” exercise in their own classrooms. As the integration of generative AI into law practice and legal education becomes increasingly inevitable, sharing successful classroom techniques can help every professor seeking ways to present the challenges and opportunities of these technologies to their students.
[1] See, e.g., Kirsten K. Davis, A New Parlor is Open: Legal Writing Faculty Must Develop Scholarship on Generative AI and Legal Writing, 7 Stetson L. Rev. F. 1 (2024).
[2] The exercise presented in this article is meant to introduce students to the things generative AI does well and the things it may not do so well. There are myriad other ways to use generative AI in the legal writing classroom to help teach students how to do things like format memos. See, e.g., Sarah Parks, Using ChatGPT to Teach the CREAC Format to First-Semester Legal Writing Students, 36 Second Draft 1 (2024); Ashley B. Armstrong, Who’s Afraid of ChatGPT? An Examination of ChatGPT’s Implications for Legal Writing, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4336929; Sandra Simpson, ChatGPT Exercise in the LRW Class, Institute for Law Teaching and Learning, https://lawteaching.org/2023/02/13/chatgpt-exercise-in-the-lrw-class/.
[3] This classroom exercise could also be used in a persuasive writing class using different styles of briefs.
[4] To learn more about the process of drafting and revising prompts for generative AI, see Danny Liu, Prompt Engineering for Educators – Making Generative AI Work For You, Univ. Sydney Teaching Tips (Apr. 27, 2023), https://educational-innovation.sydney.edu.au/teaching@sydney/prompt-engineering-for-educators-making-generative-ai-work-for-you/; Jose Antonio Bowen & C. Edward Watson, Teaching with AI (Johns Hopkins University Press 2024) (this book, in particular, does a wonderful job of walking educators through the process of using generative AI in a way that is useful to both professor and student, and Professor Chase has used it to determine the best uses for generative AI in her courses, for this exercise and others).
[5] Prompt engineering is an iterative process, and prompts that work with one product may not work with another; indeed, a single prompt reused months later within the same product may not produce the same results, because generative AI models are updated over time and their outputs are not deterministic.
[6] Professor Chase originally planned this as an in-class exercise similar to Professor Alsbrook’s but, when faced with back-to-back class cancellations due to hurricanes Helene and Milton, shifted to an out-of-class delivery method. The exercise helped students focus on what they were learning about legal writing without having to draft anything themselves, and it proved an excellent lesson format for asynchronous instruction in exigent circumstances.