
The Second Draft

Write, AI, Revise: A Framework for Integrating Generative AI into Legal Writing Self-Assessment

  • Casey M. Reiter
    Associate Professor and Director of Legal Writing
    St. Thomas University College of Law

Published: December 2025

Legal writing professors are increasingly tasked with preparing students for a world where generative AI tools like ChatGPT are both ubiquitous and ethically fraught.[1] Generative AI is not going away, but neither is the need for students to learn to think and assess for themselves. Rather than resisting these tools, this article proposes an integrative approach: using generative AI as a self-assessment partner in a structured pedagogy called “Write, AI, Revise.”

1. The Problem

Students struggle with self-assessment. They also fear professor feedback, too often equating critique with failure. That fear makes them reluctant to engage with comments that are truly intended to be roadmaps for improvement. Meanwhile, generative AI tools sit right there, ready to give feedback 24/7. Because students perceive generative AI as faceless and nonjudgmental, they seem more willing to engage with it than with us. 

But that willingness has gone off-course: instead of using AI as a mirror to reflect shortfalls, many students use it as a shortcut—outsourcing drafts without reflection, never questioning the output, and bypassing the hard work of revision. This misuse can hinder the development of the self-assessment and independent critical thinking skills necessary in the practice of law.

I developed the proposed framework after observing that many students were already using generative AI and realizing that policing students’ AI use was both futile and misaligned with the deeper pedagogical goal. What I wanted was a way to channel the students’ willingness to engage with AI into something useful. I wanted to make AI a sounding board for better thinking, not a substitute for it, and to use that practice as a bridge to engaging more openly with feedback from professors. 

When students learn to approach AI feedback critically, they also become more practiced at receiving feedback generally without defensiveness. In this way, generative AI can become a low-stakes training ground for habits of openness that make professor critiques more impactful. With guided structure and guardrails, I believe we can gradually retrain students to see all feedback (machine and human alike) as fuel for growth, while preserving their engagement in the analytical processes at the core of legal reasoning.

2. The Solution: Write, AI, Revise

Many excellent exercises already help students sharpen their evaluative judgment and deepen their understanding of what makes legal writing effective, for example by evaluating externally produced or AI-generated briefs.[2] The method I propose here builds on the spirit of that guided engagement and turns the AI lens inward—helping students use generative AI tools to reflect on and improve their own work rather than treating them as ghostwriters. This structured three-step process asks students to: (1) write a draft independently; (2) test that draft with AI prompts; and (3) revise with reflection.

The Process

  1. Write: Students complete a writing assignment draft entirely on their own.

  2. AI Prompts: Students analyze their drafts using a generative AI tool and a curated bank of professor-approved prompts designed to surface structural, analytical, and rhetorical weaknesses. Sample prompts include: “Identify areas where my writing lacks clarity or precision and explain why.” “Without revising, explain the structure of my analysis in this memo as it currently exists.” “Suggest how I can improve the organization of this legal argument.” If the answers from generative AI are unclear, students can also ask follow-up questions for clarification.

  3. Revise: Students revise their drafts based on both AI insights and their own critical thinking. They must also submit a short reflection memo explaining what revisions they made and why. Sample reflection prompts might include: “What was one specific suggestion from AI that you applied, and how did it change your argument?” or “What did the AI miss that you noticed and corrected on your own?” Professors may also consider pairing students for a short AI debrief, where they compare what AI flagged in their drafts and reflect on what surprised them.

Here is a quick example of how this looks in practice[3]:

[Prompt to AI, after Student uploads independent first draft]: “Highlight one section of my memo that lacks clarity and explain why.”


[AI Output]: “In paragraph 3, your explanation of the standard is vague. The phrase ‘some courts have allowed...’ needs clarification. Which courts have allowed this? Under what conditions?”


[Student Reflection]: “I hadn’t realized how vague that sentence was until AI flagged it. I revised it to cite two relevant cases and specify the standard used.”

Here is another example that includes a follow-up question[4]:

[Prompt to AI, after Student uploads independent first draft]: “In the voice of a legal writing professor, please identify where my CREAC feels repetitive.”


[AI Output]: “You’ve repeated the definition of [the same rule] in your opening paragraph, again in your rule explanation, and again in your application section. Once you’ve stated a rule clearly, you don’t need to restate it in full each time—readers will assume it continues to apply.”

 

[Student follow-up question to AI]: “What is the harm in repeating the rule if that is the law?”

 

[AI Output]: “Good question. The harm in repeating a rule in multiple places is that it undercuts the precision and efficiency legal readers expect. In a CREAC, your reader assumes that once you’ve stated the governing rule, it remains in force throughout your analysis. If you restate it over and over, three things happen: (1) the writing starts to feel padded, which makes the analysis look less disciplined; (2) the reader may wonder if you’re shifting the rule or uncertain about its scope; and (3) you lose valuable space you could devote to reasoning or application. Strong legal writing trusts the initial rule statement and then uses short, strategic references (‘as established above’) to keep the reader oriented without redundancy.”


[Student Reflection]: “[The AI response] made me realize I was re-explaining the same rule over and over without advancing my point, and that helped me figure out what to change.”

Having students draft first and reflect afterward gives them a clear record of their thinking, encourages accountability, and motivates meaningful revision over blind copying and pasting. Giving students room to ask follow-up questions also trains them to probe feedback rather than passively accept (or avoid) it and builds confidence. This can be especially helpful for those who fear asking “dumb questions” during class and office hours. Many appreciate having a nonjudgmental second set of eyes—even if generative AI can sometimes deliver feedback with more sass than expected.

3. Implementation Tips for Professors

I have found this model works especially well for mid-semester memos or any assignment where students already have a complete draft written without AI assistance. Starting with unaided work preserves student ownership and curbs overreliance on generative AI.  

Once students have an independent draft, professors can introduce the method with a brief class module or workshop on generative AI tools and prompt development.[5] Demonstrating how to use AI for reflection—rather than generation—helps frame this as a metacognitive exercise, not a shortcut. When introducing the method, I explain that we are treating AI like a writing tutor, not a ghostwriter: its job is to help students see what they missed, not to write for them.

Finally, professors should provide some guidance on what meaningful revision entails. Students often equate “revision” with “light editing,” so it helps to reframe the revision stage as substantive: refining rule explanations, improving analytical depth, clarifying structure, and strengthening transitions. I personally share with my students that I often spend far more time revising than drafting, which can help calibrate expectations: if professors invest most of their time in revising their own work, students should expect to do the same. Offering a brief checklist or a short mini-lesson on how to evaluate the quality of a revision can prevent students from treating the AI step as a one-click correction. This also reinforces that the value of the method lies in the student’s analytical choices, not the AI’s suggestions.

To maintain the integrity of this method, professors should establish clear academic integrity guidelines. Students must understand that AI use under this method is limited to revision—not initial drafting. Requiring reflection memos can reinforce authorship by asking students to explain what they changed and why. Reflection memos also promote transparency, especially when students submit both their original draft and the AI-assisted revision. If needed, professors can compare the two drafts to identify signs of overreliance on AI, such as abrupt shifts in tone, structure, or vocabulary. Finally, professors new to generative AI should pilot the method with low-stakes assignments before integrating it into graded assessments—an approach that aligns with current calls to rethink assessment structures in light of both generative AI and the evolving expectations of the NextGen Bar Exam.[6]

4. Preventing Students from “Going Beyond the Prompts”

Of course, an obvious concern lurks behind any structured use of generative AI: what stops a student from bypassing this process entirely and asking AI to complete all three steps? The honest answer is that nothing can prevent this completely. But that possibility already exists in every writing assignment we give—AI or not. Ghostwriters, commercial supplements, and well-intentioned parents have always posed the same risk. The solution is not to ban the tool; it is to design the assignment so that outsourcing becomes both detectable and strategically pointless.

The framework itself creates some built-in guardrails. First, requiring students to submit their initial draft alongside their revision and reflection memo creates a clear record of their work. It becomes obvious when a “revision” is actually a wholesale replacement. Second, the reflection component demands specificity. Students must explain what weaknesses surfaced, which AI suggestions they adopted or rejected, and why. A student who outsources the process has no authentic decisions to describe, and the reflection exposes that immediately. Thin or generic reflections also open the door for natural follow-up with the student. 

If the concern is that students will outsource all three components, one practical option is to bring part (or all) of the process into the classroom. The entire Write-AI-Revise cycle can be completed in a structured class session, which minimizes opportunities for outsourcing while preserving the core benefits of the model. This option also gives professors a real-time diagnostic view of where students struggle, how they interpret AI feedback, and what kinds of revisions they make independently. For professors who want tighter guardrails, integrating even one component into class time is an easy and scalable solution.

Some students will still game the system; that is true of any assignment. The key is to make authentic engagement the path of least resistance. Most students are relieved, and even excited, when AI use is permitted within clear boundaries. When students feel they have a legitimate professor-approved way to use AI, the incentive to circumvent the rule diminishes. And when students understand that outsourcing only hides the very weaknesses legal writing courses are designed to identify and correct—gaps in rule synthesis, weak analytical habits, structural confusion, and difficulty translating facts into legal consequences—they see little value in bypassing the process. For most students, especially those who understand that bar exams and legal practice demand independent thinking, that is enough.

5. Conclusion

When used the right way, generative AI can actually support what we care most about in legal writing: getting students to think critically, revise meaningfully, and develop a voice that holds up in practice. “Write, AI, Revise” offers a straightforward, scalable way to bring AI into the classroom without letting it take over. It also models a habit students will need throughout their legal careers: taking a rough draft, asking critical questions, and revising with purpose. If generative AI can help our students build the discipline of revision and remove the fear of feedback, then it is not a threat to legal writing—it is the ally we have been waiting for.

 


[1] See John Bliss, Teaching Law in the Age of Generative AI, 64 Jurimetrics J. 111-61 (2024) (discussing the urgency and complexity of integrating generative AI into legal education).

[2] See Margie Alsbrook & Ashley Krenelka Chase, Three Blind Drafts: An AI-Generated Classroom Exercise, The Second Draft (Spring 2024), https://www.lwionline.org/article/three-blind-drafts-ai-generated-classroom-exercise. 

[3] For illustration, the AI responses shown below reflect typical patterns of feedback from common generative AI tools rather than verbatim outputs.

[4] For illustration, the AI responses shown below reflect typical patterns of feedback from common generative AI tools rather than verbatim outputs.

[5] See Joe Regalia, From Briefs to Bytes: How Generative AI is Transforming Legal Writing and Practice, 59 Tulsa L. Rev. 193, 222–37 (2024) (emphasizing the need for legal professionals to develop sophisticated prompt engineering skills to extract accurate and nuanced responses from generative AI tools).

[6] See Carolyn V. Williams, Bracing for Impact: Revising Legal Writing Assessments Ahead of the Collision of Generative AI and the NextGen Bar Exam, 28 LEGAL WRITING 1 (2024) (discussing assessment redesign in light of both GenAI and the evolving expectations of the NextGen bar exam).