The Second Draft
Critique v. Composition: Rethinking Memos, Motions, and Briefs in the Era of AI and the NextGen Bar
January 9, 2026
Published: December 2025
Legal writing professors face a pivotal moment in legal education. The introduction of the NextGen bar exam and the rise of generative AI are fundamentally changing how we need to prepare our students for legal practice. While these two developments might seem distinct, they both demand the same core competency: strong critical reading and evaluation skills.
The NextGen bar exam’s new emphasis on document review and revision signals a broader change in how we might develop law students’ legal writing competency. Simultaneously, the emergence of generative AI tools requires lawyers to excel at critically reading, evaluating, and improving machine-generated text. These parallel developments suggest that the ability to analyze, critique, and improve existing legal documents may be just as crucial for lawyers’ success as the ability to draft them.
This article presents an innovative approach to legal writing assessment that addresses both challenges: replacing traditional memo, motion, or appellate brief writing assignments with structured document critique assessments. This method not only aligns with the NextGen bar exam’s focus on document improvement but also develops the critical reading and evaluation skills essential for working effectively with generative AI.
This essay first examines the foundational legal writing skills that the NextGen bar exam will test. Next, it explores the parallel skillset required for effective collaboration with generative AI tools in legal practice. Finally, it presents a novel legal writing assessment: replacing an assignment to draft a memo, motion, or brief from scratch with one that requires students to review and provide feedback on how to improve a draft memo, motion, or brief the professor provides. This structured critique develops the skills that real-world legal writing competency requires.
1. The NextGen Bar Exam’s Foundational Skills
Beginning in July 2026, the National Conference of Bar Examiners (NCBE) will phase out the Uniform Bar Exam (UBE).[1] Its replacement, the NextGen bar exam, will specifically test the foundational lawyering skill of legal writing.[2] The NCBE has published the NextGen bar exam’s content scope, which includes a list of six tasks that the exam will use to assess examinees’ legal writing competency.[3] Only one of those six tasks requires examinees to draft a memorandum, brief, letter, or other legal document from scratch.[4] Five of the six tasks do not necessarily involve drafting from scratch.[5] Instead, one task asks examinees to “draft or edit correspondence to a client.”[6] Another task asks examinees to “[d]raft or revise discovery documents.”[7] Three other tasks anticipate that examinees will be “given draft sections of” a complaint, answer, affidavit, or contract, implying that examinees will not draft those sections from scratch.[8] Once given those draft sections, examinees must “identify language that should be changed, explain why it should be changed, and suggest how that language should change, consistent with the facts, the relevant legal rules and standards, and the client’s objectives, interests, and constraints.”[9]
This shift in emphasis from pure drafting in the UBE[10] to evaluation and improvement in the NextGen bar exam reflects the realities of modern legal practice, where lawyers frequently collaborate on documents, utilize form books, and inherit work product from others. Demonstrating the ability to critically evaluate existing text and suggest strategic improvements may be more valuable than demonstrating the ability to draft perfectly from scratch.
2. Parallel Skillset for AI Text Generation
Furthermore, the skills needed to utilize AI in legal practice parallel those needed to succeed on the NextGen bar exam. To effectively use generative AI, lawyers need to critically assess whether AI-generated text aligns with the specific context, purpose, and audience of their legal documents.[11] This analysis requires careful, critical reading, yet many of our students are “power browsers.”[12] Power browsing involves non-linear, selective reading wherein readers “search for key terms and skim the text surrounding the key terms instead of reading line by line.”[13] Simply skimming AI-generated text is not thorough enough for a lawyer to verify that every word is accurate and beyond reproach. Additionally, if lawyers do not possess deep substantive knowledge of the law and of legal writing conventions and requirements, then they cannot succeed in this critical evaluation, no matter how carefully they read.
This parallel between AI collaboration and the tasks tested on the NextGen bar exam suggests that in this new era, the ability to critique and improve existing text may be just as important as, or perhaps even more crucial than, the ability to draft documents from scratch. Both scenarios require lawyers to carefully read text they did not generate, exercise judgment, apply legal knowledge, and articulate specific improvements needed to achieve the document’s objectives.
3. The Critiquing Assessment
Typically, the students in my 1L legal writing course draft two graded memos during the fall. In the spring, students draft a trial motion, receive feedback, and then draft an appellate brief on the same issue. Because both the NextGen bar exam and verifying AI output require students to critically read a draft document they did not write and then revise it or suggest revisions, it seemed prudent to develop an assessment for my 1L legal writing courses that does just that. Thus, I replaced the first memo assignment in my fall 2024 1L legal writing course and the appellate brief assignment in my spring 2025 1L course with a critiquing assessment. As you will see below, these assessments were designed for students to demonstrate their knowledge by describing what they saw in the memo or brief, explaining why the text did not quite meet the mark, and suggesting how the author could revise the text to better accomplish their goals.
4. Critiquing a Memo
I began the fall semester as usual. I gave students a client scenario with the attendant file and facts involving Arizona’s notice of claim statute. The client hypothetical suggested that four elements needed analysis. Students completed the same preparatory work as if they were drafting the memo themselves. They reviewed statutes and case law, prepared a chart of authorities, and outlined their arguments. As a class, we drafted the argument for element one. Then, in groups of five or six, the students drafted an argument for element two to give them practice.[14] At this point, I would traditionally have my students draft a memo on elements three and four. But instead of having the students draft that memo themselves this semester, I gave them a memo I had drafted that addressed elements three and four. The memo was representative of a “B” paper—competent and effective in several respects but with notable weaknesses. Students were tasked with providing detailed feedback and assigning a grade, mirroring the evaluative process used by legal writing professors. The exercise required them to identify effective aspects of the writing while also articulating specific areas for improvement.
For the assignment, I provided the students with instructions and a rubric, both of which were available before I released the memo. We reviewed the instructions and rubric in class, and the students then had three days to complete the assignment outside of class.
First, the rubric instructed students to allocate points to specific components or skills that the memo should demonstrate. If a student awarded full points for a particular component or skill, they could move on. But for each component or skill to which the student did not award full points, they had to give the author specific feedback explaining why they withheld points and describing how to improve the draft. For full credit, when advising the author on how to improve, students had to reference material from the course, such as readings, lectures, videos, and exercises.
Second, students had to correct the citations using track changes. They were responsible for correcting all citation errors, including whether the author cited the correct case reporter; used full cites and short cites where appropriate; used accurate pincites; and accurately stated and altered quotations.
I graded the students’ efforts based on four things: what they identified as problems, what they mistakenly thought was a problem, the thoroughness with which they explained how to correct problems, and the completeness of their citation revisions. I weighted their explanations of how to revise the memo the heaviest. The more they explained precisely what was wrong and how to fix it using evidence from the course materials, the more points they received. Conversely, if they did not give full points for a skill but their explanation of the “problem” was incorrect, they lost points. While grading, if a student identified a problem I had not anticipated, I read their explanation and gave them credit if it made sense.
I created two answer keys. The answer key for the citation corrections simply tracked all the changes I would have made, with explanations for the corrections that were more obscure than others. The answer key for the rubric included detailed explanations of the problems I expected the students to catch and examples of ways the anonymous author could fix those problems. Once all students had submitted their work and I was ready to release grades, I shared the keys. After receiving their grades and before meeting with me, students were required to compare their critiques to the keys and identify their own mistakes. The only individualized feedback I provided was a brief summary on their grade sheet highlighting a few strengths and one or two areas they might need to revisit before drafting the second memo, both based on problems they consistently identified or missed.[15]
5. Critiquing an Appellate Brief
In the spring, the process I followed was much the same. After receiving the client file, students researched the client’s problem, produced a chart of authorities, created a joint statement of facts with opposing counsel, drafted a motion for summary judgment, and received feedback on it in the form of live grading. The students also participated in a trial-level oral argument on the motion. Then I provided them with a rubric—revised for new components and skills, but very similar to the one we used for the memos in the fall—and a “B” level appellate brief on the same issue as the trial motion. The students then had to give feedback to the fictional author, as they had in the fall.
I made only two minor tweaks to the critiquing assessment in the spring. First, I gave students five days instead of three because the appellate brief was longer and more complicated. Second, I reduced the number of items I was grading from four to three. I realized that two categories I used in the fall were redundant. Awarding points for simply identifying problems, in addition to awarding points for the thoroughness of the explanation of what was wrong, double-penalized students who failed to identify a problem with a particular component or skill: a student who awarded the author full credit for a skill was not required to give an explanation, so they lost points in both categories for the same mistake. To remedy this, I combined those two categories into a single category worth a large number of points: “Items the student identified as problems with the brief and the student’s explanation of what the problem is and how the author can remedy the problems they identified.” I continued to take off points if students identified “mistakes” that were not actually problems with the brief. And I continued to award a certain number of points for the completeness of their citation revisions.
6. Addressing Challenges
One challenge I faced was accounting for subjectivity. I made sure that the problems I created in the memo and brief were as obvious as possible. For example, I drafted two successive rules clearly from the opposing party’s point of view, so when the rubric asked whether all the rules in one issue were written persuasively, it would be clear that the answer was “no.” I did not expect students to catch extremely subtle persuasive techniques or to suggest small word changes, such as replacing the term “task” with “project.” I also did not pay attention to how many points students deducted for a particular skill. If my rubric key said I might deduct one of three available points for a skill, it did not matter that a student decided to deduct two points; the important thing was that the student identified the correct problem in the explanation.
Creating the materials was also a challenge. Of course, I had to draft a memo and a brief and carefully construct the rubric and the answer keys, all of which took quite a bit of time. I had to be extremely specific about each component or skill the rubric was asking the students to assess, and the key had to be extremely specific about the problem with a particular component or skill. I viewed this as an investment that would pay off later. I based the critiqued memo and brief on client problems I liked enough to reuse every three years. In three years’ time, those materials will (most likely) be just as useful to me. And the time saved grading, both this year and in future years, was worth the initial investment.
An additional challenge is that no writing assignment is AI-proof; assignments can only be AI-resistant. Could my students have run the memo or brief through AI and asked it to complete the rubric? Sure. But there would be major holes in the critique, at least with current AI models. Some of the skills the rubric evaluated concerned hierarchy of authority, which AI is not good at deciphering. The rubric also evaluated whether the author used appropriate authority, which requires knowledge of statutes and case law outside of the memo or brief provided to the AI. And of course, for the AI to refer to course materials, all of the textbooks, handouts, videos, PowerPoints, etc., would need to be loaded into the AI. Doing so may be easy for some sources (e.g., a PDF of a supplementary reading provided to the students), but many sources cannot be downloaded easily or in their entirety (e.g., West Academic’s Interactive videos or textbooks on a publisher’s online site). And although AI can handle basic citations, it is not great at identifying how to alter quotations from statutes and case law or at verifying pincites in legal sources, both of which are skills necessary to complete the citation portion of the assignment. In short, the energy and time students would need to expend to use AI to complete the critiquing task would exceed what it takes to do the task themselves, making AI a less attractive option.
By far, the most challenging aspect of implementing the critiquing assessment was determining how to evaluate it. I was generous with grading because, even as a professor, I do not catch every single place where students’ writing can be improved. I focused mainly on whether the students articulated problems correctly and the detail they put into the suggestions for revision. So even if the students were more critical than I was and marked off for something I would not have, as long as the explanation made sense, I did not mark them down.
7. The Aftermath
I found that I could assess learning outcomes just as well, if not better, through the critique than through a traditional memo. Students’ explanations of the problems were particularly enlightening. For example, when drafting a memo, a student does not necessarily select cases to support their rule because they understand hierarchy of authority. Perhaps they just happened to hear many other students talking about the case and decided it was important without understanding why. But if they could recognize that the memo’s anonymous author had relied upon a trial court case instead of a supreme court case, explain why that was a problem, and suggest how to fix it, I knew they understood that concept.
Students also demonstrated their critical reading skills. If they skimmed a point on the rubric, they often missed the call of the question. When that happened, their explanation made no sense. And to catch whether a case explanation actually expressed all of the relevant facts of a precedent case, they had to read closely.
In the fall, after critiquing the first memo, students researched and drafted a memo on a different client problem. I graded their final memos using the same rubric they used for the critiquing assignment. Anecdotally, the detailed knowledge they acquired from studying the first memo, understanding each skill on the rubric, and articulating how to revise the memo helped them learn the genre of legal writing just as much as writing the memo on their own would have. The second memo, which the students drafted on their own, was neither better nor worse than the final memos I have received in past years.
In the spring, when I compared students’ grades on the trial motion with their grades on the critiquing assignment, there were no surprises. Students who excelled on the trial motion also excelled on the critique. Those who struggled to critique the appellate brief were the same students who struggled with the motion. Most notable, though, was that even for those who struggled with the critique, their explanations reflected a higher-level understanding of concepts than their motions had shown. So even if they still had more learning to do, their improvement and progress were evident.
I plan to continue using critiquing as an assessment method alongside traditional drafting, and to expand the critiques to other types of legal documents. I did not observe any decline in students’ ability to reason, evaluate, or learn the genre. On the contrary, they became increasingly fluent in the language of legal writing and adept at identifying areas in need of improvement and articulating strategies for revision. Those are precisely the skills they will need—not only to succeed on the NextGen bar exam but also to evaluate and refine generative AI output in legal practice.
Replacing traditional drafting assignments with structured critique assessments offers a pedagogically sound and forward-looking response to the dual challenges shaping legal education today: the evolving demands of the NextGen bar exam and the growing integration of generative AI in legal work. As legal writing professors rethink how best to prepare students for an increasingly collaborative and technology-infused profession, critique-based assessments offer a powerful, practical tool to bridge classroom learning with real-world readiness.
[1] See Implementation Timeline, Nat’l Conf. of Bar Exam’rs, https://www.ncbex.org/exams/nextgen-july-2026/about/implementation-timeline (last visited May 24, 2025).
[2] NextGen Content Scope Outlines, Nat’l Conf. of Bar Exam’rs, https://www.ncbex.org/exams/nextgen/content-scope (last visited May 24, 2025).
[3] Nat’l Conf. of Bar Exam’rs, Bar Exam Content Scope 4–5 (2025), https://www.ncbex.org/sites/default/files/2025-07/NCBE%20NextGen%20UBE%20Content%20Scope-Aug%202025.pdf.
[4] See id.
[5] See id.
[6] Id. at 4 (emphasis added).
[7] Id. at 5 (emphasis added).
[8] Id. at 4–5.
[9] Id. at 5.
[10] The written portion of the current UBE consists of two main components: the Multistate Essay Examination (MEE) and the Multistate Performance Test (MPT). See NCBE Exams, Nat’l Conf. of Bar Exam’rs, https://www.ncbex.org/exams (last visited Nov. 13, 2025). The MEE consists of six 30-minute essay questions covering a range of doctrinal law topics. See About the MEE, Nat’l Conf. of Bar Exam’rs, https://www.ncbex.org/exams/mee/about-mee (last visited Nov. 13, 2025). “The purpose of the MEE is to test the examinee’s ability to (1) identify legal issues raised by a hypothetical factual situation; (2) separate material which is relevant from that which is not; (3) present a reasoned analysis of the relevant issues in a clear, concise, and well-organized composition; and (4) demonstrate an understanding of the fundamental legal principles relevant to the probable solution of the issues raised by the factual situation.” Id. The MPT consists of two 90-minute writing tasks that simulate lawyering skills (e.g., drafting a memo, letter, or motion) under time pressure in a realistic context. See About the MPT, Nat’l Conf. of Bar Exam’rs, https://www.ncbex.org/exams/mpt/about-mpt (last visited Nov. 13, 2025). Examinees are given a packet with a fact pattern, the specific drafting task, and all the applicable law for a fictional jurisdiction, which they then use as the basis of a written task they draft from scratch. See Unraveling the Mystery of the Multistate Performance Test (MPT), Barbri, https://www.barbri.com/resources/unraveling-the-mystery-of-the-multistate-performance-test-mpt (last visited Nov. 13, 2025).
[11] See Ethan Mollick, Co-Intelligence: Living and Working with AI 54 (2024).
[12] See Kari Mercer Dalton, Bridging the Digital Divide and Guiding Millennial Generation’s Research and Analysis, 18 Barry L. Rev. 167, 182 (2012).
[13] Id.
[14] I teach using team-based learning, so these teams are permanent throughout the year. This is not the only exercise they do together as a team, so students are used to working together. I do not view this team drafting step as crucial to completing the critique. Rather, the point is that the students are very familiar with the underlying law before being given the critiquing assignment, and that substantive knowledge about the law can be developed in a myriad of ways.
[15] Although not directly relevant to the point of this article, for me as a professor, the most valuable aspect of this assessment was the grading process, which offered several advantages over grading a traditional memo. First, having a detailed answer key that I released to students significantly reduced the need for individual feedback. Second, the first time I graded this type of assessment, I finished grading 30 students’ assignments in four days. I would typically spend at least two weeks giving feedback on a traditional memo, so this saved a significant amount of time. After grading this type of assessment over three semesters, I can now grade 30 to 34 students’ critiques within two days.