How a 45-minute conversation turned into a semester of AI-integrated teaching at BC Law
Last October, Professor Raul Carrillo sent me a message:
“Hi Kyle, I hope you’re doing well! I’m wondering if you have time over the next few days / next week for that chat about AI & contracts. We’re about to hit interpretation and drafting a bit harder in the course, and it would be great to be up on the discourse.”
Raul teaches Contracts, a first-year law course at Boston College Law School. I’m the academic technologist for the law school, which means (among other things) I get paid to play with AI tools and figure out how they might actually be useful for our faculty and students.
His question boiled down to this: can AI analyze contracts for interpretation issues?
My honest answer: I didn’t know. But it sounded like a well-scoped problem with clear boundaries, and with the right guidance from a domain expert, it seemed worth finding out.
I’d also just been experimenting with a new feature from Anthropic called Skills, and I was looking for a real use case to test it on. This felt like the right one.
What happened in the room
Raul and I sat down in his office for about 45 minutes. He’d already built scaffolding for the exercise he had in mind: a multi-party negotiation scenario grounded in UCC contract interpretation. The premise (deliberately absurd, pedagogically rich): a high-end restaurant called The Duck, an agricultural cooperative called Quackery Co-op, and a discount distributor called the Market Middlers of Moo, all arguing over what the word “duck” means in an ambiguous Memorandum of Understanding.
He had the structure. He had the legal principles. What he also had, and what made this work, was a clear vision for what he wanted his students to experience: the messiness of interpreting ambiguous contract terms, the pressure of competing interests, the real challenge of drafting language that resolves conflict rather than creating it.
Here’s what we did. I fired up a speech-to-text tool on my laptop and Raul just talked. He walked through the exercise step by step, narrated his ideas, described the learning objectives, read through his simulation notes and filled in gaps out loud, gave additional framing for what he wanted students to take away. All of that implicit knowledge, the stuff that lives in an experienced professor’s head but doesn’t always make it onto the page, got captured as a transcript.
I then pasted that transcript into Claude along with the written exercise scaffolding Raul had already prepared. Two sources of context: the formal plan and the informal narration that filled in what the plan didn’t say.
But before letting Claude run with it, I did something I’ve found makes a huge difference. I asked it to restate what it thought we’d be working on together. Not “go make the thing,” but “tell me what you think the goals are.”
“So what I’d like you to do is restate what it is you think we’ll be working on together. Mainly the deliverables I’m looking for so that we can work on them incrementally.”
This is just good old-fashioned collaboration practice in action. It’s the same thing you’d do if you were handing off a project to a colleague. You want to make sure they understand you before they go off and spend time on it. Claude restated the goals, Raul confirmed and course-corrected a few things (“I want the evidence materials to contain small suggestions that parties are engaging in bad faith”), and then we told it to proceed.
Fifteen minutes of “cooking”
What happened next was, honestly, a little surreal. Claude worked for about fifteen minutes on its own, generating materials without coming back to us for input. It created ten different resources: party-specific instruction sets for each group of students, research files with embedded evidence, negotiation guides, drafting templates, and background materials complete with fake invoices, duck breed specifications (including made-up scientific names), and confidential business proposals.
The output was about 90% of the way there. Raul and I only touched base again to fix a few date inconsistencies and minor substance issues. The core structure, the creative details, the pedagogical scaffolding, all of it held up because we’d done the work upfront (ok, mostly Raul). We spent our time building context, not fixing output.
This isn’t really a trick. It’s just what happens when you invest in making your expectations clear before you delegate.
Bringing it full circle
Here’s where it gets a little meta. We’d already built such a rich context in that conversation (all those details about contract interpretation, the UCC provisions, what makes terms ambiguous, how different parties might argue their positions) that I realized we could package that expertise into something reusable.
So, in the same conversation thread, I asked Claude to help me build what’s called a “skill.” At its core, a skill is nothing more than a folder of text files that contain instructions for how to perform a specific task. Plain language, not code. You’re just writing down how you want a particular kind of work done: what to look for, what good output looks like, what to avoid, how to structure the result. And let’s be honest: you, the human in the loop, aren’t writing most of this. You’re describing the task with appropriate clarity and specificity, then leveraging AI to help you scaffold out the details. You bring your judgment to assess, iterate if necessary, and move on.
In this case, the skill would take a contract as input, apply the relevant contract interpretation principles, identify ambiguous terms and potential disputes, and analyze the contract from each party’s perspective. All the knowledge Raul had shared, structured into something any attorney could point AI at and say “review this for interpretation issues.”
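To make that concrete, here’s a rough sketch of what a skill folder can look like. The folder and supporting file names are illustrative (not the actual files we built); the one real convention is a SKILL.md entry point whose short frontmatter tells Claude what the skill is and when to use it, followed by plain-language instructions:

```
contract-interpretation/            # the folder IS the skill
├── SKILL.md                        # entry point: name, description, core instructions
├── interpretation-principles.md    # supporting file: UCC rules, ambiguity tests
└── output-format.md                # supporting file: what a finished analysis looks like

# SKILL.md opens roughly like this (illustrative):
---
name: contract-interpretation
description: Review a contract for ambiguous terms and likely
  interpretation disputes, analyzing from each party's perspective.
---
When given a contract, identify ambiguous terms, apply the
interpretation principles in interpretation-principles.md, and walk
through how each party would argue its reading of the disputed language.
```

That’s the whole trick: once the instructions live in files rather than in a single chat, the expertise can be loaded into any new conversation and applied again and again.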
The thing that made this feel different from a one-off prompt is that we didn’t start from scratch. The skill grew directly out of the work we’d already done together. That conversation, those ten resources, the back-and-forth about what mattered and what didn’t, it all fed into instructions that captured Raul’s expertise in a form AI could follow again and again.
Exercise day
A few weeks later, Raul led his class through the exercise. He started by walking students through Frigaliment Importing Co. v. B.N.S. International Sales Corp., the classic “What is Chicken?” case, then introduced the scenario we’d built: What is Duck?
The class of first-year law students split into three sets of three parties. Each group received confidential instructions reflecting its party’s strategy. They had twenty minutes to research, then they negotiated, then they drafted.
While students worked, Raul and I sat in the corner running Claude through the same exercise with the same information the students had, using the skill we’d built. Funny enough (and this is what you get for going live), Anthropic’s services went down right at the start of the exercise. I was genuinely sweating for a minute. But it came back, caught up, and completed the full analysis.
What I remember most is the energy in the room. Students were arguing about what “commercially reasonable quantity” means. One group tried to broker a deal with both opposing parties simultaneously, essentially hacking the exercise (Raul called it their “Kobayashi Maru” moment). The negotiations were messy and creative in exactly the way Raul had hoped.
When we came back together and compared notes, Claude’s analysis was thorough and well-reasoned, but it also pulled in outside case citations we hadn’t asked for and missed some of the creative problem-solving the students attempted. It was a useful comparison point, not a replacement. Raul’s closing to the class was perfect: “There’s a hard baseline. You’ve got to beat that, but more likely, work with that.”
What happened after
Here’s what I personally love to see. Raul didn’t just use this exercise and move on. Over the remaining five weeks of the semester, he created four additional exercises with Claude. He described the tool as becoming “hard to separate from the teaching experience.” He budgeted a Claude Pro subscription into his research account. He started using it to screen law review articles and to recognize patterns across scholarship.
One 45-minute conversation in his office cascaded into a semester of AI-integrated teaching. Not because the technology is magic, but because the initial investment in context, in articulating what he wanted and why, gave him a foundation to build on.
What this means for you
I want to be clear about what made this work, because it wasn’t technical skill. It wasn’t prompt engineering. It was:
- A domain expert who could articulate his vision clearly
- Enough time spent building shared context before asking for output
- A verification step (restate the goals before you run with them)
- Iteration, not perfection, on the first try
- Packaging what worked into something reusable
These are collaboration skills. If you’ve ever onboarded a new team member, managed a project, or worked through a messy handoff, you’ve practiced them. AI just makes you spell them out more explicitly.
If you’re wondering whether AI can handle something in your domain, the answer is probably yes, but with a caveat: it’s only as good as the context you give it. Start with one task you care about. Sit down and talk through it, out loud if that helps. Describe what good looks like. Then see what happens.
You might surprise yourself.
Cover image concept brainstormed with Claude (Anthropic), generated with Gemini (Google). Kyle Fidalgo is the Academic Technologist at Boston College Law School. This is part of an ongoing series exploring what AI collaboration looks like in practice across legal education.