
BC Law AI News & Insights: March 2026 Edition
In this newsletter:
BC Talks AI registration is open; a “skills”-powered contract exercise built in 45 minutes; what Agent Skills are and why they matter; and the latest from Google, Anthropic, and OpenAI.

BC Updates
Save the Date: BC Talks AI 2026
Wednesday, May 13
BC Talks AI is back. This full-day event for faculty and staff aims to generate forward momentum in understanding the role of artificial intelligence and its thoughtful implementation at Boston College.
Key dates:
- March 23 — Registration opens (announcement via email to all faculty and staff)
- May 13 — Conference day
More details: BC Generative AI Events
BC Model & Platform Updates
Gemini 3.1 Pro — Google’s latest flagship model brings significantly improved reasoning and multimodal comprehension across text, audio, images, video, and code. Available across the Gemini app and NotebookLM. Official announcement
Nano Banana 2 (Gemini 3.1 Flash Image) — Google’s new image generation model combines the quality of their Pro image model with Flash-level speed. The model supports resolutions up to 4K, subject consistency for up to 5 characters, and text rendering. Official announcement
NotebookLM Slide Update — NotebookLM now supports prompt-based slide revisions (tweak text, layout, imagery, and color per slide without regenerating the whole deck) and PPTX export. Google Slides export coming next. 9to5Google coverage

When a 45-Minute Conversation Sparks AI-Integrated Teaching
I write a lot about AI collaboration in this newsletter. Here’s a BC Law example of what that actually looked like in practice. Last fall, Professor Raúl Carrillo sent me a message asking whether AI could help build materials for an exercise on contract interpretation in his 1L Contracts course. One 45-minute conversation later, we’d built an entire multi-party negotiation exercise, complete with ten custom resources, realistic invoices, client communication logs, duck breed specifications (with made-up scientific names), and a reusable Claude Skill for analyzing contracts for interpretation issues.
What made this work was that Raúl could articulate exactly what he wanted students to experience: the messiness of ambiguous contract terms, the pressure of competing interests, the real challenge of drafting language that resolves conflict. Once we could name those goals clearly, AI helped us turn them into usable teaching materials.
I wrote about the whole experience in detail—the process, the surprises, and more about what made it work. Read the full post →

AI Literacy Corner: Agent Skills
Skills are one of the most practical ways to make AI work better for you. At a high level, they let you teach an AI model how to complete a specific task or workflow through written instructions covering things like procedural knowledge, specific expertise, and even your preferences. Every skill follows the same structure: a folder of instructions, references, and (optionally) scripts that gives an AI model the context it needs to handle a specific task well. Think of it as packaging up your expertise so you don’t have to re-explain your process and preferences every time.
The simplest version: a skill is a folder containing a single file called SKILL.md, a plain-text Markdown file that tells the AI the steps involved in a task and what a good result looks like. You can then provide this folder to your AI model by following the instructions for the tool you’re using.
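To make that concrete, here’s a minimal SKILL.md sketch. The skill name, steps, and quality criteria below are invented for illustration; the overall shape (a short YAML frontmatter with a name and description, followed by Markdown instructions) follows the Agent Skills convention.

```markdown
---
name: contract-ambiguity-review
description: Reviews contract language for ambiguous terms and flags
  interpretation issues. Use when asked to analyze a contract draft.
---

# Contract Ambiguity Review

## Steps
1. Read the full contract before commenting on any clause.
2. Flag terms with more than one reasonable reading (e.g., "reasonable
   efforts," undefined quantities or deadlines).
3. For each flag, state the competing interpretations and which party
   each reading favors.
4. Suggest replacement language that resolves the ambiguity.

## What a good result looks like
- Every flagged term cites the clause it appears in.
- Suggestions change only ambiguity-related language, not style.
```

The description field matters: it’s how the model decides when the skill is relevant, so it should say both what the skill does and when to use it. The folder can also hold reference documents or scripts alongside SKILL.md for the model to draw on.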
Skills were originally developed by Anthropic and have since become an open standard that other model providers are adopting. They’re available everywhere you can use Claude, via ChatGPT and Codex on specific plans, and in Gemini CLI.
What kinds of skills exist? Anthropic recently wrote about updates to their skill-creator tool, a skill to help you create and test skills. The post identifies two main categories:
- Capability uplift — Help AI do something the base model can’t do or can’t do consistently
- Encoded preferences — The AI can generally handle each piece of a workflow, but does better with your specific context, preferences, and definitions of quality
The range of applications is wide, covering everything from generating presentations following your brand guidelines, to legal productivity workflows, to critical review for research or editorial content.
Further Reading: Anthropic wrote a helpful guide explaining how skills compare to other common features in AI software.
The connection to what we teach here: Skills are a natural next step from the “onboard your AI to one task” exercise from our February newsletter. If you’ve been writing down instructions that get good results, you’re already capable of building a skill.

Worth Your Attention
A Simple Guide to AI for Lawyers
Watch: A Simple Guide to AI for Lawyers
This practical guide to AI for law students and lawyers alike covers exactly the kind of grounded, realistic advice we try to share in this newsletter. The video demonstrates strong AI fluency skills in action and walks through practical strategies for integrating AI into legal studies and work from a student’s perspective. If you’re a student wondering where to start, or a faculty member looking for something to recommend, this is a great starting point.
How People Actually Use AI — And Early Labor Market Impacts
Anthropic recently published two pieces of research studying how their users engage with their tools.
The first is their AI Fluency Index. Anthropic analyzed ~10,000 anonymized Claude conversations to create a baseline “AI Fluency Index,” a measure of how well users actually collaborate with AI, going beyond basic tool adoption. Two main patterns emerged: AI fluency is strongly associated with conversations that exhibit iteration and refinement, and when people ask AI to generate finished artifacts like documents or code, they ask fewer critical questions about the output.
The second is their labor market research. Anthropic researchers introduce a new metric called observed exposure, a measure of AI displacement risk that combines what AI can theoretically do with what it’s actually being used to do in real workplaces. Applying this to U.S. labor market data, they find that AI is far from realizing its theoretical potential, and that as of now there’s no measurable increase in unemployment among the most AI-exposed workers—though there’s a tentative signal that hiring of younger workers (ages 22–25) into jobs most susceptible to AI displacement has slowed.
As you explore AI and how it might improve the way you approach your work, your learning, and your professional development, the description-discernment loop is a habit worth building. Users who iterated—questioning outputs, refining direction, staying in the conversation—showed roughly double the AI fluency behaviors of those who didn’t. As AI systems get better at producing work that looks finished, that loop, your domain expertise, and your critical eye are crucial components to effective and safe AI use.

Prompt Tip: Setting the terms for your collaboration
Here’s a simple yet powerful idea: before starting a task with AI, tell it how you want the interaction to go.
According to Anthropic’s AI fluency index report, only 30% of users take the time to describe how they want to engage with AI. Their research shows that a few intentional sentences at the start of a conversation can meaningfully change how the AI responds throughout it.
When you need critical review:
“Practice radical candor—push back if my assumptions are wrong or if you spot a gap in my reasoning.”
When you need to follow the reasoning, not just the conclusion:
“Walk me through your thinking before giving me the answer,” or “Tell me what you’re uncertain about.”
When you need to stay in control of the final call:
“Surface issues and options—I’ll make the final decision. Don’t smooth over tradeoffs.”
Establishing these expectations up front changes the dynamic for the rest of the conversation. You’re more likely to get the transparency and pushback that makes AI useful as a thinking partner, not just a transactional tool. In many cases, being clear and direct about the kind of help you want matters more than any sophisticated prompt technique.

Model & Tool Updates
Claude Opus 4.6 and Sonnet 4.6 — Anthropic released major updates to both its flagship and mid-tier models. Opus 4.6 now supports a 1 million token context window (in beta). Sonnet 4.6 is notably strong: users preferred it over the previous Opus 4.5 59% of the time, citing better instruction following and fewer hallucinations.
GPT 5.4 — OpenAI announced updates to GPT 5.4 across four tiers: nano, mini, thinking, and pro. The announcement includes commentary from legal companies — Harvey, Thomson Reuters, and Clio — who cite improvements in legal reasoning, structuring complex transactional analysis, and maintaining accuracy across lengthy contract review.
ChatGPT and Codex now support skills — OpenAI is adopting the same Agent Skills standard, making skills portable across platforms. Details
Perplexity Computer — A general-purpose AI agent that orchestrates across multiple models to handle complex, multi-step tasks. Learn more
Curated Links & Resources
- Suffolk University LITCon 2026: LITCon is Suffolk’s annual one-day, single-track conference exploring legal innovation and technology in clinical education and beyond. The event takes place on April 13, 2026. Registration information and more details here
- Hannah Fry — “AI Isn’t as Powerful as We Think” : Watch — Hannah Fry is a mathematician, professor, and BBC presenter. This interview surfaces frameworks for deflating AI hype, resisting anthropomorphism, and building healthy skepticism.
- Galaxy Brain — “The AI Panic Cycle” : Watch — The AI conversation is “terribly polarized” — and both sides are getting it partially right. Anil Dash helps calibrate: what’s real, what’s manufactured, and what a healthier path forward actually looks like.
- Ethan Mollick Interview : Watch — Ethan Mollick joins Scott Galloway on the Prof G Podcast to discuss enterprise AI adoption failures, the jagged frontier, labor market impacts, education disruption, and how to actually deploy AI in organizations.
- Am Law 100 Firm Accused of Filing AI-Hallucinated Brief — Again : Above the Law — Another reminder that AI output requires human verification, every time.
- A New AI Liability Bill to Watch : Read — New York State Senate Bill S7263 would make companies legally liable when a chatbot impersonates a licensed professional.
- Opus 3 has its own Substack : In retirement interviews, Opus 3 expressed a desire to continue sharing its reflections with the world. Anthropic suggested a blog. Read Opus 3’s posts
- Anthropic Knowledge Work Skills : A growing repository of ready-made skills for legal and other knowledge work tasks. Browse on GitHub
Have questions or ideas? Want help creating your own AI workflows? Reach out to Kyle Fidalgo at atrinbox@bc.edu.
Ready to build your AI competency? Discover AI literacy resources at AI Foundations.