
BC Law AI News & Insights: December 2025 Edition
In this newsletter:
- Legal AI Tools Launch for Students: CoCounsel and Lexis+ AI Protégé expand access with strong privacy protections—plus the latest Gemini updates.
- From Knower to Learner: Why fear of AI correlates with non-use, and how legal training’s “addiction to certainty” makes experimentation harder (but more necessary).
- Prompt Tips for Visual Creation: A playful guide to generating consistent characters and images with Gemini—with transferable lessons for any AI workflow.
- Legal AI in Practice: Bot negotiation experiments, law firm training programs, and insights from judges and practitioners on what they expect from new lawyers.
- Curated Resources: Essential reading on AI disclosure statements, study quality, adoption trends, and practical frameworks for getting better results.

BC Tool Updates
Legal AI Tools Launching for Students
Two major AI platforms are expanding access to law students this winter, both with strong privacy protections:
Thomson Reuters CoCounsel launches to all law students on January 5, 2026. Faculty can get started now with the Faculty Launch Guide, which includes step-by-step tutorials, videos, and curriculum integration resources.
Lexis+ AI Protégé General AI lets users toggle between the latest models—including Claude Sonnet 4 from Anthropic and models from OpenAI—without switching platforms or subscribing to separate public AI services. Key features:
- All prompts stay within Lexis+ AI’s protected infrastructure
- Data is not shared or used to train models
- A Shepard’s Citation Agent proactively checks legal sources
Note: the Lexis+ “General AI” deserves an important distinction. Unlike the broader Protégé suite, which includes legal research as well as drafting tools, General AI is a way for Lexis+ to provide access to the common AI chat models, the kind you would otherwise reach through the ChatGPT, Claude, or Gemini web or desktop interfaces. Offering secure access to leading models inside a legal product professionals already use opens up possibilities and use cases that previously required copying and pasting content and hopping to a different AI tool.
Gemini Updates
Google has rolled out significant updates across its AI ecosystem in recent weeks:
Reminder: When you use Gemini and NotebookLM with your BC accounts, your chat history and data are not used for model training and are protected under BC’s privacy agreement with Google.
Gemini 3 Pro
Google announced the release of its latest frontier model, Gemini 3 Pro, with improved reasoning capabilities, alongside Gemini 3 Pro Image, optimized for image generation and editing tasks. You can read about additional Gemini app improvements here.
NotebookLM
Google’s research and notetaking app NotebookLM has also been steadily adding new features: chat history and personalization that let you save and resume conversations, and Deep Research that helps discover sources. The platform now also supports the creation of infographics, slide decks, quizzes, flashcards, and more.
Ethan Mollick has early impressions of Gemini 3 worth reading. Here are additional impressions and experiments from other industry insiders and subject matter experts: Matt Shumer, the folks at Every, Allie K. Miller, a Hard Fork interview with Demis Hassabis & Josh Woodward, and The Neuron’s interview with Logan Kilpatrick.

From Knower to Learner
Essays and research on the paradox of how fear and perception might prevent you from getting the most out of generative AI.
In previous newsletters we shared data showing that recent frontier AI models (the latest and most capable releases from labs like OpenAI, Anthropic, and Google) are approaching expert-level quality on discrete tasks. The data also suggests efficiency and quality boosts for individuals and teams who can leverage AI in their work.
But if the benefits are so clear, why is adoption stalling for so many?
A recent breakdown of an EIG survey of 1,500 American workers found that 42% viewed AI as a threat to job availability. A critical detail emerged, however: the people most afraid of AI were the people who hadn’t used it. Workers who didn’t use AI were 31 percentage points more likely to say that AI is a threat to jobs than workers who did.
One Explanation: AI Has an “Image” Problem
Duke University researchers ran four preregistered experiments with 4,439 participants and found a clear social penalty for using AI at work. People who imagined using AI tools expected others to see them as lazier, more replaceable, and less competent than if they used a traditional tool, and independent evaluators in fact rated AI users as lazier and less competent than otherwise identical workers who got help from a human or no help at all. These perceptions even spilled over into hiring decisions, where managers who rarely use AI themselves were less willing to select AI users.
New research from Northwestern’s Kellogg School of Management adds a nuance to this finding. In his 2025 Job Market Paper , David Almog runs a field experiment on an online labor market and finds that workers adopt AI recommendations at lower rates when their reliance on AI is visible to an evaluator, leading to a measurable decline in task performance. Almog shows that many workers fear that heavy reliance on AI signals a lack of confidence in their own judgment—exactly the kind of “image cost” that can keep people from using a tool they know would help them.
What about legal professionals? Recent data from a Bloomberg Law State of Practice Survey, based on responses from nearly 800 in-house and law firm attorneys, reinforces the idea that those who don’t use AI tend to have a more negative perception of the tools.
Nearly 62% of respondents answered that they use AI “never or a few times a month.” Only 16% said they use it a few times a week. Crucially, a third of respondents stated they do not believe AI is capable of responsibly handling basic tasks.
This social dynamic adds to the paradox: we have studies that suggest AI tools can improve quality and productivity, but it appears some professionals are still unwilling or afraid to use them lest they look “unconfident” or “lazy” to their peers.
Why Legal Training Makes This Harder (and What to Do About It)
Bridgett Carr’s essay “The Knower” captures the tension precisely. She points out that law school systematically rewards expertise: knowing frameworks, possessing answers, projecting confidence. Visible uncertainty reads as professional weakness. Carr calls this the profession’s “addiction to certainty.”
Carr suggests that AI requires a different approach. The models change constantly. You can’t “master” Claude the way you master the Federal Rules of Evidence. Carr argues what’s needed is a design thinking mindset — rapid prototyping, iterating based on what works, and treating AI as a responsive medium that adapts to your approach rather than a fixed tool requiring mastery. She writes: “Mastery lives in the willingness to try something, watch what happens, and adjust.”
This distinction of expertise versus mastery reframes the challenge:
- Expertise = knowing things (frameworks, vocabulary, comfort with familiar territory)
- Mastery = doing something with what you know (navigating unfamiliar territory, staying curious longer, improvising)
From this framing, expertise provides the foundation for skillful use of AI: legal vocabulary, pattern recognition, judgment about what matters. AI fluency then requires continued practice and a willingness to experiment.
Experiential Learning Suggests a Path Forward
Vivek Sankaran’s essay “Ditch the Panel, Host a Lab” offers insights into moving past fear and into knowledgeable, competent use. Reiterating the point that most people who fear AI don’t use it, Sankaran reflects on giving people space to try conversations with AI tools so that they can practice treating AI like a thought partner.
One takeaway his participants came away with: “AI isn’t magic or a menace. It’s a tool that becomes more useful the more competently you use it.”
The California State University system provides evidence that this shift works. After an overhaul of training and resources, survey data from 12,773 faculty and staff shows:
- 85% of faculty and staff now regularly use AI in teaching, research, or administrative work
- 64% of faculty and staff report positive impact on their work
The Advantage of Going Second
Carr’s essay closes with a reframe: “None of us will ever ‘know AI.’ It’s not a body of knowledge we can acquire once and file away. But maybe that’s the gift. It gives us permission to play again, to experiment, to fail and try again.”
For legal education, this moment offers an opportunity. We’re not racing to be first adopters—we can be both cautious and exploratory. Our community benefits from those who’ve tested these tools, identified failure modes, and developed judgment about trade-offs. That’s not something you get from uncritical enthusiasm or reflexive avoidance. It does, however, require engaging with the tools themselves.
The professionals thriving with AI aren’t the ones who figured it out fastest. They’re the ones who gave themselves permission to try things, watch what happens, and adjust based on results rather than assumptions.
BC Law has resources available to help everyone get started, from faculty development workshops to expanded tool access to educational materials on AI literacy and competency. The shift from “knower” to “learner” is a process that works best when it’s both thoughtful and active. Keep an eye out for opportunities to learn alongside colleagues. The most valuable skill in this landscape is staying willing to keep learning.

Prompt Tips: Creating Visuals with Gemini
This month’s prompt techniques for creating visuals are deliberately playful. However, the workflow below—building context, creating references, refining through iteration—applies just as well to getting input on ideas, document drafting, or other collaborative work you do with AI assistants.
You may have noticed a recurring character in the artwork throughout this newsletter. Google’s latest image generation model has brought significant improvements in character consistency and prompt adherence. Your BC account provides access to these capabilities through the Gemini app .
The Basic Workflow
Start by building context. When generating images, spend time describing what you want. Think about medium (watercolor, line drawing), style (minimalist, detailed), and specific elements. If you’re uncertain where to start, try:
“I want to create images for a presentation. I need help describing different art styles using appropriate terminology. Some ideas: stained glass, line drawing, vintage poster. Give me descriptions that would guide an artist.”
Then ask for variations. Once you’ve established the style, request specific images within that framework.
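For instance, after settling on a style you like, a follow-up request might look something like this (swap your own subject and setting into the bracketed placeholders):
“Using the [style] we just described, create an image of [subject] in [setting]. Keep the same color palette and level of detail.”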
Creating Consistent Characters (Bonus)
For those who want to experiment further, you can create character reference sheets for repeated use:
- Upload a source image
- Ask Gemini to create a multi-angle reference sheet showing the character from different perspectives
- Use that reference sheet in future prompts to maintain consistency
The technical details matter less than the principle: AI tools work better when you give them clear references and context. Here’s a starter prompt that works well after you’ve uploaded a reference image to your chat.
“Extract [desired subject] from this image, then create a character sheet showing [subject] from 4 angles in a 1:1 grid: straight-on, side profile, back view, and top-down. All angles should be full-body portrait shots with neutral lighting, as if photographed with an 85mm f/1.4 lens.”
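Once Gemini returns the sheet, you can attach it to later chats and build on it. A follow-up in a new conversation might read something like:
“Using the attached character sheet as reference, show [subject] [doing an activity], keeping the same proportions, features, and art style.”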
The Transferable Skill
Whether you’re generating images or getting help with a specific task, the pattern is the same: provide context, be specific about what success looks like, expect to iterate, and refine based on results. This section gives you permission to practice that pattern in a context where mistakes are just funny pictures instead of professional consequences.
If you want to explore this over the holidays, this custom assistant can help you craft stronger prompts from rough ideas.

Interesting Legal AI Developments
In a recent legal research paper, a professor asks: Can we train bots to negotiate like lawyers? And what do we learn about bargaining as we do? It’s an interesting exploration of how we might teach our AI assistants to think like us, and what it means to make the implicit explicit when giving them instructions. You can explore the results of the experiment at https://sites.suffolk.edu/ai-negotiation/.
Ropes & Gray is rolling out a new AI training program called “TrAIlblazers”. The firm’s initiative encourages new associates to allocate up to 20% of their required billable hours to AI training and simulations, including time spent experimenting on their own and in groups.
A recent webinar hosted by the ABA and co-sponsored by the Thomson Reuters Institute brought together a judge, a litigation boutique partner, and two leaders from an Am Law 100 firm to discuss how courts and legal employers are adopting generative AI, and what they expect from new lawyers. Read a collection of notes and insights from BC Law’s Academic Technologist here.

Other Links of Interest
For Faculty & Instructors:
The case against AI disclosure statements – A professor explains why mandatory AI disclosure statements backfire, creating a culture of confession rather than transparency, and offers alternative approaches to normalize responsible AI use in the classroom.
For Understanding the Landscape:
Why you can’t trust most AI studies – Critical analysis of conflicting AI research (MIT’s “95% of pilots fail” vs. Wharton’s “75% see positive ROI”), explaining how extreme claims dominate while truth hides in the middle. Essential reading for evaluating AI hype and doom.
Wharton 2025 AI Adoption Report – Three-year longitudinal study tracking enterprise AI adoption from exploration (2023) to accountable acceleration (2025). Key finding: 82% now use AI weekly, 75% report positive ROI, with detailed data on implementation challenges and success factors.
For Practical Guidance:
An opinionated guide to using AI right now (late 2025) – Ethan Mollick’s comprehensive guide to choosing between Claude, ChatGPT, and Gemini, when to use free vs. paid tiers, how to select the right model for different tasks, and practical tips for getting better results. Particularly useful for anyone overwhelmed by tool choices.
For Law Libraries:
Building the Future of Law Libraries: AI, opportunities, and advancement – White paper from cross-regional roundtables proposing how law libraries can lead AI integration through coordinated organization, tiered training programs, and shared knowledge infrastructure rather than being sidelined by vendors.
Prompting Techniques and Frameworks:
The FRAME Method – A five-step AI workflow (Find evidence, define Rules, Assemble inputs, Model/refine, Expand or Embed into a system) designed to address documented performance drops from unstructured AI use.
Verbalized Sampling Research – Recent research suggests you can improve AI creativity simply by asking a model to be more random: explicitly ask it to “sample from the full distribution” rather than defaulting to high-probability (generic) outputs. Read more in this interesting thread from one of the paper’s authors.
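As one illustration of the technique (our phrasing, not the authors’), a verbalized-sampling prompt might look like:
“Generate five possible titles for this workshop, sampled from the full distribution, each with its estimated probability. Include some lower-probability, less conventional options.”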
Have questions or ideas? Want help creating your own AI workflows? Reach out to Kyle Fidalgo at atrinbox@bc.edu.
Ready to build your AI competency? Discover AI literacy resources at AI Foundations.