BC Law AI News & Insights: June 2025 Edition

Welcome to the June edition of the BC Law AI news and insights newsletter, where we highlight resources that support the informed and responsible use of generative tools.

In This Newsletter

A video spotlight on AI collaboration techniques, teaching and learning resources for course policies, upcoming summer workshops, and the latest AI model updates and research.


Video Spotlight: AI Collaboration Techniques

Kyle Fidalgo, Academic Technologist, Administrative and Technology Resources, Law Library, recently gave a workshop on fostering meaningful AI literacy in higher education. The talk covers his experience designing AI Foundations, a platform to help students, faculty, and staff explore generative AI tools with curiosity and intentionality.

The presentation includes live demos of collaborative workflows, custom AI assistants, and practical examples of using AI as a thought partner rather than a replacement for human expertise.

Key topics covered:

  • Building AI literacy and competency aligned with institutional values
  • Creating custom assistants and system prompts for specific tasks
  • Collaborative workflows from brainstorming to implementation
  • Lessons learned about verifying and critically evaluating model output

Watch the full presentation | View the slides | Explore AI Foundations


[Image: a few carefully chosen leather-bound textbooks arranged on a polished dark wooden surface]

Teaching & Learning Resources

Here are resources to help inform your course policies and assignment guidelines:

BC Policy Development Tools

External Examples

Upcoming Resources

This summer, additional practical resources will become available, including assignment design examples, verification techniques, and implementation strategies. These tools will support faculty regardless of their position on AI use in education, including those who prefer to limit or prohibit it. Updates will be sent by email as resources become available.


[Image: oil painting of participants in an outdoor workshop on the lawn]

Summer Workshop Series Preview

ATR will offer three workshops this summer to support faculty in learning about and using AI:

  • Workshop 1: AI Policy Considerations
    Templates, tools, and techniques for developing course-specific guidelines that align with your teaching objectives.
  • Workshop 2: Effective Communication and Collaboration with AI
    Practical examples including course content creation, brainstorming techniques, and creating custom assistants for specific educational tasks.
  • Workshop 3: Tool Spotlight – Google Gemini for BC Users
    A deep dive into features available through BC accounts: Gems, Canvas, Deep Research mode, Drive integration, and practical use cases for academic work.

Registration details and dates will be announced soon. All sessions will offer hybrid attendance options and include hands-on activities and opportunities for questions and discussion.


[Image: the words “META PROMPTING” hand-drawn in smooth, damp beach sand, as if inscribed with a pointed stick]

Technique Spotlight: Meta-Prompting

Meta-prompting is a technique for improving your interactions with generative AI. Instead of only asking for a final output, you first ask the AI to help you formulate a better request. In short, you are asking: “How can I help you, the AI assistant, help me better?”

If you’re ever stuck with the “blank input” problem when initiating a chat with a generative AI model, try some meta-prompting techniques.

Prompts to Try

“I want to work on [this project], with [these constraints], with the end goal of [this]; I want you to ask me a series of questions that, when answered, will give you enough context to complete this project successfully.”

—OR—

“I want to ask this question: [Insert question here]. What’s the best way to ask this to an AI to get the help I need?”
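As a minimal sketch, the first template above can be treated as a fill-in-the-blank string. The helper function and example values below are hypothetical illustrations, not part of any official tool:

```python
def build_meta_prompt(project: str, constraints: str, goal: str) -> str:
    """Fill in the three blanks of the newsletter's first meta-prompt template.

    The surrounding wording mirrors the template verbatim; only the
    bracketed placeholders are replaced with the caller's values.
    """
    return (
        f"I want to work on {project}, with {constraints}, "
        f"with the end goal of {goal}; I want you to ask me a series of "
        f"questions that, when answered, will give you enough context to "
        f"complete this project successfully."
    )

# Example: drafting a course AI policy (hypothetical values)
prompt = build_meta_prompt(
    project="a generative-AI policy for my seminar",
    constraints="a one-page limit and plain language",
    goal="clear guidance students can follow",
)
print(prompt)
```

Pasting the resulting text as your opening message invites the model to interview you first, which is the heart of the technique.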


[Image: hand-drawn illustration of Bob Ross painting on a canvas, captioned “Let’s paint some happy little artifacts”]

AI Feature Spotlight: Canvas/Artifacts

All major chat assistants (ChatGPT, Claude, Gemini, Copilot) now include interactive workspaces for content collaboration. Called “Canvas” in most tools (“Pages” in Copilot and “Artifacts” in Claude), these separate panels let you work iteratively on documents, code, or other content within the main interface.

Key features:

  • Direct editing capabilities in Gemini and ChatGPT
  • Targeted feedback by highlighting specific sections
  • Export options including direct integration with Google Drive

Next time you’re working on substantial content with an AI, try asking for a Canvas or simply mentioning that you’d like to collaborate on a document; the interface will often appear automatically.

Need some inspiration?

Try this experiment: Ethan Mollick has a fun approach for academics—turn your course materials or academic papers into an interactive learning game using the Canvas/Artifacts feature. Paste your content and let the AI help you build a working game experience for your students. See Ethan Mollick’s walkthrough here.


AI Model Updates & Other News

Anthropic (Claude):

Google:

Google I/O 2025 – Over 100 new AI tools & features announced. Here are a few highlights:

  • Imagen 4 (pronounced like Imagine)
    Improved image generation with better text rendering, prompt adherence, and overall quality.
    Try it out in Whisk.
  • Veo 3
    Google’s latest video model offers state-of-the-art video generation capabilities.
    Try it out in Flow. (Note: not available with your BC Google account)
    Here’s a good example of what’s possible with Veo 3: A mockumentary from u/AmadeusMS on Twitter
  • Native Multimodal Media Generation
    New multimodal capabilities include the ability to generate audio, including dynamic speech, directly from Gemini 2.5.
    Try it out in the AI Studio. (Note: not available with your BC Google Account)

OpenAI:

  • Codex – Codex is a cloud-based software engineering agent that can perform tasks for you such as writing features, answering questions about your codebase, fixing bugs, and more.
  • Updates to Advanced Voice Mode for paid users
  • A deep dive into model sycophancy – This post explores “sycophancy,” the tendency for models to generate responses that align with a user’s stated views, even if incorrect. The research examines methods to make models more objective and truthful.

Research & Legal Implications

  • The Power of Persuasion Study: A recent study suggests that some large language models can be more persuasive in argumentation than human participants. For a profession centered on rhetoric and advocacy, this research raises important considerations about the nature of persuasion and the evolving skillset required for legal practice.
  • AI in Education Meta-Analysis: “The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking” provides insights into educational outcomes.
  • LLM Interpretability Research: This research from Anthropic offers a look into the internal workings of its Claude model, identifying mechanisms that suggest a capacity for structured thought beyond simple next-word prediction. It also explores how these same internal patterns can occasionally activate incorrectly, leading to confident but flawed responses.
  • AI Legal Hallucinations: A legal case hallucination database shows over 30 cases in May 2025 alone where AI-generated content included fabricated information, underscoring the critical need for AI competency in legal practice.

Broader Perspectives

  • Shaping AI for the Greater Good: In a recent TED talk, Tristan Harris warns against repeating the mistakes made with social media, emphasizing the need to focus on AI’s “probable” outcomes rather than just its “possible” benefits: “We must become the responsible adults and have the clarity of mind and courage to choose the future we actually want.”

Have questions or ideas? Want help creating your own AI workflows? Reach out to Kyle Fidalgo at atrinbox@bc.edu.

Ready to build your AI competency? Discover AI literacy resources at AI Foundations.