BC Law Educational Technology

July 2025 AI Newsletter

  • By Kyle L Fidalgo
  • July 20, 2025
  • Categories: Newsletters
  • Tags: AI, Newsletter, Training
A compass rose with the letters "AI" in the center.

BC Law AI News & Insights: July 2025 Edition

In This Newsletter:

  • AI & Learning: A balanced look at the latest research on how AI affects learning, moving beyond the hype to see the real risks and rewards.
  • The Path to Intentional Use: Concrete resources—including a free course and tool recommendations—to build your AI fluency and skills.
  • A Pro Prompting Tip: Learn the secret of “Context Engineering” to dramatically improve the quality of your AI results.
  • AI News Roundup: The latest headlines at the intersection of AI and law, from AI acing law school exams to landmark copyright rulings.

The recording and resources from the latest workshop, on AI policy considerations for your classroom, are now live on the Ed Tech Blog.


A graphic illustration of a retro computer in the foreground with a large brain in the background

Image Source: Sora

Use AI to Help, Not Hurt, Our Learning

You’ve likely seen the headlines inspired by recent research: “AI is making us dumber” or “AI is the end of learning.” This narrative, sparked by a preliminary MIT study titled “Your Brain on ChatGPT,” is not only misleading—it’s a distraction from an honest conversation about how AI use affects us and those around us. The authors of the study themselves have cautioned against these oversimplified interpretations.

The research, when examined closely, doesn’t paint a picture of inevitable cognitive decline. Instead, it reveals a crucial distinction: the way we use AI determines whether it hurts or helps. We are, after all, still figuring out how, if at all, AI can be useful in educational settings. Let’s look at two recent studies to see what risks are involved and how we might safeguard the learning process and mitigate potential harms.

The Cognitive Cost of “Answer-Seeking”

Two key studies highlight the risks of using AI without clear guardrails.

  1. The “Your Brain on ChatGPT” Study (MIT): This study examined students writing essays. The group using ChatGPT as their sole resource showed reduced brain connectivity compared to those who used Google or no tools at all. This suggests that when we offload the core tasks of generating and structuring ideas to an LLM, we engage less in the deep cognitive processing required for learning. A striking finding was that students who used the LLM struggled to quote from the essays they had just written, indicating that the information was never properly encoded into memory—a phenomenon the researchers termed “cognitive debt.”
  2. “Generative AI Without Guardrails” Study (University of Pennsylvania): In this field experiment, high school students used a GPT-4-based tool for math homework. The group given the standard, “GPT Base” version of the tool actually performed 17% worse on a later exam than students who had no AI access. They used the AI as a “crutch,” simply asking for the answer without engaging in the problem-solving process.

These studies confirm a commonsense intuition: if we use AI to bypass the mental effort of learning, we don’t learn as much. Even when students believed the AI was helping them, these studies suggest that its default “helpful” nature can inadvertently short-circuit the very processes that build long-term knowledge.

From Cognitive Debt to Cognitive Growth: How to Use AI Effectively

The same research that highlights the risks also points directly to potential solutions. The negative effects are not inevitable and can be mitigated with intentional, pedagogically sound strategies.

  • Design for Learning, Not Answering: The math tutor study had a second group of students who used a “GPT Tutor.” This version was specifically prompted not to give away the full solution but to provide incremental hints and encourage students to explain their work. The result? This group’s performance (measured by grades) improved by 127% while using the tool to solve problems, and when later tested on the material without the tool, the learning losses observed in the GPT Base group were completely eliminated. The students’ performance was statistically indistinguishable from the control group that used no AI.
  • Think First, Then Use AI: The MIT study offered a fascinating finding. A group of participants wrote essays on their own for three sessions and were then introduced to the LLM in a fourth session (the “Brain-to-LLM” group). When they started using AI, their brain activity increased. Having already done the hard work of thinking and structuring their ideas, they used the AI more strategically—as a tool for refinement and exploration, not as a substitute for thought. This maintained their sense of cognitive ownership and enhanced their engagement.

These findings suggest a promising path forward. When AI is introduced at the right time, we can leverage its power while mitigating the risks by adopting a human-first, AI-forward approach. The responsibility lies with us to use these tools with intention. By generating our own ideas, doing the initial work, and then bringing in AI as a collaborator, tutor, or editor, we can avoid cognitive debt and instead use these powerful tools to augment our own intelligence.


Diverging pathways with a shimmering digital overlay highlighting the paths

Image Source: Midjourney

The Path to Intentional AI Use

Beyond understanding the research, developing practical AI skills is essential. Using AI well means developing literacy, competency, and fluency. Here are two additional resources to help you on your journey.

Anthropic’s AI Fluency Course

If you’re looking for a structured way to build your skills, I highly recommend Anthropic’s free AI Fluency Course. It provides an excellent framework for collaborating with AI effectively and ethically.

The course’s “4D Model”—Delegation, Description, Discernment, and Diligence—aligns perfectly with the principles of professional responsibility. For the entire BC Law community, these concepts are paramount, emphasizing when to use these tools, how to evaluate their output critically, and the importance of maintaining transparency and integrity in your work. It’s on us to use this technology effectively, ethically, and safely.

Choosing the Right Tool for the Job

For a great overview of the current landscape, start with Ethan Mollick’s post, “Using AI right now.”

My current recommendation for most users is to start with Google Gemini 2.5 Pro, which is available through your BC account. Be aware that our institutional account has conversation limits.

For more consistent and powerful use, I suggest trying a $20/month subscription to one of the leading models (Claude, ChatGPT, or Google Gemini with a personal account). This investment truly unlocks their full potential.

A critical reminder:

  • Adhere to BC’s privacy and security policies. Be mindful of what information is permissible to share with these external tools.
  • Protect your data. While Claude doesn’t train on your conversations by default, you must manually disable this setting for ChatGPT and Google Gemini.

Black pixelated sphere on a white background, sparse and textured

Image Source: Midjourney

Prompt Tip: Context is Everything

If you feel like your results with AI are hit or miss, you’re not alone. Some inconsistency is simply part of the process, but the secret to consistently good output isn’t finding a “magic” prompt. It’s mastering context.

There’s a rising term for this skill: “Context Engineering.” In past workshops, I’ve called it being the “orchestrator of context.” It’s the art and science of supplying an LLM with rich, relevant information so that you and the model can do your best work together.

As AI power user and entrepreneur Greg Isenberg puts it, high-quality prompting is just being good at “managing context.” He breaks it down into a simple, iterative loop:

  1. Frame the role and goal (e.g., “You are a helpful teaching assistant creating a quiz”).
  2. Add context/guidance (e.g., “Use the attached lecture notes and focus on the key themes of Chapters 2 and 4”).
  3. Get output and review it.
  4. Add more context to refine the output (e.g., “Let’s focus on these topics and make sure each multiple choice question has answer feedback.”).
  5. Iterate and repeat.
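
For readers who like to work with these models programmatically, here is a minimal sketch of that same loop using the OpenAI Python SDK. It is an illustration under stated assumptions, not a recommendation: the model name, the quiz scenario, and the lecture_notes.txt file are placeholders, and the same pattern applies to Claude or Gemini through their own SDKs or a chat window.

# A minimal sketch of the context-management loop: frame the role, add context,
# review the output, then refine. Assumes the OpenAI Python SDK (v1+) and an
# OPENAI_API_KEY in the environment; the model name and file name are placeholders.
from openai import OpenAI

client = OpenAI()

# 1. Frame the role and goal; 2. add context/guidance.
messages = [
    {"role": "system", "content": "You are a helpful teaching assistant creating a quiz."},
    {"role": "user", "content": (
        "Use the attached lecture notes and focus on the key themes of Chapters 2 and 4.\n\n"
        + open("lecture_notes.txt").read()
    )},
]

refinements = [
    None,  # first pass: produce an initial draft to review
    # 4. Add more context to refine the output.
    "Let's focus on these topics and make sure each multiple choice question has answer feedback.",
]

# 3-5. Get output, review it, add context, and repeat.
for extra_context in refinements:
    if extra_context:
        messages.append({"role": "user", "content": extra_context})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    draft = response.choices[0].message.content
    messages.append({"role": "assistant", "content": draft})  # carry the full context forward
    print(draft)  # the human in the loop reviews before the next refinement

The code is not the point; the habit is. Every message you send becomes part of the context the model carries into its next response.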

This process is powerful for two reasons. First, it forces you, the human in the loop, to clarify your own thinking, judgment, and expertise. Second, as AI researchers like Andrej Karpathy often explain, providing specific context grounds the model, dramatically improving the quality and relevance of its responses. You are effectively constraining the AI to the “slice” of its knowledge that is most useful for your task.

Sometimes “failures” when working with AI aren’t model failures; they’re context failures. To help you build this muscle, try asking the AI to interview you for the necessary context before it starts working on the task.

Try This “Gut-Check” Prompt:

Next time you start a project, paste this into your chat window.

“Before we begin, I want you to help me provide you with the best possible context for this task. Please ask me a series of questions to understand my goal, the intended audience, the desired format and tone of the output, and any key information, examples, or constraints I need to provide.

Once you have gathered all the necessary context, please summarize it for my approval before you proceed with the actual task.”

Magical things happen when you bring the right context into the mix. You move from a generic chatbot to a focused, expert assistant tailored to your exact needs.


A humanoid robot with teal accents is reading a book. Behind it, sepia-toned newspaper clippings can be seen with the text "AI in the news".

Image Source: Sora

Other AI News

Here are a few other noteworthy developments at the intersection of AI, law, and education:

  • On Criticizing AI: For a thoughtful piece on how to approach the limitations and potential of today’s AI tools with intellectual honesty, I recommend this article on “What Academics Get Wrong” when it comes to AI.
  • AI Gets Its First A+ in Law School: A recent study from the University of Maryland highlights the growing power of “reasoning” models in the legal field. This points to a significant trend: as AI develops more sophisticated reasoning capabilities, its potential applications in law are expanding beyond simple document review into complex analysis and strategy. You can read the full paper here.
  • Rethinking Legal Training in the Age of AI: In a recent episode of the AI and the Future of Law podcast, hosts Jen Leonard and Bridget McCormack discuss how AI is breaking the traditional legal apprenticeship model. They explore how custom GPTs, AI-powered coaching, and new training methods are becoming essential for developing the next generation of lawyers. Listen to the full episode here.
  • A “Landmark” Ruling on AI Training Data: A federal judge has sided with AI company Anthropic in a copyright lawsuit, ruling that training AI models on copyrighted books constitutes fair use. This decision, which was followed by similar rulings in cases involving Meta and Microsoft, provides critical legal precedent for the use of copyrighted material in training AI models and is a major development in the ongoing debate over AI and intellectual property.
  • Suffolk Law Mandates AI Training: In a sign of how central AI is becoming to the legal profession, Suffolk University Law School has announced that it will require all incoming students to complete an AI literacy course.
  • The State of Consumer AI: For a data-driven look at how AI is being adopted by the public, check out the Menlo Consumer report on AI usage. It offers valuable insights into market trends and usage statistics that will shape the future of educational technology.
  • Denmark’s “Deepfake” Legislation: Denmark is set to pass landmark legislation giving citizens copyright over their own likeness, voice, and facial features. This is a first-of-its-kind law in Europe aimed at combating the misuse of AI for deepfakes.
  • AI and Legal Education Survey: A recent survey of law school deans by the American Bar Association found that 83% are already implementing AI into their courses, with over half offering dedicated AI classes.
  • AI for Literacy in the Philippines: A case study from Microsoft highlights how AI is being used to power a literacy breakthrough in the Philippines, demonstrating AI’s potential to address global challenges in educational equity.

Have questions about AI integration or want help developing course policies? Reach out to Kyle Fidalgo, Academic Technologist, BC Law Library, at kyle.fidalgo@bc.edu.

Ready to build your AI literacy? Explore resources at AI Foundations.
