Course AI Policy Considerations & Resources

Get to a solid first draft of a course AI policy without overcomplicating it. We’ll explore a few practical policy frameworks you can adapt to your teaching style, plus the key “surrounding guidance” that makes policy actually work in practice. You’ll leave with templates, examples, and a clear next step for updating or creating your syllabus language.

When: Tuesday, January 6th
Time: 10:00 am – 11:15 am
Where: Law Library 300 & Remote Option via Zoom
Who: Faculty-focused (TAs welcome)

Workshop Resources

Policy Guidance Resources

Workshop Summary

This summary was produced using AI-assisted transcript processing with human review and minor edits for accuracy. Please contact Kyle Fidalgo, Academic Technologist, with questions or edits.

Key Takeaways

  1. Start where you are. You don’t need to be an AI expert to create an effective policy. The BC default (restrictive) is a safe starting point. Build from there.
  2. Be specific over comprehensive. A clear policy for three assignments beats a vague policy that tries to cover everything. Specificity reduces student anxiety and gives you defensible standards.
  3. Think about AI policy as a teaching tool. Your policy models the professional judgment and discernment you want students to develop. It’s not just rules — it’s formation.
  4. Design, don’t detect. Instead of policing AI use, design assignments that are harder to outsource and build in verification requirements that teach real professional skills.
  5. Use the human analogy. When you’re unsure about whether a use is appropriate, ask: “Would this be okay if a human collaborator did it?” That heuristic cuts through a lot of the ambiguity.
  6. Involve your students. Share your default stance, then open the conversation. Where do they see AI being helpful or harmful? What’s their comfort level? This builds buy-in, develops their own ethical engagement, and humanizes the process.
  7. Give yourself (and your students) grace. We’re all learning together. Be willing to adapt, iterate, and have honest conversations about what’s working and what’s not.

What We Covered


The Big Idea

AI tools are no longer optional encounters for law students. BC provides access to leading AI models such as Google Gemini, and legal research tools Lexis and Westlaw are both integrating AI assistants directly into their platforms. As Professor Elisabeth Keller noted during the workshop, Westlaw’s new landing page will put deep AI access front and center for students — not buried in a menu, but right where they start their research.

Students now have access to the most capable AI tools from leading model labs, so your course AI policy should reflect that reality. It’s a chance to make intentional pedagogical choices that protect learning, promote fairness, and prepare students for a profession that increasingly expects technological competence. Think of your AI policy as a teaching tool — it’s your first opportunity to model the kind of thoughtful, intentional AI use you want students to develop.


The Path to AI Fluency

Before diving into policy specifics, the workshop grounded the conversation in the approach to AI fluency outlined by BC Law’s Academic Technologist:

AI Literacy — The essential baseline. Knowing what generative AI is, what the tools can do, and what their limitations and risks are. This is the foundational awareness everyone should have.

AI Competency — The practical layer. Applying AI tools to accomplish defined tasks within your field. This is the skill of using AI to help you achieve your goals.

AI Fluency — The holistic goal. Strategic, reflective, adaptive integration of AI into professional thinking and workflows. This is building the intuition and judgment to apply AI effectively, ethically, and safely.

“AI fluency is not a destination. Just like the practice of law, there’s the practice of working well with these new probabilistic partners.”

This progression matters for policy because where your students fall on this spectrum should inform how much AI latitude you give them. A 1L building foundational legal reasoning skills needs different guardrails than a 3L in a clinic who’s ready to supervise AI as a junior collaborator.

Recommended AI Fluency Resources

  • AI Foundations — BC Law’s self-paced AI literacy learning platform
  • Anthropic AI Fluency Course — Covers core competencies of effective AI use, with tracks for general use, teaching and learning, and teaching AI fluency

The 4Ds: A Framework for AI Competency

The workshop introduced Anthropic’s four core competencies of AI fluency — the 4Ds — which serve as both a personal development framework and a scaffolding tool for policy design:

Delegation — Knowing what to hand off to AI, what to keep for yourself, and when to co-create. Policy connection: define when students may or may not use AI.

Description — Articulating your vision and ideas clearly enough that AI can collaborate effectively. Policy connection: guide students on how to communicate with AI tools.

Discernment — Critically evaluating AI output, checking for accuracy, quality, and reasoning. Policy connection: require verification and critical review of all AI-assisted work.

Diligence — Taking responsibility for the final output, being transparent about AI use, and giving proper attribution. Policy connection: establish accountability standards and attribution expectations.

The Description–Discernment loop is particularly relevant: effective AI use is iterative, not a one-shot transaction. You describe what you need, evaluate what you get, refine your description, evaluate again. This is the core workflow, and it mirrors good professional practice — the legal equivalent of drafting, reviewing, and revising.


Five Design Basics for Working with AI

Drawn from collaborative work with BC’s Center for Teaching Excellence and Center for Digital Innovation and Learning, these principles frame how to think about AI collaboration:

  1. AI is a new kind of “sentient” design material — Not actually sentient, but sufficiently intelligent that human collaboration analogies work well. These are probabilistic, non-deterministic partners that are highly adaptable and context-aware.
  2. Natural language is our design tool — If you can write, speak, and articulate what you’re trying to do, you can collaborate well with AI. No technical skills required.
  3. Planning and iteration is the work — Working with AI is simple but not easy. Building intuition about what AI does well (and where it fails) requires practice and iteration.
  4. When in doubt, ask AI for help — AI is both the vehicle and the guide. Sometimes you’re the teacher, showing it what good looks like. Sometimes you flip the script and ask it to teach you.
  5. Context is everything — AI is lost without your goals, context, and intent. The more explicit you are about what’s in your head, the better the collaboration.

BC Law’s Current Default Policy

The workshop reviewed BC Law’s institutional AI policy, which lives in the Academic Integrity Standards and takes a restrictive-by-default stance:

  • On cheating: Work generated through “unsanctioned and/or undocumented artificial intelligence assistance” constitutes cheating
  • On plagiarism: Using AI text generators to perform “central requirements of an assignment” (reading, synthesizing, interpreting, writing, coding) without both instructor permission and complete attribution
  • On GenAI tools: Use of LLMs for academic work is “prohibited unless expressly permitted by the course instructor”

This is a safe starting point — most institutions start restrictive — but it leaves open questions:

  • What does “preparing to write” actually mean?
  • How do you cite AI if required? What does “full attribution” look like in practice?
  • Where does learning with AI end and offloading to AI begin?

These ambiguities are precisely why course-level policies matter. The institutional default sets the floor; your course policy adds the specificity students need.


The Permissibility Spectrum: Choosing Your Default

The workshop presented a spectrum of AI permissibility — three default stances that most institutional policies fall into, with visual badges from the Center for Digital Innovation and Learning that faculty can use in Canvas, syllabi, and assignment instructions:

No Use (AI-Free)

Best for: Foundational skill building, core knowledge acquisition, exams and timed assessments, assignments where independent learning is the objective.

Sample language: “In this course, all work must be your own. AI tools (ChatGPT, Gemini, Claude, etc.) are prohibited for all assignments and assessments. This ensures you develop the core analytical and writing skills essential for legal practice.”

The workshop played a clip highlighting the trap of thinking you know material because you had a good conversation with AI about it. Understanding a concept in dialogue is not the same as being able to apply it independently.

Conditional Use

Best for: Specific assignments, designated tasks within assignments (brainstorming, proofreading), situations where process is part of the learning objective.

Key requirement: Always includes accountability, verification, and potentially citation/attribution.

The challenge: This is the “murky middle” — the most adaptable option but also the one requiring the most clarity and specificity. Ambiguity here creates anxiety for students and enforcement headaches for faculty.

AI-Friendly

Best for: Low-stakes exploratory assignments, exercises where learning AI is the objective, advanced students (e.g., 3Ls in clinics) who have foundational skills and can supervise AI as a collaborator, courses where AI fluency for legal professionals is a core learning goal.

Critical caveat: AI-friendly does not mean without accountability. It still requires discernment, ownership, and verification. Think of the student as a supervising attorney — they set the strategic vision, coach the AI through the process, and take full responsibility for the final work product.


The Student Use Scale: Levels 0–4

Adapted from the University of Kentucky’s “UK Advance” task force, this scale provides a more granular way to think about student–AI agency:

Student-First Levels

Level 0 — Student as sole thinker and author. No AI involvement; the student does all work independently.

Level 1 — Student as primary creator, AI as editor. The student generates all core ideas, content, and structure. AI reviews for grammar, citations, and polish — suggesting, not writing — and comes in only at the end of the process.

Level 2 — Student as conceptual architect. AI comes in at the beginning as a thought partner (research plans, brainstorming, organizing reasoning). The final prose is still the student’s responsibility.

Shared Agency Levels

Level 3 — Student and AI as co-creators. Substantial back-and-forth throughout the assignment; student and AI co-create content segments following the Description–Discernment loop, and the student applies critical review and verification at the end.

Level 4 — Student as supervising attorney. AI generates solutions across many stages; the student provides strategic vision and initial reasoning, reviews output for soundness and quality, and puts their final stamp of approval on it.

You can scaffold these levels across:

  • A student’s educational career (different expectations for 1Ls, 2Ls, 3Ls)
  • A semester (foundational work first, AI exercises later with built-in reflection)
  • Discrete steps in an assignment (brainstorming with AI at the start, solo drafting in the middle, AI for final polish)
  • A single classroom exercise or simulation

Four Core Elements Every Policy Needs

Regardless of where you land on the permissibility spectrum, every effective course AI policy should address:

  1. Definition — What counts as generative AI in the context of your course? Name the tools explicitly (ChatGPT, Gemini, Claude, Lexis AI, Westlaw AI, etc.)
  2. Permissions — What’s allowed, when, and why? What’s prohibited, and what’s the rationale? Be specific about tasks, assignments, and stages of work.
  3. Process — How do students acknowledge and take responsibility for AI use? What are the attribution and accountability requirements?
  4. Alternatives (optional but recommended) — What if a student objects to AI use? There are legitimate ethical, privacy, and equity concerns. Having an alternative path shows respect for student autonomy.

The Citation Burden Trap

Many well-intentioned policies require documentation of every AI interaction. In practice, this creates problems:

  • AI collaboration is nonlinear. Students may work across multiple tools and multiple sessions, bringing content in and out of different systems. Requiring a full transcript of that process could mean hundreds of pages of documentation.
  • Logging isn’t learning. If students are anxious about documenting every interaction, they’re focused on compliance rather than the actual work.
  • Review burden on you. Are you actually going to read all those logs? If not, you’ve created busywork for students with no pedagogical payoff.
  • False precision. The prompts going into AI are only part of the story. The real intellectual work happens in what students do with the output.

Better alternatives:

  • Cite primary sources, not AI conversations. If AI pointed them to a case or article, cite the case or article.
  • Reserve detailed documentation for select assignments where the process is the learning objective.
  • Focus on output quality and demonstrated understanding rather than process logging.
  • Brief explanatory statements — a short appendix describing tools used, a summary of the initial prompt, and verification methods.
  • Reflective process — students explain why they used AI, how they evaluated the output, and whether they would use it again for that purpose.

Chicago Law School offers useful framing here: “A student cannot avoid limits on the use of generative AI by attributing their work to generative AI.” Attribution doesn’t equal permission. If AI writes the entire paper and a student just cites it, they’ve still skirted the learning objectives.

The Ideation-to-Drafting Gap

When students use AI to brainstorm, clarify concepts, or refine ideas, then switch to solo drafting, the transition creates genuine uncertainty: Is this my idea or the AI’s? Faculty have reported students coming to them worried about unintentional plagiarism after using AI to learn a concept and then writing about it in their own words.

The human analogy helps: Would you require attribution if a student talked through ideas with a study group, a TA, or a professor during office hours, then wrote their paper independently? Apply the same standard to AI-assisted ideation.

The “Helpful” Trap

AI’s default mode is to be an overeager, helpful collaborator. Even with guardrails, after a productive learning conversation, AI will often offer: “Would you like me to write that section for you?” Teaching students to decline help when AI offers to take over is itself a professional skill — part of the Delegation competency. Knowing when not to delegate is as important as knowing when to hand work off.

The Enforcement Reality: AI Detectors Don’t Work

The workshop was direct on this point: do not rely on AI detection tools.

  • High false positive rates (Kyle demonstrated that human-written text can trigger detectors, and lightly edited AI text can evade them)
  • Biased against non-native English speakers and neurodivergent students
  • Rapidly obsolete as new models emerge every 3–6 months
  • Creates a culture of suspicion rather than learning

Better approach: Design, don’t detect.

  • Assignments requiring personal reflection, specific course knowledge, or in-class oral presentation are harder to outsource to AI
  • Build in strict liability for hallucinations (see below)
  • Teach transparent attribution as a professional skill

Strict Liability for Hallucinations: A Practical Enforcement Strategy

Even if we can’t reliably detect AI writing, we can detect errors, whether they come from a human or an AI. This reframes enforcement around verification and accuracy — skills that matter in legal practice regardless of AI.

The AI Hallucination Tracker — a crowdsourced database documenting cases involving AI errors — has logged 485 submissions since Q2 2023:

  • 375 fabrications (made-up citations, facts, people)
  • 163 false quotes (misquoted material)
  • 208 instances of misrepresented content (misapplied reasoning or precedent)
  • 12 cases of outdated advice

Professor Jake shared that he’s contributed submissions to this database from his own experience.

What to Look For

  • Content that’s out of scope or too advanced for the course level
  • Misinterpreted law or fabricated legal elements
  • Citations to non-existent sources (not typos — entirely fabricated references)
  • Facts or holdings that don’t match the actual case

How to Use This

Frame it as professional responsibility, not punishment:

  • “It’s your responsibility — your professional duty — to verify all citations are real, confirm facts and holdings are accurate, and ensure reasoning is sound.”
  • Position it as a starting point for conversation, not an accusation
  • Remind students: they can and will make mistakes without AI, too. Give grace, but maintain standards.

A Case Study from the Workshop: Jake’s Contracts Exam

Professor Jake shared a revealing experience from allowing students to use AI (specifically Lexis AI) on a contracts final exam for the first time. His observations illustrate several key AI fluency concepts:

What happened: Students were allowed to use any AI tool on Lexis for a contract research question. Jake had prior semesters of traditional exam results for comparison.

What he expected: The “rising tide lifts all boats” effect — everyone would do well, diminishing the distinction between students.

What actually happened:

  • Few students at the very bottom (AI did raise the floor)
  • Overall quality was less impressive than expected
  • Most surprisingly: the top 3 students from the midterm all made the same error — incorrectly including the UCC in a pure services contract question

The root cause: Lexis AI returned an answer that included a statute that sounded exactly on point but was wrong for the scenario. The AI’s confident, authoritative presentation of a plausible-but-incorrect answer led strong students down the wrong path.

The AI fluency lesson: This is the Discernment competency in action — or rather, what happens in its absence. AI can be persuasively wrong. Students with strong fundamentals were led astray by AI’s confident presentation of an attractive wrong answer. This is precisely why the Description–Discernment loop matters: good students need to learn to question AI output even (especially) when it looks right.

As Professor Jennifer Connor observed from the chat: “From a legal research and writing standpoint, the discernment stage — checking for accuracy — is actually trickier and more time-consuming than doing something from scratch.” And yet, research shows that professionals skilled in both their domain and AI use still gain efficiency from the review-and-verify approach, because their expertise lets them quickly spot what’s wrong. See GDPval for one example.


The Professional Responsibility Framing

The workshop connected AI policy to something law students are already learning: professional ethics. The ABA Model Rules of Professional Conduct are increasingly relevant to AI use:

Rule 1.1 (Competence): “A lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”

The ABA is clear that technological competence is a professional obligation. Preparing students to meet it is professional formation in practice. Your AI policy can frame AI fluency as part of the ethical obligations students will carry into their careers.

Relevant ABA guidance:

  • Model Rules on competence, communication, supervision, and confidentiality all apply to AI use
  • ABA Formal Opinion 512 addresses AI competency obligations directly

Next Steps

  • Invest in your own AI literacy. The AI Foundations site and Anthropic’s fluency course are excellent starting points.
  • Draft or revise your AI policy. Use the policy development guide and the Gemini Gem to get started.
  • Try the NotebookLM research notebook. Explore how other institutions handle policy language and find what resonates with your course context.
  • Schedule a one-on-one consultation. Kyle is available for individual support on policy development, assignment design, or AI integration questions.
  • Attend the remaining workshops. Sessions 2–4 cover creating course AI assistants, developing course materials with AI, and hands-on AI productivity labs.