Cinematic still: a wide establishing shot of a sea captain sailing into a foggy sea, side view of the ship with the captain standing at the helm. Moody color grading and muted tones with bright digital accents.

Workshop Recap

In the first two talks of the summer AI series, I covered the core concepts, terminology, and practical applications of publicly available generative AI tools like ChatGPT. My goal was to help you make informed decisions about using these tools in the classroom and beyond. By understanding the fundamentals of Large Language Models (LLMs), you’ll be better equipped to craft effective prompts and integrate these tools into your workflows. I also shared several of my personal use cases for working with AI. You can view the recorded sessions and access any workshop resources here.

A key concept to understand is that LLMs are, at their core, next token predictors. While this simplifies the complex computations involved, it helps explain why certain prompting techniques are effective. For example, ‘chain-of-thought’ prompting, which encourages the model to think through its answer before responding, yields particularly good results.
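To make the “next token predictor” idea concrete, here is a toy sketch in Python (my own illustration, not how any production LLM actually works): a tiny bigram model that repeatedly predicts the next word from counts in a small made-up corpus. Real models use neural networks over subword tokens and billions of parameters, but the basic loop of predict, append, repeat is the same.

```python
import random
from collections import defaultdict

# Toy illustration of next-token prediction: given the previous word, pick the
# next word according to how often it followed that word in a tiny corpus.
# (The corpus is invented for this example.)
corpus = (
    "the court held that the contract was valid . "
    "the court found that the claim was barred . "
    "the contract was signed by both parties ."
).split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts[word]
    return random.choices(list(followers), weights=list(followers.values()), k=1)[0]

# Generate a short continuation, one token at a time.
token = "the"
output = [token]
for _ in range(8):
    token = predict_next(token)
    output.append(token)
print(" ".join(output))
```

Running this a few times produces different, grammatical-looking fragments from the same handful of sentences, which is a useful intuition for why model output can be fluent without being grounded in truth.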

My primary goal for these sessions is to enhance our generative AI literacy at the Law School. The field is evolving rapidly: new AI models are released every few months, and many of them will soon be integrated into everyday devices and services. Apple, Google, and Microsoft have announced that their latest operating systems will include LLMs accessible at the system level, potentially as soon as this fall. These models already appear in several of these companies’ productivity tools if you sign up for early access. Soon, AI chat assistants will be available wherever you see a text input field on your devices.

Understanding when and why to use these tools, knowing their strengths and weaknesses, and being able to distinguish useful concepts and tools from hype will set you apart from your peers. This knowledge also helps us prepare our students for a future where these tools are commonplace in their field. Some of our students may even go on to create policies and regulations that shape how these technologies are used and integrated into society. That is why I believe it is crucial for us to stay ahead of the curve and equip ourselves with the necessary knowledge and skills. By fostering a deep understanding of generative AI, we can enhance our own capabilities, guide the next generation of legal professionals, and leverage AI ethically and effectively as we integrate these emerging technologies into legal education.

Observations and takeaways:

  • While all participants have used generative AI, only about 10% are using paid versions of the most powerful models.
  • Most of our community is still in the early stages of learning about and using generative AI, which aligns with broader adoption trends.
  • ‘Hallucination’ in generative AI is a feature, not a bug: the same next-token prediction that produces fluent, creative text can also produce plausible-sounding falsehoods.
  • AI adoption is moving quickly across industries; in one recent study of adoption rates in schools, roughly 49% of respondents reported using AI regularly and another 25% occasionally.
  • AI excels at some tasks and struggles with others, a phenomenon referred to as the ‘Jagged Frontier’. With practice, you can often adjust your prompting strategy to achieve better results for specific tasks (see the sketch after this list).
  • There’s still work to be done to understand how AI language models generate answers. These models aren’t inherently grounded in truth or up-to-date information, which doesn’t negate their usefulness but requires caution. For tasks demanding accuracy (like research or fact-dependent work), it’s crucial to verify AI-generated content. This raises important questions:
    • How can we assess the quality of AI-generated output?
    • How do we verify the accuracy of AI-provided citations or factual claims?
    • What strategies can we employ to effectively use AI while maintaining academic and professional standards?
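Picking up the prompting-strategy point from the ‘Jagged Frontier’ bullet above, here is a minimal sketch of what adjusting a prompt can look like in practice. It assumes the openai Python package and an API key; the model name, prompt wording, and helper function are placeholders I made up for illustration, not a recommendation of any particular product.

```python
from openai import OpenAI  # assumes the `openai` package and an OPENAI_API_KEY in the environment

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

# First attempt: a vague prompt that leaves the model to guess what you want.
vague_prompt = "Summarize this case for me."

# Refined attempt: same task, but with a role, an audience, explicit constraints,
# and an instruction to flag anything that needs human verification.
refined_prompt = (
    "You are helping a law professor prepare teaching materials. "
    "Summarize the case below for first-year law students in about 150 words. "
    "Use plain language, list the holding separately, and flag any fact or "
    "citation you are not certain about so I can verify it myself.\n\n"
    "CASE TEXT:\n{case_text}"
)

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Compare the two outputs side by side and judge which is more usable.
case_text = "..."  # paste the case text here
print(ask(vague_prompt + "\n\n" + case_text))
print(ask(refined_prompt.format(case_text=case_text)))
```

The refined prompt also bakes in the verification questions above by asking the model to flag anything it is unsure about, so a human reviewer knows where to look first.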

A Quick Aside: Practical Insights on AI Use

While I enjoy diving into the inner workings of these technologies, I realize you’re keen on more tangible examples and use cases relevant to your work. Generative AI chat assistants are a new kind of general-purpose technology, and we’re still in the early stages of discovering their full potential. In our first workshop, I noticed that while everyone had tried these tools, few had experienced their most powerful versions. The paid tiers typically offer superior performance, which is crucial for judging a tool’s true merit.

It’s important to understand that these general-purpose AI tools aren’t inherently grounded in truth. They’ve been trained on vast amounts of human-written text, learning patterns and relationships in language. That said, because of the nature of their training data, these models will provide accurate statements most of the time; the internet does contain a great deal of our collective human knowledge. However, that same training data also encompasses the full spectrum of human expression, including false information and the biases and prejudices that we as humans, try as we might to overcome them, still possess. This duality underscores the importance of critical thinking when using these tools. For tasks requiring factual accuracy, like research or writing, verification remains essential. That doesn’t negate their usefulness: these tools can still be invaluable for reviewing work, expanding ideas, or suggesting improvements. The key is understanding both their strengths and limitations, which comes with experience.

My goal is to guide you towards AI literacy and proficiency by focusing on the fundamentals of how these tools work and how to prompt them effectively. In future sessions, we’ll explore real-world examples of how I and others are using AI to enhance work quality and efficiency. These tools excel at tasks like explaining concepts, summarizing material, polishing drafts, and even assisting with coding (regardless of your development experience). Some, like ChatGPT and Google Gemini, offer advanced data analysis features that can run scripts directly in the chat window to perform computational tasks accurately. Anthropic just announced Claude’s latest model along with new capabilities, including the ability to run working code in a panel next to your chat window.
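To give a sense of what those data analysis features are doing behind the scenes, here is a minimal sketch of the kind of script such a tool might write and execute on your behalf; the scores and labels are made up for illustration. Asked the same question in plain chat, a model predicting text token by token may only estimate the answer, while an executed script gets the arithmetic exactly right.

```python
import statistics

# Hypothetical example: exam scores pasted into a chat. A data analysis feature
# writes and runs a small script like this instead of "guessing" the numbers.
scores = [72, 85, 91, 64, 88, 79, 95, 70, 83, 77]

print("count:", len(scores))
print("mean:", statistics.mean(scores))
print("median:", statistics.median(scores))
print("std dev:", round(statistics.stdev(scores), 2))
```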

When people ask how or why to use AI, my answer is often “it depends.” These tools’ usefulness varies based on the specific task and context. My recommendation is to start small, perhaps with something fun, and gradually explore professional applications. Spending time with these tools is the best way to understand where they fit into your workflow. As you become more familiar with them, you’ll discover innovative ways to leverage AI in your work, potentially uncovering applications we haven’t even considered yet.