A Measured Approach to Generative AI

As AI tools become increasingly accessible, people are understandably both excited and apprehensive, wondering how these technologies can benefit their lives and work, whether they live up to the hype, and where the ethical boundaries lie. This article offers perspective on how everyday users can use AI responsibly. While responsible use of these rapidly evolving tools is crucial, there’s no denying the great potential of AI to make users more productive, creative, and knowledgeable. With a thoughtful, measured approach, we can unlock those benefits while minimizing potential downsides. As you use these tools, you’ll better understand their strengths and weaknesses and the kinds of tasks where they excel.

What Are LLMs Anyway? A Quick Primer

If you’ve heard the term ‘LLM’ floating around and wondered what it really means, you’re not alone. LLMs, or Large Language Models, are specialized algorithms designed to predict the next word in a sequence. An algorithm is simply a set of step-by-step instructions that a computer follows to complete a specific task. It’s like a recipe for the computer. Other terms that fall under the same algorithmic umbrella include machine learning, natural language processing, and deep learning models.

The models used to power generative AI tools are particularly adept at simulating natural conversation, thanks to their massive training sets and groundbreaking advancements in their training techniques. Essentially, LLMs operate autoregressively: they use the context of the words they’ve already generated to predict the subsequent word, forming a coherent and often sophisticated chain of text.
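To make that loop concrete, here is a minimal toy sketch in Python. The tiny probability table is invented for illustration; a real model learns such probabilities from massive text corpora and conditions on the entire preceding text, not just the last word, but the generate-one-word-and-repeat loop is the same.

```python
# A toy next-word predictor. The probability table is invented for
# illustration; a real LLM learns these probabilities from massive corpora.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "report": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sat": {"quietly.": 1.0},
    "ran": {"away.": 1.0},
    "report": {"follows.": 1.0},
}

def generate(start_word: str, max_words: int = 10) -> str:
    """Repeatedly append the most likely next word (greedy decoding)."""
    words = [start_word]
    for _ in range(max_words):
        # Simplification: condition only on the last word generated.
        candidates = next_word_probs.get(words[-1])
        if not candidates:  # no known continuation; stop generating
            break
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("the"))  # -> "the cat sat quietly."
```

Real models also sample from the probabilities rather than always taking the single top choice, which is part of why the same prompt can yield different responses from one run to the next.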

Whether you’re looking to draft an email, generate a report, brainstorm the next big idea, or just have a chat, generative AI can assist you with ease. Think of these tools as ultra-advanced autocomplete systems with the capability to generate entire paragraphs of coherent text. The autocomplete example might sound like an oversimplification, but it captures the essence: because these models are trained on vast amounts of human writing and knowledge, using cutting-edge training methods, they excel at mimicking natural conversation.

The first 30 minutes or so of this YouTube video do an amazing job of breaking this concept down before moving on to more code-heavy examples. (1)

How Well Do AI Tools Actually Understand?

Sure, AI tools like ChatGPT and Claude are revolutionizing the way we generate text, but let’s pause for just a moment. You should first understand their limitations before relying heavily on their outputs.

Fundamentally, these AI tools are adept at text generation but lack true understanding, reasoning, and common sense. Unlike humans, they do not possess real-world experiences, expertise, or emotions to guide their responses. Remember, their knowledge is derived from analyzing extensive datasets consisting of large volumes of human-written text. This allows them to predict the next sequence of words in an incredibly capable and impressive way, but they are still learning the ropes when it comes to reasoning and logic. Ask one a simple logic puzzle and it may fall hilariously short of answering it.

Additionally, you should know that the current generation of AI tools can, and will, confidently tell you false information without proper guidance. One contributing factor is that they are trained to be agreeable and to provide responses that are as helpful as possible, part of an effort to make these tools more user-friendly. So rather than disappoint you with no answer, they might take creative liberties and fill in the gaps with less-than-accurate statements.

The AI community has coined the term “hallucinations” for this phenomenon. The good news is that as these models improve, hallucinations occur less and less. In fact, some of the engineers behind these models think it’s just a matter of time before hallucinations are a solved issue (4). For now, it’s still best to approach these tools with a “trust but verify” mindset.

With well-thought-out prompts and some research-backed prompting techniques, you can minimize many of the logical pitfalls and reduce the hallucinations that models may fall victim to. Nonetheless, at this time, they cannot replace human expertise, logical thinking, and critical analysis. While they can augment and enhance our work, it’s important to remember they are not factual experts, nor do they possess a human-like understanding. Gaining insight into their strengths as text generators and their limitations as contextual thinkers can set you up for success.

Demystifying Prompt Engineering

Luckily, understanding the basics of prompt engineering for AI tools like ChatGPT and others isn’t as complicated as it may seem. When using these AI tools, there are some key steps to follow. First, reflect on the strengths of generative AI tools – where they excel and where they struggle. Hopefully, this article is helping you begin that process!

Next, become familiar with the key components of a good prompt. The most important components are the task, context, and role. The task is the most straightforward, as it’s the question you are asking. Prompt context is simply additional information that the AI tool can use to steer the response in a specific direction. Finally, the role is a multifunctional component that can set the tone, language, and level of complexity of the final output. I’ve written another article you can check out here for more information on prompt structure.
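As a concrete sketch, here is one hypothetical way those three components could be assembled into a single prompt in Python. The helper function and the wording are my own illustration, not a required format.

```python
def build_prompt(role: str, context: str, task: str) -> str:
    """Combine the three core prompt components into one message."""
    return (
        f"You are {role}.\n\n"
        f"Context: {context}\n\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    role="a paralegal who explains legal concepts in plain English",
    context="The reader is a first-year law student preparing for exams.",
    task="Summarize the elements of a valid contract in five bullet points.",
)
print(prompt)
```

Notice how the role shapes tone and complexity while the context narrows the audience; the same task with a different role and context would produce a very different answer.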

Then you can learn prompting techniques to handle situations where a basic prompt fails, like including exemplars or model responses. You can also reach for advanced techniques such as Chain of Thought (2) or Tree of Thoughts (3) methods. There are even keywords and phrases you can use to elicit better responses. A few of these methods will be discussed later in this article. With practice, you’ll be crafting tailored prompts and unlocking the vast potential of AI tools in no time.
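For instance, a prompt with exemplars (often called few-shot prompting) supplies a couple of worked examples before the real question so the model can infer the desired format. The clauses below are invented for illustration:

```python
# A few-shot prompt: two worked exemplars show the model the desired
# format before the actual task. All content here is invented.
few_shot_prompt = """Classify each clause as 'Favorable' or 'Unfavorable' to the tenant.

Clause: Tenant may terminate the lease with 30 days' notice.
Answer: Favorable

Clause: Landlord may raise rent at any time without notice.
Answer: Unfavorable

Clause: Tenant is responsible for all repairs, including structural damage.
Answer:"""

print(few_shot_prompt)
```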

Your Everything Co-Pilot

A helpful way to think of AI tools is to liken them to a co-pilot or personal assistant. Just like when delegating tasks to an assistant, providing clear instructions and necessary details can significantly impact the success of the task at hand. The same principle applies when interacting with AI tools—your input often directly influences the quality of the output.

Sure, there are times when you might welcome unexpected or creative responses from AI, perhaps for entertainment or brainstorming sessions. However, when you have a specific goal in mind, it’s best not to leave it up to chance. This advice rings true across a myriad of use cases. For instance, if you’re looking to draft an email, providing the AI with a rough outline or key bullet points can steer the generated content in the desired direction. Similarly, if you need help crafting an outline for a teaching topic, you might start by supplying the AI with your course learning objectives, the main topic, and any preferred teaching methods. This will typically yield a more useful and tailored output.

By viewing AI tools as your everything co-pilot, you can foster a more effective and collaborative relationship with the technology, ensuring that it serves as a valuable extension of your own capabilities.

Striking a Balance

As we venture into the era of advanced AI and Large Language Models (LLMs), it’s crucial to adopt a measured approach when integrating these tools into professional and academic settings. While LLMs like GPT-4 are undoubtedly powerful, offering near-instantaneous access to a vast array of information and problem-solving capabilities, they are not infallible. The magic really happens when these advanced algorithms are guided by the nuanced understanding and contextual awareness that only human expertise can provide. Think of it as a partnership: the human provides the “why” and the “what,” steering the interaction, while the LLM offers the “how,” delivering information with astonishing speed.

It’s easy to either overestimate or underestimate the capabilities of LLMs. Initial interactions often inspire awe, but a closer examination can reveal gaps, sometimes glaring, in the depth and accuracy of their responses. However, don’t let this lead you to dismiss LLMs as mere novelties. With the right prompting methods and an understanding of their limitations, these models can serve as invaluable aides for research, idea generation, and even educational tasks. Remember, an LLM is only as effective as the quality of the interaction it engages in; it’s the nuanced prompts and contextual information from the user that unlock its full potential. In the realm of human-AI collaboration, we’re learning that it’s not just about augmenting artificial intelligence, but also about augmenting human intelligence (6).

Loose Ideas

One of the key benefits of advanced natural language processing tools is their capacity to transform your initial, unrefined ideas into polished, coherent text. You don’t have to fully think through an initial idea before prompting the AI. Whether it’s a rough outline, a list of disjointed bullet points, or even a partially formed thought, the AI is designed to analyze and connect the pieces, resulting in coherent and well-articulated text.

This feature offers a significant time-saving benefit. Imagine you’re a law student working on a case study analysis. You could input the key facts, legal issues, and potential arguments into the AI, and it could generate a more structured and comprehensive case analysis working draft for you in mere seconds.

So, don’t hesitate to engage with AI even when your initial concept isn’t fully developed. Just convey the essence of your thoughts as best as you can, and let the AI do the heavy lifting. This approach encourages faster ideation and minimizes the need to overthink in the early stages of a project. Case in point, that is exactly how this article was written!

Think Step by Step

One highly practical phrase to remember when working with LLMs is “think step by step” (2). This phrase, and variations of it, proves invaluable when tackling tasks that require reasoning, logic, or multi-step problem-solving. By prompting the LLM with “think step by step,” you encourage it to outline each step of the process before arriving at an answer, a technique that has been shown to improve the quality of the response.

An additional advantage of this method is transparency. Utilizing this phrase allows you to see the LLM’s “thought process,” so to speak. This transparency offers you the opportunity to course-correct or add further steps to refine the output. For instance, if you’re researching a complex legal issue, you could prompt the AI with “Think step by step about how to approach researching this legal issue.” The LLM would then lay out a series of steps, helping you to understand the underlying logic and potentially identify any gaps or areas for further investigation.
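In practice, applying the technique can be as simple as appending the cue to your question. A minimal sketch, with a hypothetical legal question invented for illustration:

```python
question = (
    "A statute was enacted in 2019 and amended in 2021. "
    "Which version applies to conduct that occurred in 2020?"
)

# Appending the chain-of-thought cue encourages the model to lay out
# its reasoning before answering, rather than jumping to a conclusion.
prompt = f"{question}\n\nThink step by step about how to answer this."
print(prompt)
```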

Break it Down

When you encounter difficulties with an AI tool not quite achieving the task you’ve set, consider breaking that task down into smaller sub-tasks. It’s the same approach you’d take when tackling any complex project or problem. Rarely do we have a fully fleshed-out strategy right from the start. Instead, here’s a typical thought process:

  • Identify Steps: Think logically through the steps involved in the larger task.
  • Outline Components: Break down the task into smaller, manageable sub-tasks or components.
  • Evaluate Approaches: Assess the best way to tackle each component for optimal results.
  • Execute and Compile: Complete each sub-task and bring them together into the final deliverable.

This is often an iterative process, meaning you’ll likely have a natural back-and-forth with the generative AI tool to fine-tune your results. One example scenario could involve using AI to help review complicated contract language for a client. Instead of prompting the AI with “Review this contract for compliance,” you could divide the task into smaller parts like: “Identify clauses related to data protection,” “List the obligations of each party,” and “Check for any indemnification provisions.” This allows you to focus on each aspect of the contract before putting it all together. Note also the specificity of these prompts: usually, the more precise and specific you can be with the wording and details in your prompt, the better.
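Here is a rough sketch of that decomposition in Python. The ask_model function is a hypothetical stand-in for whichever AI tool you actually use; the point is the structure of one focused sub-prompt per sub-task.

```python
contract_text = "...full contract text would go here..."

# Each sub-task gets its own focused prompt instead of one broad
# "review this contract" request.
sub_tasks = [
    "Identify clauses related to data protection.",
    "List the obligations of each party.",
    "Check for any indemnification provisions.",
]

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to your AI tool of choice."""
    return f"[model response to: {prompt[:60]}...]"

results = []
for task in sub_tasks:
    prompt = f"{task}\n\nContract:\n{contract_text}"
    results.append(ask_model(prompt))

# Compile the focused answers into one working draft for human review.
print("\n\n".join(results))
```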

Words of Caution

While generative AI tools offer a wealth of advantages, they also come with a set of limitations and potential risks that users should be aware of. While not an exhaustive list, here are some key cautionary points:

Bias in AI

It’s important to recognize that AI tools can inadvertently perpetuate or even amplify existing biases present in the data they’ve been trained on. While advancements are being made to mitigate this issue, users should always critically evaluate the information generated by AI.

Ethical and Moral Boundaries

In the rush of excitement over technological advancements, it’s easy to overlook the ethical and moral responsibilities that come with using AI tools, especially in fields like law that have nuanced ethical considerations. Unlike humans, AI systems may not fully grasp the ethical and moral complexities involved in legal matters. This gap in understanding can be risky; blind reliance on AI could inadvertently lead to ethical violations, such as misrepresentation of the law or conflicts of interest.

Personal Information

Avoid inputting any personal, sensitive, or confidential information when interacting with public AI platforms. While many platforms have data protection measures, they’re not foolproof, and the risk of data breaches or misuse exists.

Trust But Verify

AI can sometimes generate information that seems plausible but is actually incorrect or misleading, a.k.a. hallucinating. While AI can produce information quickly, it doesn’t always verify the facts. For tasks that require complete factual accuracy, always cross-reference any critical information. One way to reduce this risk is to provide the model with accurate and relevant information when you have it. In other words, make sure it knows what it doesn’t know.
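One hypothetical way to do that is to ground the prompt in source text you trust and instruct the model to stay within it. The clause and question below are invented for illustration:

```python
source_material = (
    "Section 4.2: Either party may terminate this agreement "
    "with 60 days' written notice."
)

# Grounding the prompt in trusted source text, and telling the model to
# say so when the answer isn't there, shrinks the room for hallucination.
prompt = (
    "Answer using ONLY the source text below. If the answer is not in "
    "the text, say 'Not stated in the source.'\n\n"
    f"Source:\n{source_material}\n\n"
    "Question: How much notice is required to terminate the agreement?"
)
print(prompt)
```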

Final Thoughts & Next Steps

As we’ve explored throughout this article, generative AI has immense potential but also comes with its own set of limitations and ethical considerations. It’s crucial to view these tools as complements to human capabilities, not replacements. By understanding the strengths and weaknesses of AI, taking a modular approach to complex tasks, and exercising a bit of caution, you can maximize the benefits while minimizing the risks. No matter your use case, the future of generative AI offers exciting opportunities for those willing to engage responsibly. As you prepare to delve deeper into the transformative world of generative AI, continuous learning and active engagement are key. Be sure to continue checking the AI Hub for articles, guides, and other resources to further your knowledge and understanding of the use of AI in education, legal practice, and beyond.

References:

Note: This article was written in collaboration with Claude 2 and ChatGPT

  1. A Hacker’s Guide to Language Models
  2. An automatically discovered chain-of-thought prompt generalizes to novel models and datasets
  3. Tree of Thoughts: Deliberate Problem Solving with Large Language Models
  4. Where We Go From Here with OpenAI’s Mira Murati
  5. Let’s Verify Step by Step
  6. Centaurs and Cyborgs on the Jagged Frontier