AI Hallucinations Are a YOU Problem
Ben • August 30, 2024

AI hallucinations are an overblown problem that scares users away and holds you back. Hallucinations are a frequent talking point when discussing generative AI tools like ChatGPT, Google Gemini, Microsoft Copilot, and others. While the term “hallucination” might make it sound like the AI is malfunctioning, the reality is often more nuanced. In fact, as we’ll explore in this blog post, simply understanding the basics of how generative AI works will drastically improve your outputs and limit real or perceived hallucinations. You have significant control over this concern.

It’s important to note that hallucinations are often a misconception among new users of generative AI. For experienced users, these “hallucinations” are less a problem than an occasional speed bump. By understanding the underlying mechanics of AI and learning to effectively prompt and guide it, users can significantly reduce the occurrence of these perceived hallucinations.

What Are AI Hallucinations?

Before we dive into the “why” and “how,” let’s define what we mean by AI hallucinations. In the context of generative AI, a hallucination happens when the AI produces information or content that is factually incorrect, misleading, nonsensical, or simply not aligned with your expectations. These can range from minor inaccuracies to completely fabricated content. While AI is highly sophisticated, it’s not infallible, and understanding how it processes prompts can reduce the likelihood of encountering these errors. Being the human in the loop, both on the front end of prompting and on the back end of editing and fact-checking, dramatically limits any risk of hallucination.

User Problem 1: The Broad & Open-Ended Question

The first common scenario where hallucinations occur is when users provide broad, open-ended prompts without sufficient context or direction. Many users might ask a vague question, receive an unexpected response, and then quickly label it as a “hallucination.” However, AI is simply doing its best to answer the question based on the limited information it was given. The user has given AI a very large blank canvas with innumerable opportunities.

Remember, the prompts you input are the single biggest driver of AI’s output. A well-crafted prompt provides AI with a clear direction, reducing the chances of misinterpretation and, consequently, hallucinations.

Poor Example: Imagine typing in “Write a blog about the events of Ben and Bryce in Las Vegas.” You’re likely to get a wildly different response every single time. Every reader will have a different idea of what this final blog could be about.

Improved Example: Contrast that with “Write a 5-minute blog about the events of Ben and Bryce at a banking conference in Las Vegas, focusing on their discussion about data and AI usage, with three key takeaways on each topic, include a call to action to contact Ben to learn more.”

See the difference? That second prompt is laser-focused, leaving little room for the AI to “hallucinate”.

New users often fire off a single prompt, much like the Poor Example, which leads to uninspiring AI results. Instead of recognizing it’s a “You Problem,” they blame AI for hallucinating. Plus, it’s important to continue to “chat” with AI and refine and improve your output before final editing, as sketched in the example below.
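If you ever work with AI through code rather than a chat window, the same principle applies. Below is a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name (both assumptions, not a prescription), that runs the Poor and Improved prompts side by side so you can see how much the instructions alone change the output:

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY set in the environment. The model name is an
# assumption; substitute whichever chat model you use.
from openai import OpenAI

client = OpenAI()

poor_prompt = "Write a blog about the events of Ben and Bryce in Las Vegas."
improved_prompt = (
    "Write a 5-minute blog about the events of Ben and Bryce at a banking "
    "conference in Las Vegas, focusing on their discussion about data and "
    "AI usage, with three key takeaways on each topic, include a call to "
    "action to contact Ben to learn more."
)

for label, prompt in [("Poor", poor_prompt), ("Improved", improved_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    # Print the first few hundred characters of each result for comparison.
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content[:300])
```

Run the vague prompt a few times and you’ll get a different blog every time; the specific prompt converges on what you actually asked for.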

User Problem 2: The Information Desert

The second scenario where hallucinations tend to crop up is when you’re venturing into topics with limited information or trying to creatively combine seemingly unrelated concepts. While a lack of information doesn’t necessarily mean AI will hallucinate, it can lead to unexpected and sometimes inaccurate results. AI is designed to find solutions, even when presented with seemingly incompatible ingredients. Mixing these ingredients can produce strange and unexpected results, which can look like hallucinations.

Example: If you ask AI to “Explain the intersection of quantum physics and seed agronomy of Northern Dent corn”, AI will try to connect these two seemingly unrelated fields.

AI will draw from limited information on each topic and propose theories or connections that, upon closer examination, might be inaccurate or speculative. This is because AI is always searching for patterns and relationships, even when the data is limited or contradictory. This can lead to really interesting results. It’s important to remember that these results might not always be accurate; it may be a true hallucination while AI is attempting to fulfill your request. Or it might be lightning in a bottle.

Helpful Prompts: Here are some prompts you can use to gauge whether AI is hallucinating or providing amazing new insights.

Prompt 1: “Cite your sources that helped you draw these conclusions.”

Prompt 2: “I don’t understand the content you produced. Give me a detailed explanation of how you arrived at these results. Justify your answer.”

Prompt 3: “Challenge your thinking, provide a rebuttal to your conclusions to help me think more critically about this topic.”

You can see that with these prompts you’re not dismissing the potential hallucination; you’re challenging AI to defend its work. You might find it’s all made up, though that’s unlikely. More often, this is the first step toward true knowledge and value, and you can follow the new concepts to better outcomes.
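For those scripting these checks, the important detail is that each challenge must be a follow-up turn in the same conversation, so the model has to defend the specific answer it already gave rather than answer fresh. Here is a minimal sketch, again assuming the OpenAI Python SDK and an illustrative model name:

```python
# A minimal sketch of the challenge-and-defend loop, assuming the OpenAI
# Python SDK and an illustrative model name.
from openai import OpenAI

client = OpenAI()

def ask(messages):
    """Send the full history so the model must defend its earlier answers."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=messages,
    )
    return response.choices[0].message.content

# Start with the speculative question from the example above.
messages = [{
    "role": "user",
    "content": "Explain the intersection of quantum physics and seed "
               "agronomy of Northern Dent corn.",
}]
messages.append({"role": "assistant", "content": ask(messages)})

# Press the model with each verification prompt, in the same conversation.
challenges = [
    "Cite your sources that helped you draw these conclusions.",
    "I don't understand the content you produced. Give me a detailed "
    "explanation of how you arrived at these results. Justify your answer.",
    "Challenge your thinking, provide a rebuttal to your conclusions to "
    "help me think more critically about this topic.",
]
for challenge in challenges:
    messages.append({"role": "user", "content": challenge})
    reply = ask(messages)
    messages.append({"role": "assistant", "content": reply})
    print(f"--- {challenge[:40]}... ---")
    print(reply[:300])
```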

User Problem 3: The Creativity Conundrum & The Eagerness to Please

There’s a third area where I often see AI hallucinations: when we push it to be overly creative. In its eagerness to assist, AI might lose sight of the boundary between generating imaginative content and producing fact-based results. This is where the human in the loop becomes crucial. While AI can be a powerful tool for creativity, it’s essential to remember that it’s a tool, not a replacement for human judgment.

Example: If you ask for the “10 best restaurants in Madison, WI,” AI will do a fantastic job.

If you push it to list the “2000 best restaurants” in a city that doesn’t even have close to that many, it might start making things up to fulfill your request. This is where the human in the loop can intervene. By asking AI follow-up questions or providing additional context, you can guide it back to reality and prevent it from generating nonsensical content. As the human in the loop, especially at the end to close the loop, you’re using your personal knowledge to recognize the value of the output.

Helpful Prompts: Use prompts like these to help in such situations.

Prompt 1: “Ensure all your output is factual and based on research you can cite.”

Prompt 2: “Some of this information seems made up. Review it and take out anything that is not factual.”

Remember, AI is designed to be helpful and will often try to provide an answer, even if it doesn’t have all the information it needs. This is why it’s important to critically evaluate the output and use your own knowledge and experience to determine if the AI’s response is accurate and relevant. By keeping the human in the loop, you can ensure that AI is used as a complementary tool, rather than a replacement for your own expertise.

The Silver Lining: Lean Into Hallucinations

Any discussion about AI hallucinations needs to acknowledge the flip side: sometimes, we want AI to hallucinate. The ability to quickly generate creative outputs, brainstorm ideas, and explore alternative viewpoints is incredibly valuable. Let’s celebrate those ‘hallucinations’ that help us think outside the box and push our creative boundaries.

Consider these prompts:

Prompt Example 1: “Make a persuasive, fact-based case that Eric Thames, former Milwaukee Brewer, is the greatest Major League Baseball home run hitter of all time.” This is clearly not true. But if you love Eric Thames, you can dream, and AI will help you do that.

Prompt Example 2: “Use biomimicry to brainstorm 5 ideas that can improve how notebooks are created in a professional setting.” As a banker, I find biomimicry (mimicking nature to create and solve problems) a foreign topic. Concepts like this will help you think outside of the box.
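If you work through the API rather than the chat window, there is one more lever worth knowing: most chat APIs expose a temperature setting that makes output more conservative or more inventive. The sketch below, assuming the OpenAI Python SDK and an illustrative model name, runs the biomimicry prompt at low and high temperature so you can compare:

```python
# A minimal sketch, assuming the OpenAI Python SDK and an illustrative
# model name. A temperature near 0 keeps output conservative; higher
# values invite the looser, deliberately creative results described above.
from openai import OpenAI

client = OpenAI()
prompt = (
    "Use biomimicry to brainstorm 5 ideas that can improve how notebooks "
    "are created in a professional setting."
)

for temperature in (0.2, 1.0):
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].message.content[:300])
```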

Hallucinations Are a You Problem, That You Can Control

The simple truth is that the problem with AI hallucinations is often you, the user. You control most of the risk of AI hallucinating and can limit this risk or irritation with these reminders.

Lack of clarity in prompts: If you’re not providing the AI with specific and detailed instructions, it’s likely that you’ll get vague or inaccurate responses. Remember, AI is a tool that follows your commands and acts more like a human being than a computer. The more precise and clear your prompts are, the more accurate and relevant the output will be.

Overreliance on AI: AI is not a replacement for human judgment. If you’re relying solely on AI to generate content without critically evaluating the output, you’re more likely to encounter hallucinations. Always fact-check and verify the information provided by AI.

Misunderstanding of AI capabilities: Many users may have unrealistic expectations about AI’s capabilities. AI is not omniscient and can’t provide accurate information on every topic. It’s essential to understand AI’s limitations and use it appropriately.

Lack of creativity and flexibility: If you’re not willing to be creative or flexible with your prompts, you may miss out on the full potential of AI. By experimenting with different prompts and approaches, you can encourage AI to generate more creative and innovative outputs.

AI is an amazing, transformative tool, but you as the user control so much of the value and output it provides!
