Hallucinations are an often-cited reason for avoiding this amazing technology. Before they even begin, this concern becomes an immediate turnoff for potential users. I’m here to tell you that for most users this concern is mostly overblown. With a little insight and thoughtfulness, you can control when AI hallucinates, or at least limit the likelihood of it happening.
I often see this criticism from untrained or inexperienced users. Their approach opens the door for hallucinations from the start: unsure of what prompts to begin with, they start really basic while giving AI wide latitude to respond. Or they believe AI holds unlimited information and ask it to complete tasks that aren’t possible. With AI’s eagerness to follow directions, it responds anyway, making information up.
We’ll explore some common themes behind why this happens, which will help your writing and make the editing and review process even easier.
The term “hallucination” in AI can bring a sense of apprehension. It’s important to understand that in many writing contexts, such as blogging, emailing, or creative writing, the emphasis isn’t always on factual accuracy but on creativity and expression. In these scenarios, AI’s imaginative stretch can be a positive, especially with clear direction through good prompts. Having a clear sense of the output you want to create lets you dial the potential for hallucinations up or down.
Example of Hallucination Prone Scenario: Asking ChatGPT to write a factual article on a niche scientific topic can lead to inaccuracies, as it might extrapolate beyond its training data. By definition, many research topics don’t have great data. They’re literally being researched. You can quickly exhaust the information AI can find, and to meet your request it may hallucinate. In situations like this, it’s helpful to explicitly ask the AI not to make up information for the sake of completing the request.
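If you send prompts programmatically, that instruction is easy to make a habit. Here is a minimal sketch of the idea; the helper name and the exact wording of the guard are my own, not part of any API:

```python
def add_no_fabrication_guard(prompt: str) -> str:
    """Append an instruction discouraging the model from inventing facts.

    This is a hypothetical helper illustrating the technique; tune the
    wording to your own use case.
    """
    guard = (
        "If you cannot find reliable information to answer part of this "
        "request, say so explicitly instead of making anything up."
    )
    return f"{prompt.strip()}\n\n{guard}"

# Example: a niche research topic where data may run out quickly.
prompt = add_no_fabrication_guard(
    "Write a short factual summary of recent research on deep-sea microbial ecosystems."
)
```

The same guard sentence works just as well typed directly into a chat window.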
Example of Thoughtful Direction: Using ChatGPT for a creative storytelling session, where the goal is to explore imaginative narratives rather than sticking to rigid facts. Writing a whimsical bedtime story for your kids is easy with AI, and a welcome opportunity for hallucinations. You may literally want to tell AI to hallucinate to fuel its creativity.
The key here is to understand and leverage AI’s creative potential in contexts where precision is not the priority. Letting AI’s imagination run wild can fuel your creative process, adding a unique flavor to your content.
ChatGPT’s responses are highly dependent on the input and prompts it receives. A vague prompt can lead AI down a rabbit hole of creativity, sometimes resulting in outputs that are far from expected or realistic. This is where the art of crafting prompts comes into play. By providing clear and specific instructions, you can significantly influence the direction and quality of AI-generated content. You’re essentially directing AI to not hallucinate through clear instructions.
Example of Hallucination Prone Scenario: A general prompt like “Write about Ben and Amanda’s vacation” might result in ChatGPT creating an extravagant, unrealistic narrative, perhaps involving space travel or time loops. You’ve just given AI a lot of freedom to “think,” and the results may or may not impress you.
Example of Thoughtful Direction: A more specific prompt, such as “Write about Ben and Amanda’s hiking trip in Colorado, include thoughts about the Rocky Mountains, make it engaging and exciting, and talk about how tired we were at the end of the day,” helps ChatGPT generate content that is more aligned with real-world scenarios. You’ve provided more concrete direction, direction that sets guidelines and a roadmap for it to work with.
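If you build prompts in code, you can make this specificity systematic by assembling the prompt from concrete details instead of a one-liner. A minimal sketch, with a hypothetical helper of my own invention:

```python
def build_story_prompt(subject: str, location: str, details: list[str], tone: str) -> str:
    """Assemble a specific prompt from concrete details.

    Hypothetical helper: the structure (subject, location, details, tone)
    is just one reasonable way to force specificity into a prompt.
    """
    detail_lines = "\n".join(f"- {d}" for d in details)
    return (
        f"Write about {subject} in {location}.\n"
        f"Tone: {tone}.\n"
        f"Be sure to include:\n{detail_lines}"
    )

# The vague "vacation" prompt, rebuilt with concrete direction.
story_prompt = build_story_prompt(
    subject="Ben and Amanda's hiking trip",
    location="Colorado",
    details=[
        "thoughts about the Rocky Mountains",
        "how tired we were at the end of the day",
    ],
    tone="engaging and exciting",
)
```

Each detail you add narrows the space the model can wander into, which is exactly what keeps the space travel and time loops out.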
Being thoughtful about good prompts, good follow-up prompts, and finding a direction in your writing will drastically limit unwelcome hallucinations.
Recognizing the limitations of AI (and this can take time and practice) is a crucial aspect of effectively utilizing it. Overestimating what AI can do, or asking it to perform tasks beyond its knowledge, often leads to hallucinations. This understanding is key in setting realistic expectations and achieving useful outcomes from your AI interactions.
Example of Hallucination Prone Scenario: Requesting ChatGPT to list 5000 restaurants in a town that only has 50 is a surefire way to get a list filled with fictional entries.
Example of Thoughtful Direction: Conversely, asking for a list of the top 10 most popular restaurants in the same town is a more realistic and practical use of AI’s capabilities.
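The fix is to cap your request at what plausibly exists before you send it. Here is a minimal sketch of that sanity check; the helper and its parameters are illustrative assumptions, not any real API:

```python
def bounded_list_request(category: str, town: str, requested: int, known_total: int) -> str:
    """Cap a list request at what plausibly exists.

    Hypothetical helper: asking for more entries than exist pushes the
    model to pad the list with fictional ones, so we clamp the count.
    """
    count = min(requested, known_total)
    return f"List the top {count} most popular {category} in {town}."

# Asking for 5000 restaurants where roughly 10 notable ones exist.
request = bounded_list_request("restaurants", "a small mountain town", 5000, 10)
```

The clamp is trivial, but it encodes the habit this section is about: let your own knowledge of the real world set the ceiling on the request.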
The takeaway here is clear: by pairing your own knowledge with reasonable requests, you won’t push AI to fabricate answers simply because you asked.
What we often forget is that we’re saving tons of time by using AI. But it’s human nature to not want to redo work. Redo it; you’ll still be ahead of the game. You may need to redo it and iterate on a topic a few times to find the outcome you’re looking for. Getting five, ten, or even more prompts deep into a conversation will help you redirect where necessary and create some amazing work.
The importance of human engagement, at the beginning of your process and the end, cannot be overstated. Always proofread and fact-check AI-generated content, especially in scenarios where accuracy and credibility are important. By recognizing AI’s limitations and employing a thoughtful approach to its use, you can minimize the risk of errors and find amazing results.
The best, and easiest, advice is to write thoughtful prompts with clear direction. This probably means you’re making multiple requests with different variables. More simply, your request should be multiple sentences, or even multiple paragraphs or steps. Greater direction upfront gets you to the amazing results you expect and need more quickly.