5 Prompting Checklists to Curb AI Hallucinations: Ask Questions Like a QA Engineer

AI is advancing at an incredible pace, and it's changing our lives in ways we never thought possible. But as soon as you start using AI, you'll inevitably run into a phenomenon known as "hallucination." This is when an AI fabricates information that sounds plausible but is partly or entirely false.

Hallucination can feel like a software bug, and a user who is repeatedly given inaccurate information can easily feel deceived. An AI doesn't intentionally try to trick you, however; hallucination is an inherent limitation of current AI models. It's our job to get accurate answers from an AI by asking the right questions. Just as a QA (Quality Assurance) engineer learns to find and fix bugs, you can learn to verify an AI's responses and reduce these "bugs."

Here are five prompting checklists to help you curb AI hallucinations.


1. Demand Clear Evidence

An AI has been trained on a vast ocean of data but doesn’t remember the source of every piece of information. Therefore, the most reliable way to get a factual answer is to demand that the AI provide its sources.

  • Prompt Example:

"Write a review of Apple's Vision Pro. Be sure to reference content from at least two reputable sources like CNN, The Verge, or the New York Times, and cite the source of each piece of information as a footnote."
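If you find yourself demanding evidence often, it can help to bake the requirement into a reusable template rather than retyping it. Below is a minimal sketch of such a helper; the function name `with_sources` and the footnote wording are just illustrative choices, not part of any library.

```python
def with_sources(task, sources, minimum=2):
    """Wrap any task prompt in a source-citation requirement.

    task: the base instruction, e.g. a review or summary request.
    sources: names of reputable outlets the model should draw from.
    minimum: how many distinct sources to require.
    """
    names = ", ".join(sources)
    return (f"{task} Be sure to reference content from at least {minimum} "
            f"reputable sources such as {names}, and cite the source of "
            f"each piece of information as a footnote.")
```

For the example above, `with_sources("Write a review of Apple's Vision Pro.", ["CNN", "The Verge", "the New York Times"])` reproduces the full prompt, and the same template works for any other fact-heavy task.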

2. Ask Questions in Stages

Rather than asking for everything at once, it's better to break down a complex problem into smaller, more manageable questions. Simplifying the problem clarifies the AI’s thought process and drastically reduces the chances of hallucination.

High Hallucination Risk: "What will the price of Samsung stock be in 2025?"

The Problem: An AI cannot predict future stock prices. It will likely hallucinate a plausible-sounding number and present it as a fact.

  • Staged Prompt (To prevent hallucination):

Step 1: "Tell me about the trend in Samsung's stock price over the past year." 

Step 2: "Analyze the projected earnings for Samsung's third quarter of 2025." 

Step 3: "Based on the information above, explain what experts are forecasting for Samsung's future stock price."
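The staged approach above can be automated: run each prompt in order and prepend the earlier question-and-answer pairs as context, so the final question is grounded in what the model already said. This is a sketch, assuming a hypothetical `ask_model` function that you would replace with a wrapper around your chat API of choice.

```python
def staged_ask(ask_model, steps):
    """Run prompts in order, feeding each prior Q/A pair into the next prompt.

    ask_model: hypothetical callable that takes a prompt string and
        returns the model's answer string (plug in your own API wrapper).
    steps: list of prompt strings, ordered from groundwork to final question.
    Returns the list of answers, one per step.
    """
    transcript = []  # accumulated (question, answer) pairs
    answers = []
    for step in steps:
        # Build context from everything answered so far, then ask the next step.
        context = "".join(f"Q: {q}\nA: {a}\n" for q, a in transcript)
        answer = ask_model(context + step)
        transcript.append((step, answer))
        answers.append(answer)
    return answers
```

With the three Samsung prompts above passed in as `steps`, the final "Based on the information above" question literally receives the two earlier answers as its context instead of relying on the model to invent them.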

3. Don't Settle for 'Yes/No,' Demand an Explanation

Questions that can be answered with a simple "yes" or "no" invite a confident guess rather than reasoning. By asking the AI to explain its reasoning instead, you force it to engage in deeper processing and produce an answer you can actually check, which leads to a more reliable response.

  • Ineffective Prompt: "Is electromagnetic radiation harmful to the human body?"
  • Effective Prompt: "Explain the effects of electromagnetic radiation on the human body. Include any relevant scientific research and the opinions of experts."

4. Practice Cross-Checking

Just as a QA engineer tests a single function in multiple ways, you should verify an AI's response from more than one angle. Try rephrasing the same question, or ask the AI to support its previous statement with evidence.

  • Prompt Example:

Initial Question: "What are the three most notable trends in generative AI technology right now?"

Follow-up Question: "Aside from the trends you just mentioned, what are some other key trends in the generative AI market in the third quarter of 2025?"
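Cross-checking can also be scripted: ask the same question phrased two ways and flag anything that appears in only one answer for manual verification. The sketch below assumes a hypothetical `ask_model` wrapper and assumes you have instructed the model to answer as a comma-separated list; both are illustrative choices, not a real API.

```python
def cross_check(ask_model, question, rephrasing):
    """Ask the same question two ways; return items only one answer mentions.

    ask_model: hypothetical callable returning a comma-separated list of
        items (you would instruct the model to answer in that format).
    Items named in both answers are mutually confirmed; the rest need
    a human fact-check.
    """
    first = {item.strip().lower() for item in ask_model(question).split(",")}
    second = {item.strip().lower() for item in ask_model(rephrasing).split(",")}
    return sorted(first ^ second)  # symmetric difference = unconfirmed items
```

An empty result means both phrasings agreed; a non-empty one is exactly the list of claims a QA engineer would re-test before trusting.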

5. Use Negative Questions

An AI tends to put more weight on positive information. By occasionally asking "What is not X?" or "What are the weaknesses of X?", you can prompt the AI to fill in gaps and provide a more comprehensive view.

  • Prompt Example:

"What are the disadvantages of the latest AI smartphone compared to older models? Be specific about its weaknesses, such as battery life or cost."

AI is a powerful tool, but the ultimate responsibility lies with the user. By using these five checklists, you can meticulously verify AI's responses and turn it into a more reliable and trustworthy partner.
