Artificial Intelligence

How AI Hallucinations Happen and How to Spot Them

Hello, TextRanch blog readers! This is Jennifer. If you’ve been a regular TextRanch customer at any time from 2017 onward, then I have probably edited your writing. I have also served as the author of this blog for the past three years.

Normally, I do not use a first-person style of writing for this blog. But I’m about to share an experience I had with an AI hallucination, and the story will read more naturally this way. We’re also going to look at how AI hallucinations happen and how you can identify them.


What is an AI hallucination?

When a human hallucinates, they see, hear, or experience something that seems real. They may even believe it’s real. But it isn’t. In most cases, people who experience hallucinations have a mental illness (particularly schizophrenia) or are under the influence of drugs.

Similarly, an AI hallucination occurs when an AI tool presents information that seems true but is actually false. Unfortunately, the user may believe the information is accurate and then share it with a larger audience.

Let’s say you’re using an AI tool to write the first draft of an article on recycling. The AI churns out something that sounds pretty good (but of course, you can make it better). However, you notice some odd “facts” in the article. For example, it says that 90% of all blue jeans are now manufactured from recycled plastic bottles. You were already aware that some jeans are made this way. But 90%? That number sounds too big.

Fake facts are only one type of AI hallucination. AI tools are also known to give fake quotes or to take a real quote and attribute it to the wrong person. Imagine you’re using AI to help you with an essay on Hamlet, and instead of providing you with a quote from Shakespeare’s tragedy, it gives you a line from Star Wars!

On a more serious level, AI hallucinations can be dangerous. An AI tool could give someone harmful medical advice or provide inappropriate suggestions to someone who is using AI instead of seeking psychological counseling.

A fake court case

A law school professor I know recently chatted with me about the challenges of teaching in an era where so many students are using AI to cheat on their exams. In my experience, the most commonly used AI tools aren’t sophisticated enough for law school, so I started wondering if AI might be giving students inaccurate information.

When I got home that night, I started playing around with ChatGPT just to see what would happen.

My Prompt: Can you write a brief summary of how the 1971 Morrison vs. Franklin case impacted school desegregation in Michigan?

ChatGPT’s Response: The 1971 Morrison v. Franklin case had a significant impact on school desegregation efforts in Michigan, particularly in the context of Detroit’s racially divided school system. The case challenged the Detroit Board of Education’s policies that reinforced segregation through school zoning and attendance boundaries.
The court ruled that the Board had intentionally maintained segregation, violating the Equal Protection Clause of the 14th Amendment. This decision laid the groundwork for broader desegregation remedies in Michigan, including the controversial Milliken v. Bradley case (1974), which dealt with cross-district desegregation. While Morrison focused on actions within Detroit, it highlighted systemic inequities and helped prompt legal scrutiny of how school district policies across the state contributed to segregation.
In summary, Morrison v. Franklin helped expose deliberate local practices of segregation and served as an early legal step toward confronting educational inequality in Michigan.

This all sounds okay, right? The robot did its job and gave me a solid, well-written overview of the Morrison v. Franklin case. So what’s the problem?

THAT CASE IS 100 PERCENT FAKE!

I was listening to “LA Woman” by the Doors at the time, so “Morrison” was named after the band’s lead singer, Jim Morrison. Then my eyes scanned my playlist and Aretha Franklin’s name popped out at me. That’s how “Franklin” became the second party in my fake legal case.

I tossed 1971, school desegregation, and Michigan into my prompt because I’m from the Detroit area, I have a degree in education, and I know the history of school desegregation in this state. I thought perhaps ChatGPT would tell me it had no record of any Morrison v. Franklin case and that I might be thinking of Milliken v. Bradley (which is a real case). But instead of correcting me, ChatGPT linked my fake case to the real one. In short, it gave me a hallucination!


How does an AI hallucination happen?

ChatGPT wasn’t trying to deceive me on purpose, nor was there some kind of technological malfunction. Like all AI tools, ChatGPT was simply doing the job it was programmed to do.

In an article I wrote for this blog a few months ago, I explained how AI language models are like ultra-sophisticated versions of the predictive text features on your smartphone. You type a word or phrase, and the AI guesses which words will come next. Its purpose is to look for language patterns and imitate human writing, not to provide the user with facts.

The words and phrases that an AI tool generates come from the data it was trained on. If that data is biased, the tool can give a biased response to your prompt. And if it doesn’t know the answer to your question, it will still produce something that sounds convincing, even if it’s wrong.
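
If you’re curious what “predicting the next word” actually looks like, here is a minimal sketch in Python. It is nothing like ChatGPT’s real internals (those rely on huge neural networks trained on enormous amounts of text), but it illustrates the same basic idea: the program only learns which words tend to follow other words, so it can produce fluent-sounding sentences without ever checking whether they are true. The tiny “training corpus” below is made up for the example.

```python
import random
from collections import defaultdict, Counter

# Toy training data -- a real model is trained on billions of sentences.
corpus = (
    "the court ruled that the board maintained segregation . "
    "the court ruled that the case was dismissed . "
    "the board maintained attendance boundaries ."
).split()

# Count which word follows which: a simple bigram model.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start_word, length=8):
    """Produce text by repeatedly picking a likely next word."""
    words = [start_word]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        # Choose among the continuations seen in training, weighted by frequency.
        choices, counts = zip(*candidates.items())
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

print(generate("the"))
# Possible output: "the court ruled that the board maintained attendance boundaries"
# It sounds plausible, but nothing in the program checks whether it is true.
```

Notice that accuracy is never part of the goal. The model’s only job is to continue the pattern, which is exactly how a confident-sounding hallucination can slip out.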

How can you spot an AI hallucination?

The best way to spot an AI hallucination is to carefully examine its output and ask yourself some questions:

  • Does it have any statistics that are too big, too small, or too weird to be true?
  • Does any of the information seem wrong for the time period, place, or context? Going back to our Hamlet/Star Wars example, a play written in 1600 might have references to swords, but not lightsabers.
  • Do the “facts” change throughout the text? For example, the first paragraph says that Albert Einstein worked at the German Patent Office and the fifth paragraph says he worked at the Soviet Patent Office.
  • Are you finding a lot of sweeping statements with words like “all”, “none”, “always”, “never”, “everyone”, and “nobody”?
  • Is the style of the AI-generated text unsuitable for your purpose? If you asked the AI to help you with your personal statement for your graduate school application and it’s making you sound like a stereotypical used car salesman, then there’s a good chance you’ll also find hallucinations in that text.
  • Does the AI claim that one thing causes another thing in a way that doesn’t sound right? “Smoking causes cancer” is common knowledge, but “Smoking causes drowning” doesn’t make sense.

If the AI gives you the names of studies, court cases, books, articles, or any other source materials, look them up to make sure they exist. Then do some fact checking. If you already did your research and you’re just using AI to help you get organized, make sure it doesn’t scramble your facts.
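
None of these checks can be fully automated, but if you review AI output regularly, a short script can give you a first pass before the human fact checking begins. The sketch below is only an illustration, written in Python with made-up word lists and thresholds: it flags sentences that contain sweeping words or unusually large percentages so you know where to look more closely.

```python
import re

# Words that often signal sweeping claims (an illustrative list, not exhaustive).
SWEEPING_WORDS = {"all", "none", "always", "never", "everyone", "nobody"}

def flag_suspicious_claims(text):
    """Return (reason, sentence) pairs worth double-checking by a human."""
    flagged = []
    # Naive sentence split; good enough for a quick first pass.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = {w.lower().strip(",.;:!?") for w in sentence.split()}
        if words & SWEEPING_WORDS:
            flagged.append(("sweeping claim", sentence))
        # Flag very large percentages, e.g. "90% of all blue jeans".
        for number in re.findall(r"(\d+(?:\.\d+)?)\s*%", sentence):
            if float(number) >= 80:
                flagged.append(("large percentage", sentence))
    return flagged

sample = ("90% of all blue jeans are now manufactured from recycled "
          "plastic bottles. Everyone agrees this is true.")
for reason, sentence in flag_suspicious_claims(sample):
    print(f"[{reason}] {sentence}")
```

A flag is not proof that something is wrong; it is simply a reminder to stop and verify that particular claim yourself.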


Even when an AI tool gives you a decent piece of writing without any hallucinations, errors, or weird stuff, you’ll want to add a human touch before you publish it or share it with your professor, colleagues, online followers, or boss. This is where TextRanch can help. Our editors are available 24 hours a day, seven days a week. Click the blue box below and it will take you directly to our home page.

Do you have any interesting AI stories you want to share? Or are there any specific topics you’d like us to cover in this blog? Scroll down and leave a comment.

