Why ChatGPT Is Not a Fact-Checker (And That’s Okay)
One of the most common reasons editors say they don’t like ChatGPT is that it sometimes answers with false information. In a profession where accuracy is so important, these “hallucinations” (the term for when ChatGPT gives incorrect answers) cause alarm and lead some editors to view the AI program as reckless and wholly untrustworthy.
Publishing professionals, take note: ChatGPT is not a fact-checker. It wasn’t designed to be one. And that’s okay. It is still a valuable tool for editors once you understand how to use it within its limitations.

How ChatGPT Works
ChatGPT has the appearance of knowledge, but it doesn’t actually know anything. When you first start using ChatGPT and see it spitting out an encyclopedia entry’s worth of text in seconds, the program seems so technologically advanced that you assume the information must be correct.
At its core, ChatGPT is an AI language model that predicts the next word in a sentence. Rather than presenting you with an encyclopedia entry, it makes a series of guesses based on statistics: it formulates responses to prompts (the questions and instructions users type) based on patterns and statistical probabilities it learned from the vast amount of text it was trained on. It repeats this prediction, word by word, until it has generated a complete response.
ChatGPT doesn’t understand the text it generates the way humans do, and it can’t tell whether the responses it provides are factual. It doesn’t grasp what it means to be factual, only what is most likely.
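To make that concrete, here is a drastically simplified sketch in Python. It is a toy bigram model, not ChatGPT’s actual architecture (which is a massive neural network), but the core loop is the same idea: look at what came before, append the statistically most likely next word, and repeat, with no check for truth anywhere.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the vast text ChatGPT trained on.
corpus = (
    "the king ruled the kingdom . "
    "the king loved the people . "
    "the people loved the king ."
).split()

# Count which word follows which (a "bigram" model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(word, length=8):
    """Build a response by repeatedly picking the most probable next word."""
    output = [word]
    for _ in range(length):
        candidates = next_words[output[-1]]
        if not candidates:
            break
        # Choose the statistically most likely continuation. Nothing here
        # asks whether the resulting sentence is true.
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

print(generate("the"))  # -> "the king ruled the king ruled the king ruled"
```

The real model is incomparably more sophisticated, but the principle holds: every word is a statistical guess, and plausibility is the only yardstick.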
ChatGPT’s Hallucinations Scared Me Too
ChatGPT now does a much better job of telling users that its responses are not always factual. As of this writing, a notice at the bottom of the screen states: “ChatGPT may produce inaccurate information about people, places, or facts.”
In one of my first experiments with the program, I wanted to test its accuracy and asked, “Tell me about Erin Servais.” Its answer was close, but not correct. It said I am a “professional writer, editor, and writing coach” and that I am a “founder of a company called Dot and Dash,” both of which are correct. But it also said I’ve written for Writer’s Digest, I’ve published a book called Need a Writing Retreat?, and I have a master’s degree in creative writing from Miami University: not true, not true, and not true.
If you Googled me, you’d find all these details plausible, but they aren’t factual. I didn’t know then that ChatGPT’s goal was plausibility. If it says something is true, it’s because that statement aligns with the patterns the program has learned, not because it understands the truth of it. So, when I asked the follow-up question “Is all of the bio about Erin Servais you just wrote factually correct?” and it said yes, I was feeling rather concerned for humanity.
Here are the receipts:

When you spend a little more time with ChatGPT, you begin to learn its limitations. I was also using an earlier, free version of ChatGPT, which was much more prone to these hallucinations than version 4, the more advanced, paid version that has been trained on more data.
ChatGPT vs. Early Wikipedia

A parallel can be drawn between ChatGPT and the early years of Wikipedia. Both have been revolutionary in their domains, and they've also had their share of criticism and apprehension.
At first, Wikipedia was criticized for its potential to spread misinformation, similar to the concerns about ChatGPT today. However, over time, with robust editorial guidelines and active community moderation, Wikipedia evolved into a much more reliable source of information (though I still wouldn’t use it as a primary source for fact-checking).
Likewise, while ChatGPT isn't perfect, it's continually improving. In my experience with both ChatGPT version 3.5 (the current free version) and version 4 (the paid version), version 4 is leaps ahead of 3.5 in every way. It’s really not fair or accurate to judge the entire program based on the free version alone. I strongly encourage you to pay for one month of version 4 and try it out for yourself.
Also note that just because hallucinations happen now doesn’t mean they’re a fundamental part of ChatGPT. As the technology improves and the AI model is trained on more and more information, its trustworthiness will also increase.
What Causes ChatGPT Hallucinations
Here are the most common reasons ChatGPT answers incorrectly.
1. Highly Specific Topics
Let’s say you ask ChatGPT to describe the mating habits of a little-known insect species or to list the qualities of a brand-new, incredibly niche subgenre of fiction. It may make mistakes because it did not encounter information about these topics during its training.
2. Long Answers
ChatGPT is also more likely to go off track when it gives a lengthy reply; the longer the response, the more room there is to drift. Here is the example the program gave me when I asked it about this:
Consider you ask ChatGPT to write a long story based on the prompt: "Once upon a time in a faraway kingdom, there was a kind and noble king." Given the length of the narrative, there's a high chance the model might start generating details that are inconsistent or entirely hallucinated. For example, it might begin accurately, maintaining the theme and characters, but midway, introduce an unexpected element, like a spaceship landing in the kingdom. This is a hallucination because the AI has deviated from the initial context, introducing elements that aren't consistent with the original prompt.
3. Statistical Training
Sometimes, ChatGPT simply doesn’t guess correctly. It answers based on statistical probabilities. This means it might occasionally respond with something that is statistically plausible but factually incorrect (as with my bio experiment above).
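Here is a toy illustration of how that can happen, using a handful of made-up “training” snippets and my own bio as the example. If most writer bios the model has seen continue a phrase one particular way, it will reach for that continuation for any writer, whether or not it is true:

```python
from collections import Counter

# Hypothetical snippets standing in for writer bios seen during training.
training_bios = [
    "is a professional writer who has written for Writer's Digest",
    "is an editor and writing coach who has written for Writer's Digest",
    "is a novelist who has written for The Paris Review",
]

# Tally what most often follows "has written for" in the training text.
completions = Counter(bio.split("has written for ")[1] for bio in training_bios)

# Asked about *any* writer, even one it knows nothing about, the model
# reaches for the statistically most common continuation.
most_likely = completions.most_common(1)[0][0]
print(f"Erin Servais has written for {most_likely}")
# -> "Erin Servais has written for Writer's Digest" (plausible, not factual)
```

That, in miniature, is how my fabricated bio came about: each claim was the kind of thing that usually follows in sentences like mine.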
Traditional Fact-Checking Skills Still Apply
A good fact-checker knows you rarely use one source to verify information. You double- or triple-check facts. I often go back to the adage I learned in journalism school: “If your mother says she loves you, check it out.” Even when one reference says something is true, look it up somewhere else, both just in case and as a best practice.
If or when ChatGPT’s trustworthiness as a fact-checking resource improves, you should still search for the information in other places to confirm it, as you would with Wikipedia.
While it might seem disappointing that ChatGPT is not a fact-checker, it's important to recognize it was never designed to be one. It was designed to respond to queries with human-like text, and it does. Plus, ChatGPT's inability to fact-check actually underscores how vital humans still are in the publishing process.
By understanding what ChatGPT is and isn't capable of, we can more effectively harness its potential while appreciating the value of human expertise. As we move forward in this AI-driven age, we'll find that it's not about AI replacing us; it's about how we can work together with AI to create a future where technology and humanity coexist to bring out the best in each other.
After all, it's not about creating machines that can do everything we can do, but about creating tools that enable us to do what we do—even better.
Want to learn more about ChatGPT?
Sign up for the ChatGPT for Editors course.

Erin Servais is an editor, educator, and community builder. She founded the Editors Tea Club, an online space where editors gather for education and support, and she offers editor coaching through her company, Dot and Dash.
In her fifteen years in publishing, she has helped to bring hundreds of titles to print and has presented about editing and entrepreneurship across the United States and Canada. Erin serves on the board of directors for ACES: The Society for Editing.
She always tells ChatGPT please and thank you, just in case.
Email Erin: Erin@aiforeditors.com