
Sam Altman says “don’t trust ChatGPT—it hallucinates.” Here’s what that actually means for everyday users

Even OpenAI’s CEO says don’t trust ChatGPT too much — here’s how to keep its creative magic while sidestepping the hallucinations that could torpedo your credibility.

I was nursing a lukewarm oat-milk latte and casually doom-scrolling when a headline made me sit up straight: OpenAI CEO Sam Altman had just told users not to trust ChatGPT “that much” because, in his words, the model “hallucinates.”

The quote came from the first episode of OpenAI’s official podcast and was picked up by tech outlets worldwide.

Altman admitted he finds it “interesting” that people place such a high degree of trust in a system that can fabricate facts, adding bluntly that it “should be the tech you don’t trust that much.”

Those remarks landed with a thud in my writer-brain; so much of my workflow now leans on large language models (LLMs) for outlines, summaries, even grocery-list poetry when I’m avoiding real work.

If the guy who runs the place is waving a red flag, what should the rest of us be doing differently?

Hallucination 101: when AI dreams out loud

“Hallucination” sounds like a psychedelic party trick, but in AI it simply means generating confident-sounding answers that are flat-out wrong.

Imagine you ask for the capital of Australia and the system declares, “That’s Sydney, of course—here’s a detailed history!” Charming, detailed, and dead wrong: the capital is Canberra.

Under the hood, these models predict the next word based on patterns in their training data — they don’t know truth the way we do.

If the data are sparse, contradictory, or absent, the model will guess—often elaborately—rather than shrug. That guessing is helpful when you want creative metaphors, less so when you’re drafting an HR policy or diagnosing a rash.
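
If you’re curious what “predict the next word” looks like in its simplest possible form, here’s a toy sketch in Python. To be clear, this is nothing like ChatGPT’s real internals, and the tiny “training corpus” is invented for illustration; it just shows how a model that only counts word patterns keeps generating something even when it has nothing solid to go on.

```python
import random
from collections import Counter, defaultdict

# A deliberately tiny "training corpus", invented purely for illustration.
corpus = (
    "the capital of france is paris . "
    "the capital of italy is rome . "
    "the capital of australia is canberra . "
    "sydney is a big city in australia . "
).split()

# Count which word tends to follow which (a bigram model: the simplest
# possible version of "predict the next word from patterns in the data").
next_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_counts[current][following] += 1

def generate(start_word, length=8):
    """Repeatedly pick a plausible next word; guess blindly when the data run out."""
    words = [start_word]
    for _ in range(length):
        counts = next_counts.get(words[-1])
        if counts:
            # Sample in proportion to how often each continuation was seen.
            options, weights = zip(*counts.items())
            words.append(random.choices(options, weights=weights)[0])
        else:
            # Sparse or missing data: the model still produces *something*,
            # which is the toy version of a hallucination.
            words.append(random.choice(corpus))
    return " ".join(words)

print(generate("capital"))
```

The part worth noticing is the fallback branch: when the model has thin or no data, it still answers. Scale that reflex up a few billion parameters and you get a confident, detailed, wrong capital city.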

Why we trust anyway

Psychologists call it “automation bias”: the tendency to overvalue machine output, especially when it’s packaged in a slick interface.

Add fluency bias — our brain equates confident language with accuracy — and you’ve got the perfect storm.

Altman’s warning highlights that the better the text sounds, the more our guard drops. I’ve caught myself accepting citations that looked legitimate, only to discover they linked to articles that never existed.

The prose was persuasive — my critical radar had hit snooze.

The hidden costs of blind reliance

A few mis-attributed fun facts in a blog post might seem harmless, but scaling that error across legal briefs, healthcare guidance, or financial reports is a recipe for real-world fallout.

Last year, two New York attorneys were sanctioned for submitting a brief peppered with imaginary case law generated by an LLM.

A family-practice doctor told me she now double-checks every drug-interaction summary spit out by AI after spotting a dangerous fictitious contraindication.

Small stakes? Maybe.

Cumulative stakes? Enormous.

How I audited my own AI habits

Altman’s quote pushed me to run a one-week experiment.

I logged every task where I leaned on ChatGPT: drafting newsletter blurbs, summarizing research papers, brainstorming trail-run playlists. Then I manually verified each output.

Result?

A 15-percent hallucination rate.

Most errors were subtle — misdated studies, conflated authors — but they would have slipped into published work had I not checked.

Interestingly, the hallucination rate spiked to 30% when I asked for niche statistics or quotes. That makes sense: the rarer the data, the shakier the model’s footing.

Practical guardrails for everyday users

Before we get tactical, a quick note: these suggestions assume you still want to use AI—it is, after all, an incredible accelerator.

The goal is right-sized trust, not technophobia.

  1. Narrow the prompt scope. Ask for ideation, outlines, or style rewrites—tasks where creativity trumps precision. Avoid relying on ChatGPT for factual claims without secondary confirmation.
  2. Cite-and-verify loop. Whenever the model offers a statistic or URL, open it. No citation? Ask explicitly for one—then open those too. If a source doesn’t load or reads like spam, bin it.
  3. Introduce a temporal filter. LLM training cuts off at a snapshot in time. If you request “latest GDP numbers,” you’re begging for hallucination. Phrase time-sensitive questions as, “Based on data available up to [date]…” and be prepared to update from a live source.
  4. Run parallel queries. Feed the same prompt into two different LLMs (e.g., ChatGPT and Claude) and compare; a rough script for this follows the list. Divergence often flags areas to fact-check manually.
  5. Treat medical, legal, and financial advice as first drafts only. Experts still need to sign off. The model can outline questions to ask your CPA, but shouldn’t file your taxes unsupervised.
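
If you’re comfortable with a little Python, guardrail 4 is easy to script. The minimal sketch below sends one prompt to both the OpenAI and Anthropic APIs and flags obvious divergence; it assumes you’ve installed both SDKs, set OPENAI_API_KEY and ANTHROPIC_API_KEY in your environment, and that the model names shown are still available (swap in whatever you actually use).

```python
# pip install openai anthropic
import anthropic
from openai import OpenAI

PROMPT = "In one sentence: what year was the first Tour de France held?"

# Ask ChatGPT. The model name is an assumption; substitute the one you use.
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
gpt_answer = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

# Ask Claude the same question. Again, the model name may need updating.
claude_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
claude_answer = claude_client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=200,
    messages=[{"role": "user", "content": PROMPT}],
).content[0].text

print("ChatGPT:", gpt_answer)
print("Claude: ", claude_answer)

# Crude divergence flag: if the two answers share very few words,
# treat that as a signal to check a primary source yourself.
overlap = set(gpt_answer.lower().split()) & set(claude_answer.lower().split())
if len(overlap) < 5:
    print("The answers diverge noticeably; fact-check before using either one.")
```

A crude word-overlap check like this won’t catch two models confidently hallucinating the same thing, which is why the cite-and-verify loop in guardrail 2 still matters.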

How companies are baking in friction

Several platforms now surface confidence scores or highlight sentences likely to be hallucinated. Some law-firm tools embed Shepard’s Signals so any case law generated is instantly checked against real databases.

Microsoft’s Copilot marks citations with purple ribbons; click to verify before copy-pasting into that board memo.

Altman’s plea aligns with this trend: product design is shifting from frictionless output to verifiable output.

In other words, good AI UX may start to feel slower because it asks you to pause and confirm.

Counter-intuitive? Yes.

Necessary? Absolutely.

A quick word on the “expert quote” trap

One sneaky hallucination genre is the fabricated expert quote.

The model knows a sentence will sound smarter if it begins, “As psychologist Angela Duckworth once said…” so it obliges, pulling thematic but fictitious words from the ether.

I reached out to Duckworth’s team about a quote on grit and remote work that ChatGPT had attributed to her. Not hers.

The PR rep sighed, “We get these weekly.”

Lesson: when citing real humans, verify directly from their books, interviews, or academic papers, not from an LLM snippet.

The future: AI as co-pilot, not captain

Altman’s phrasing — “tech you don’t trust that much” — might sound like marketing suicide, yet it’s a blueprint for sustainable adoption.

Airplane autopilot didn’t replace pilots — it offloaded repetitive tasks.

Word processors didn’t kill editors — they eliminated Tipp-Ex fumes.

ChatGPT can draft email first passes, generate alternative phrasings, or explore arguments you hadn’t considered. But the human stays in the loop, steering, checking, and, when necessary, overriding.

My new workflow in three quick beats

Research phase → Ask ChatGPT for an outline and key counterpoints, then fetch primary sources myself.
Draft phase → Use the model to tighten topic sentences or vary transition words.
Fact-check phase → Manually cross-verify every claim, number, and citation, marking each in a separate color so omissions pop visually.

The entire cycle adds maybe ten minutes to a 1,000-word article — less than the time I’d waste correcting a single embarrassing factual error after publication.

Final thoughts

When Sam Altman says not to trust his own product blindly, I hear both a warning and an invitation.

The warning: unchecked enthusiasm plus hallucinating language models equals misinformation at scale.

The invitation: treat the tool as a brainstorm buddy, not a crystal ball.

Used that way, ChatGPT can speed creative flow, sharpen clarity, and free us from blank-page paralysis—without turning our drafts into fantasy fiction.

And if the tool ever offers to wire a mild electric shock just to keep things interesting? Well, I’ve learned enough this year to politely decline.


Avery White

Formerly a financial analyst, Avery translates complex research into clear, informative narratives. Her evidence-based approach provides readers with reliable insights, presented with clarity and warmth. Outside of work, Avery enjoys trail running, gardening, and volunteering at local farmers’ markets.
