Fake doctors, real lies: how AI is spreading dangerous health misinformation and selling unproven supplements

AI didn’t invent snake oil, but it just made selling it a whole lot easier.

The other night I was chopping garlic for dinner when a video slid onto my feed. A well-known British professor was calmly explaining a “new menopause symptom” I’d never heard of and recommending a probiotic with fancy herbs. It looked legit: the lighting, the subtitles, the confident tone. I almost saved it to read later.

Thirty seconds in, something felt off. The mouth didn’t quite sync. The voice was a little too smooth. I clicked away and went looking. What I found wasn’t just one sketchy clip. It was a pattern that’s now been documented by fact checkers and journalists: AI deepfakes using the faces and voices of real doctors to push health lies and funnel people toward unproven supplements. The Guardian covered the trend on December 5, 2025, tying it to a wave of videos across TikTok, Facebook, X, and YouTube. The common thread was the use of real medical authority to sell products and plant doubt about evidence-based care.

Here’s the root of the story

The UK fact-checking charity Full Fact investigated a network of accounts that took real footage of doctors and academics and turned it into AI-generated endorsements. Their report walks through how one professor’s talk was manipulated into clips about menopause, complete with a sales pitch and a link to a U.S. supplements site. By the time TikTok permanently banned one key account, a single video had racked up more than 365,000 views. Full Fact documented similar fakes using other experts and found the pattern across platforms. They also traced repeated links to the same supplement seller and discussed how affiliate marketing likely greased the wheels.

This is not a niche problem. The BMJ flagged the risk in 2024, warning that deepfake doctor clips were already fooling viewers and nudging them toward expensive pills dressed up as cures. That early signal reads like a preview of what exploded in 2025. The takeaway isn’t that supplements are evil. The point is that synthetic endorsements borrow trust they haven’t earned, and they do it in a space where people feel vulnerable and want relief.

What this looks like when you scroll

Imagine you’re up late with hot flashes or joint pain, or, in my season of life, rocking a teething toddler back to sleep. You’re tired and you want answers you can act on. The video shows a familiar doctor. The script hits your symptoms. The recommendation sounds natural. Then a link. Tap, buy, hope. I get the appeal because I’ve been there, not with menopause yet, but with stress, sleep, and postpartum recovery. We want shortcuts when we’re stretched thin.

The Full Fact team shows how these clips are built from real talks, podcasts, and conference appearances. The result is a “doctor” who looks and sounds like the person you trust but says things that person never said. Sometimes it’s a wild claim like a made-up condition. Sometimes it’s a list of herbs with big promises. In one case, TikTok initially said the content didn’t break rules, then reversed course and took it down after media pressure. That lag matters because speed is everything in feeds.

Why this tactic works so well

We believe faces. We trust familiar names. We’re wired to accept advice from people who look like they know what they’re doing. Deepfakes exploit these shortcuts. They also exploit how platforms reward watch time and engagement. A video that makes you curious or worried spreads faster than a sober infographic, and anyone who has ever paused on a dramatic health claim knows how that algorithm responds.

There’s another layer. The checkout path is frictionless. An affiliate link placed neatly in a bio turns attention into money within seconds. As noted by Full Fact, the affiliate structure helps explain why the same products kept appearing under different accounts that posted deepfaked clips: people can get paid for clicks and conversions even if the seller claims no official connection.

What experts are saying

“People who know me could have been taken in by it,” said Duncan Selbie, former head of Public Health England, after seeing a deepfake of himself. “It wasn’t funny in the sense that people pay attention to these things.”

His reaction captures the harm beyond sales. When real names get tied to fake claims, trust erodes for everyone involved, including a public that will be more skeptical of genuine advice the next time it comes along.

The BMJ put it plainly last year: reliable evidence on how convincing deepfakes are is still emerging, but doctors are already being used as puppets in ads that promise the world. That uncertainty is the danger. When we don’t know how often people are fooled, we risk underreacting while the scammers perfect their craft.

And here’s a line from Full Fact’s reporting that I keep thinking about: “These deepfakes represent a public health threat.” That quote comes from Stanford’s Dr Sean Mackey, who himself was impersonated. He’s not being dramatic. If a believable fake nudges someone to ditch a needed therapy or swallow something that interacts with their meds, that’s not a small online hiccup. That’s risk.

What this means for how we consume health content

I write about self development because our choices compound. Health choices do too. The easiest thing is to tap the thing that promises relief. The stronger thing is to pause and cross-check.

Here’s what I practice now, and what I’m teaching my brain to do when a health clip appears. First, I check the account behind the video and look for the setup. Is the account new, with a random handle and no consistent identity? Are there many posts promoting the same product in slightly different voices, or sudden jumps between accents? Full Fact’s examples include those tells.

Second, I search for the exact quote outside the platform. If a famous doctor appears to be endorsing a capsule, I look for a statement on their university page or personal site confirming it. Most serious experts leave a trail. Silence is a sign to slow down.

Third, I check the claims against independent sources. The Guardian piece threads together different cases and shows how easy it is for a fake to ride on a real name. If the clip says a product will “cure” menopause symptoms, that’s already a red flag. And if there’s a new condition you’ve never heard of that supposedly all women have, that’s a pause-and-verify moment.

Finally, I weigh the risk. Supplements can be helpful in specific contexts, and I’m not anti-supplement at all. My circle swings plant-based, and we talk about B12 and iron the way some people talk about sports. But the more a product promises across many unrelated problems, the more cautious I get. The simplest test is to ask your real doctor or pharmacist. That step used to feel inconvenient to me. Now it feels like part of being a responsible adult.

Platforms and policy still need to catch up

To be fair, some videos do get taken down. TikTok told Full Fact that delayed moderation on at least one account was a mistake. YouTube added labels to make “altered or synthetic content” clearer. Meta says it removes harmful misinformation, though Full Fact couldn’t see obvious enforcement in the cases they flagged. That mixed response is the story. Enforcement happens, but not consistently or fast enough to blunt the early spike of views.

Policy will keep evolving, and it should. Labels help a bit. Better detection helps more. Real penalties for repeat offenders and for affiliate schemes that reward deception could change incentives. Until those levers align, the scariest part is the scale. Anyone can now clone a voice, stitch a face onto a talking head, and generate a convincing script. If you can make a fake doctor in an afternoon, you can make fifty by the weekend. That’s the reality we’re living in.

How I’m keeping my home feed cleaner

At our kitchen island in Itaim Bibi, mornings are loud and full. We drink coffee while our toddler throws strawberries on the floor and my husband reads headlines out loud. I want our home to be a calm place, even online. So I curate hard. I unfollow accounts that mix health advice with product pitches when those pitches lean on fear or urgency. I report impersonations. I save content from sources that show their references, not just their results.

There’s also a mental filter I’m building. If a claim makes me feel rushed, I stop. If it promises a shortcut that sounds like magic, I stop. If it uses a real doctor’s face to push a miracle, I stop and search their name plus the word “statement.” That tiny habit saves time and money.

A note to the wellness community

As someone who eats a lot of plants, reads supplement labels, and hangs out with friends who go fully vegan, I know this space well. Many people here are thoughtful and careful. That’s the energy we need online too. If you create content, be transparent. If you sell products, vet your affiliates and be strict about claims. If you’re a consumer, keep receipts and ask questions. Confidence is great. Integrity is better.

The bottom line

The “doctor” on your screen might be a model with a swapped face and a cloned voice. The quote might be stitched together from an old lecture. The link might be a commission trap dressed up as care. Thanks to Full Fact’s investigation and careful reporting by newsrooms, we have a clearer picture of how this scam works and how it spreads. We also have a responsibility to slow it down in our own feeds by choosing how we click, share, and buy.

Three small actions make a real dent. Search the claim outside the platform. Look for an official statement from the named expert. Ask a clinician before you try a product with medical promises. That’s not being cynical. That’s being a good steward of your health and your attention.

If you saw a convincing fake, you’re not alone. I almost fell for one too. The trick is to learn the pattern and move smarter the next time it appears.


Ainura Kalau

Ainura was born in Central Asia, spent over a decade in Malaysia, and studied at an Australian university before settling in São Paulo, where she’s now raising her family. Her life blends cultures and perspectives, something that naturally shapes her writing. When she’s not working, she’s usually trying new recipes while bingeing true-crime shows, soaking up sunny Brazilian days at the park or beach, or crafting something with her hands.
