
New AI-Generated ‘TikDocs’ Exploit Trust in the Medical Profession to Drive Sales


ESET researchers have uncovered a wave of AI-generated medical scams across TikTok and Instagram, where deepfake avatars pose as healthcare professionals to promote unverified supplements and treatments.

These synthetic “doctors” exploit public trust in the medical field, often directing users to purchase products with exaggerated or entirely fabricated health claims.

With advancements in generative AI making deepfakes increasingly accessible, experts warn that such deceptive practices risk eroding confidence in legitimate health advice and endangering vulnerable populations.


ESET researchers identified over 20 TikTok and Instagram accounts in Latin America using AI-generated avatars disguised as gynecologists, dietitians, and other specialists.

Example of a misleading TikTok video

These avatars, often positioned in the corner of videos, deliver scripted endorsements for products ranging from “natural extracts” to unapproved pharmaceuticals.

For instance, one campaign promoted a supposed Ozempic alternative labeled as “relaxation drops” on Amazon, a product with no proven weight-loss benefits.

The videos leverage polished production quality and authoritative tones to mimic credible medical advice, blurring the line between education and advertisement.

The technology behind these scams relies on legitimate AI tools designed for content creation.

Platforms offering avatar-generation services enable users to input minimal footage and produce lifelike videos, which scammers then repurpose for fraudulent campaigns.

This misuse highlights a critical vulnerability: while such tools empower creators, they lack robust safeguards to prevent malicious applications.

In some cases, deepfakes even hijack the likenesses of real doctors, as seen in UK campaigns impersonating TV figures like Dr. Michael Mosley.

Exploiting Trust in Healthcare Authority

Deepfake scams prey on the inherent trust placed in medical professionals. By framing sales pitches as expert recommendations, these videos sidestep skepticism typically directed at overt advertisements.

For example, a synthetic gynecologist with “13 years of experience” urged followers to purchase unregulated supplements, despite the account being linked to a generic avatar library.

Video (left) and avatars (right)

Similarly, fake endorsements from celebrities like Tom Hanks have promoted “miracle cures,” capitalizing on their public personas to lend credibility to dubious products.

The consequences extend beyond financial loss. Victims may delay evidence-based treatments in favor of ineffective, or even harmful, alternatives.

Researchers note that deepfakes promoting fake cancer therapies or unapproved drugs could exacerbate health disparities, particularly among populations with limited healthcare access.

Moreover, the proliferation of such content undermines trust in telehealth and online medical resources, which surged during the COVID-19 pandemic.

Detection and Mitigation Strategies

While AI-generated content grows more sophisticated, experts recommend vigilance through both technical scrutiny and policy reforms. Key red flags include:

  • Mismatched lip movements that don’t sync with the audio.
  • Robotic or overly polished vocal patterns.
  • Visual glitches, such as blurred edges or sudden lighting shifts.
  • Hyperbolic claims like “miracle cures” or “guaranteed results”.
  • Accounts with few followers, minimal history, or inconsistent posting.

Social media users should scrutinize accounts promoting “doctor-approved” products, particularly new profiles with few followers or suspicious activity.
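
To make these heuristics concrete, the short Python sketch below scores a post against two of the red flags above: hyperbolic claim language and thin account history. The phrase list and numeric thresholds are illustrative assumptions for this article, not criteria published by ESET or any platform.

```python
import re

# Illustrative phrase list and thresholds; real moderation pipelines use
# far richer signals (metadata, media forensics, reporting history).
HYPERBOLIC_PATTERNS = [
    r"miracle cure",
    r"guaranteed results?",
    r"doctor[- ]approved",
    r"100% (?:safe|natural|effective)",
]

def red_flag_score(caption: str, follower_count: int, post_count: int) -> int:
    """Count simple red flags: hyperbolic claims plus thin account history."""
    score = sum(
        bool(re.search(pattern, caption, re.IGNORECASE))
        for pattern in HYPERBOLIC_PATTERNS
    )
    if follower_count < 500:  # assumed cutoff for "few followers"
        score += 1
    if post_count < 10:       # assumed cutoff for "minimal history"
        score += 1
    return score

# Example: a caption stacking several claims on a nearly empty account.
print(red_flag_score("Doctor-approved miracle cure, guaranteed results!", 120, 3))  # -> 5
```

A keyword score like this is trivially evaded, of course; its value is in illustrating how several weak signals combine into a stronger one, which is also the logic behind the human checklist above.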

On a systemic level, advocates urge platforms to enforce stricter content moderation and labeling requirements for AI-generated media.

The EU’s Digital Services Act and proposed U.S. legislation like the Deepfakes Accountability Act aim to mandate transparency, though enforcement remains fragmented.

Technological solutions, such as AI-driven detection tools that analyze facial micro-expressions or voice cadence, are also being developed to flag synthetic content in real time.
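
As a simplified illustration of one visual check, the sketch below flags heavily blurred frames using the classic variance-of-Laplacian sharpness metric in OpenCV. This is a toy heuristic with an assumed threshold and placeholder file name, not how any production deepfake detector works; real systems combine many stronger signals such as the micro-expression and voice-cadence analysis mentioned above.

```python
import cv2  # pip install opencv-python

def frame_sharpness(frame_bgr) -> float:
    """Variance of the Laplacian: low values suggest blurred regions."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def flag_blurry_frames(video_path: str, threshold: float = 100.0):
    """Yield (frame_index, score) for frames below an assumed sharpness threshold."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream (or unreadable file)
            break
        score = frame_sharpness(frame)
        if score < threshold:
            yield idx, score
        idx += 1
    cap.release()

if __name__ == "__main__":
    # "suspect_clip.mp4" is a placeholder path for illustration.
    for idx, score in flag_blurry_frames("suspect_clip.mp4"):
        print(f"frame {idx}: sharpness {score:.1f} (below threshold)")
```

Sudden dips in sharpness around a speaker's mouth or face edges, one of the red flags listed earlier, are exactly the kind of artifact such a per-frame metric can surface for human review.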

Public education remains critical. Initiatives like New Mexico’s deepfake literacy campaigns and Australia’s telehealth guidelines emphasize verifying health claims through accredited sources like the WHO or peer-reviewed journals.

As ESET cybersecurity advisor Jake Moore notes, “Digital literacy is no longer optional; it’s a frontline defense against AI-driven exploitation.”

The escalation of deepfake medical scams underscores an urgent need for collaborative action.

While AI holds transformative potential for healthcare, its weaponization demands equally innovative countermeasures, from regulatory frameworks to public awareness, to safeguard both individual and collective well-being.


Kaaviya
Kaaviya is a Security Editor and reporter with Cyber Security News, covering cybersecurity incidents across cyberspace.
