AI Deepfakes Exploit Trust: Dr. Mosley Used in Health Scams


The digital landscape is increasingly fraught with deception, and a particularly insidious scam has emerged, one that leverages the trusted image of the late Dr. Michael Mosley. Scammers are deploying AI-generated deepfake videos of Mosley, a familiar and respected figure in health broadcasting, to aggressively market unproven supplements such as ashwagandha and beetroot gummies. These fabricated clips, circulating widely on social media platforms such as Instagram and TikTok, show Mosley seemingly endorsing outlandish health claims about menopause, inflammation, and other fashionable wellness trends, none of which he ever genuinely supported.

The sophistication of these AI creations is alarming. They piece together fragments from Mosley's past podcasts and public appearances, meticulously mimicking his distinctive tone, facial expressions, and even subtle hesitations. The result is eerily convincing, often leading viewers to pause and wonder whether they are indeed watching the beloved health expert, only to be jolted by the realization that he passed away last year. Researchers at institutions such as the Alan Turing Institute have warned that rapid advances in AI are making it increasingly difficult to distinguish authentic content from fabricated material on visual cues alone, signaling a profound challenge for digital literacy.

Beyond the initial shock, the implications of these deepfake videos are gravely serious. They peddle unverified and potentially dangerous claims, from beetroot gummies supposedly curing aneurysms to moringa balancing hormones, all divorced from medical reality. Dietitians have voiced concerns that such sensational and misleading content severely undermines the public's understanding of legitimate nutrition and health principles. Supplements, they emphasize, are not shortcuts to wellness, and these exaggerations sow confusion rather than promote informed health decisions. In response, the UK's medicines regulator, the Medicines and Healthcare products Regulatory Agency (MHRA), has initiated investigations into these fraudulent claims. Public health experts continue to urge individuals to rely on credible sources, such as the NHS and their general practitioners, rather than slick, AI-generated promotions.

Social media platforms find themselves in a challenging position. Despite having policies designed to combat deceptive content, major tech companies, including Meta, struggle to keep pace with the sheer volume and viral spread of these deepfakes. Under the UK’s Online Safety Act, platforms are now legally obligated to address illegal content, including fraud and impersonation. While Ofcom monitors enforcement, the reality is often a frustrating game of whack-a-mole, with illicit content frequently reappearing as quickly as it is removed.

This exploitation of Dr. Mosley's image is not an isolated incident but a symptom of a wider trend. A recent CBS News report highlighted dozens of similar deepfake videos impersonating real doctors worldwide, reaching millions of unsuspecting viewers. In one particularly chilling instance, a physician discovered a deepfake promoting a product he had never endorsed; the likeness was so accurate that viewers were completely fooled, flooding the comments section with praise for an endorsement that never happened.

The impact of this phenomenon extends beyond technological imitation; it strikes at the very core of public trust. For decades, society has relied on the calm, knowledgeable voices of experts to guide understanding, particularly in critical areas like health. When that trust is weaponized through AI, it erodes the foundations of science communication and informed decision-making. The real battle ahead is not just refining AI detection tools but rebuilding and safeguarding public trust in an increasingly manipulated digital environment. That will require more robust verification mechanisms from platforms, transparent labeling of AI-generated content, and greater critical scrutiny from users before they share what they see.