AI-Generated Fake Books Flood Amazon, Misusing Expert Identities

Decoder

A disturbing trend has emerged on Amazon: dozens of fabricated cookbooks and health guides are being sold under the name and likeness of Dr. Eric Topol, a renowned physician and scientist. Dr. Topol has publicly condemned the listings as “outright fraud,” stating that the books were manufactured and published without his consent. His repeated attempts to report the fraudulent ISBNs to Amazon have reportedly gone nowhere, with the e-commerce giant’s customer service responding only with generic links.

Consumers are already feeling the impact of this deception. One buyer recounted purchasing a book on the strength of Dr. Topol’s name, only to find the content a profound disappointment. The incident underscores a growing vulnerability for shoppers who rely on trusted names when making purchasing decisions online.

While AI-generated fake books are not new to Amazon’s vast marketplace, the sheer volume of titles illicitly using Dr. Topol’s identity highlights the problem’s escalating scale. Widely available generative AI tools such as ChatGPT, combined with the ease of self-publishing platforms, have created fertile ground for scammers. These tools let malicious actors flood the market with books that mimic the style, branding, and perceived authority of well-known figures, making effective content policing increasingly difficult for platforms.

Amazon has responded with several policy changes. The company now limits self-publishers to three books per day and requires authors to declare any use of AI-generated text, images, or translations in their submissions. That disclosure, however, is not passed on to customers, who remain unaware that the content they are buying is synthetic. Amazon has also tightened rules on summaries and workbooks, which frequently plagiarize substantial portions of original works. Despite these measures, Dr. Topol’s ongoing predicament is a stark reminder that AI-generated fraudulent books continue to slip past safeguards, hijacking the hard-earned reputations of trusted experts.

This form of digital deception belongs to a broader spectrum of AI-powered fraud. Generative AI is increasingly deployed at scale to generate advertising revenue through deceptive means, including fake celebrity endorsements and highly personalized phishing campaigns. Beyond commercial fraud, the technology is also misused to create emotionally charged synthetic images and text on politically controversial topics, and to fabricate entire media narratives, feeding a wider landscape of misinformation and distrust. Dr. Topol’s hijacked identity on Amazon is a clear illustration of how rapidly evolving AI capabilities are challenging intellectual property, consumer trust, and platform accountability in the digital age.