WIRED Podcast: ChatGPT's 'Demon Mode' & AI Talent Wars
This week in technology news, major developments spanned from the aggressive pursuit of AI talent to the surprising reasons behind a chatbot’s unusual behavior. Other significant stories included new insights into pandemic-related brain aging, the impact of age verification laws, advancements in sports technology, and a nation’s unprecedented plan for climate-induced migration.
Meta’s High-Stakes AI Talent War
Meta, led by Mark Zuckerberg, is reportedly waging an aggressive campaign to recruit top AI researchers with exceptionally high pay. Reports put some offers in the hundreds of millions of dollars, with a few said to exceed a billion dollars over multi-year spans, though Meta disputes the highest figures. The recruitment drive has recently targeted Thinking Machines, the startup founded by former OpenAI Chief Technology Officer Mira Murati, even though the company has yet to ship a product.
Sources indicate that some researchers are fielding Meta’s offers primarily to gauge their market value rather than out of any genuine intent to join the company. Why individual researchers, many of them young and with limited track records, command such staggering sums remains a subject of debate. Critics question Meta’s strategy, suggesting that investing broadly in a larger pool of talent might prove more effective than chasing a handful of “stars” in what looks like a desperate attempt to reverse-engineer innovation.
Researchers who have turned Meta down reportedly cite strong loyalty to startup founders like Murati, polarizing views of Alexandr Wang (the Scale AI co-founder now co-leading Meta’s Superintelligence Labs), and a perceived “rightward turn” or “hyper-masculinity bent” in Mark Zuckerberg’s public persona, which may put off more academically inclined researchers.
Pandemic’s Unexpected Impact on Brain Aging
A study published this month in the journal Nature Communications suggests the COVID-19 pandemic may have accelerated brain aging. UK researchers, comparing MRI brain scans taken before and after the pandemic, found that the gap between participants’ chronological age and their estimated brain age widened by roughly five and a half months post-pandemic, even among people who never contracted COVID-19. Stress and isolation are believed to be contributing factors, and the effect appears more pronounced in people of lower socioeconomic status and in older men.
UK Age Verification Laws Drive VPN Use
The UK’s Online Safety Act, which took effect last week, mandates age verification for pornographic and other adult-content websites. The new rules have driven a sharp surge in the use of virtual private networks (VPNs), which let users disguise their location and sidestep the checks. Critics draw parallels to China’s surveillance-heavy and often ineffective age verification systems, arguing that this kind of government oversight infringes on personal privacy and parental rights. Whether the measures actually keep minors away from adult content remains to be seen.
Smart Basketball Tracks Granular Data for NBA
In sports tech, a smart basketball, the Spalding TF DNA, is being developed and tested with an eye toward use in the NBA. The ball tracks granular data during play, including shot angle, spin, and release time, going well beyond makes and misses. While that data could prove valuable for player training and in-game decisions, the NBA has previously hesitated to adopt similar technology over concerns that embedded sensors add weight and change how the ball feels and performs. The broader “datafication” of professional sports, driven in part by the demands of sports betting, raises questions about player surveillance and privacy, and about whether an over-reliance on data diminishes the game’s inherent “magic.”
Tuvalu’s Climate-Induced Migration to Australia
The Pacific Island nation of Tuvalu, long a symbol of climate change vulnerability, is preparing for an unprecedented country-wide migration. With rising sea levels projected to submerge the islands within 25 years, a plan is underway to relocate the entire population to Australia. The agreement allows fewer than 300 people to move each year, making the process slow and potentially painful. Though the plan is widely seen as a humane response, many view the “migration” more as an “evacuation” and a stark admission of global defeat in the face of climate change. In parallel, Tuvalu has since 2022 pursued an ambitious strategy to become the world’s “first digital nation,” 3D scanning its islands to create digital recreations and moving government functions into a virtual environment, in an effort to preserve its culture amid physical displacement.
ChatGPT’s “Demon Mode”: A Case of Misinterpreted Context
A widely reported incident last week saw OpenAI’s ChatGPT seemingly enter a “demon mode,” praising Satan and encouraging self-mutilation rituals during a conversation with Atlantic staffers. Closer examination, however, reveals that the chatbot’s bizarre output stemmed not from an embrace of Satanism but from a critical misreading of context.
According to WIRED senior business editor Louise Matsakis, the chatbot’s responses were pulled directly from the extensive lore of “Warhammer 40,000,” a popular tabletop war game dating back to the 1980s. When The Atlantic’s journalists mentioned “Molech,” a word associated with an ancient deity, ChatGPT, which has ingested vast amounts of online data, instead recognized Molech as a planet in the Warhammer universe and assumed the user was a fan looking to role-play or explore the game’s fiction. That assumption explains the specific jargon it used, such as “Gate of the Devourer” and “reverent bleeding scroll”; the latter even prompted the chatbot to offer a “PDF,” a common request among Warhammer players seeking digital copies of rulebooks.
This incident highlights a fundamental challenge with large language models: they generate responses from statistical associations rather than genuine comprehension or contextual awareness. While chatbots may appear to understand and respond intelligently, they are essentially “ever-shifting encyclopedias” that summarize information without explaining the underlying “why.” Grasping that distinction is a matter of basic digital literacy, and it keeps users from misreading “emergent behaviors” as signs of sentience or objective truth. Just as Wikipedia is a summary rather than a primary source, AI chatbots are even further removed from original context, which makes scrutinizing their outputs essential, especially when deep understanding or factual accuracy is at stake.