OpenAI pulls ChatGPT feature exposing private chats on Google
OpenAI has swiftly withdrawn a controversial opt-in feature from its ChatGPT platform after reports revealed that private user conversations were inadvertently appearing in Google search results. The move follows an investigation by Fast Company, which uncovered instances of highly sensitive discussions, including those related to drug use and sexual health, becoming publicly accessible online.
The privacy lapse stemmed from the “Share” feature within the ChatGPT application. When users opted to share a conversation, they were presented with a checkbox labeled “Make this chat discoverable.” Below it, in smaller, lighter text, was the crucial caveat: checking the box would allow the chat to appear in search engine results. Critics argued that the design and wording of this prompt were ambiguous enough that users could make their private dialogues public without realizing it.
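The underlying mechanics are standard web behavior: a shared conversation lives at a public URL, and search engines will index any reachable page unless it opts out, typically via a robots meta tag or an X-Robots-Tag response header. As a purely illustrative sketch (the placeholder URL is hypothetical, and the assumption that ChatGPT share pages rely on these standard directives is mine, not something OpenAI has confirmed), here is how one might check whether a page asks crawlers not to index it:

```python
import urllib.request
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Detects a <meta name="robots" content="...noindex..."> tag in a page."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag.lower() != "meta":
            return
        attr_map = {k.lower(): (v or "") for k, v in attrs}
        if (attr_map.get("name", "").lower() == "robots"
                and "noindex" in attr_map.get("content", "").lower()):
            self.noindex = True

def page_blocks_indexing(url: str) -> bool:
    """Return True if the page opts out of search indexing, via either
    an X-Robots-Tag response header or a robots meta tag in the HTML."""
    with urllib.request.urlopen(url) as resp:
        # Header-based opt-out takes effect before the body is even parsed.
        if "noindex" in (resp.headers.get("X-Robots-Tag") or "").lower():
            return True
        parser = RobotsMetaParser()
        parser.feed(resp.read().decode("utf-8", errors="replace"))
        return parser.noindex

if __name__ == "__main__":
    # Placeholder URL for illustration; not a real ChatGPT share link.
    print(page_blocks_indexing("https://example.com/share/abc123"))
```

A page lacking both signals is eligible for indexing, which is why a single checked checkbox was enough to put a conversation in front of Google's crawler.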
Within hours of the issue gaining traction and sparking widespread backlash on social media, OpenAI acted. The company disabled the feature and began working to scrub the already exposed conversations from search engine indexes. Dane Stuckey, OpenAI’s Chief Information Security Officer, confirmed the decision in a public statement acknowledging the flaw. “Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option,” Stuckey stated, adding that the company was actively working to remove the indexed content from relevant search engines. The decision marks a notable reversal: OpenAI had earlier maintained that the feature’s labeling was sufficiently clear.
The rapid response from OpenAI garnered praise from some cybersecurity experts. Rachel Tobac, a cybersecurity analyst and CEO of SocialProof Security, commended the company for its prompt action once the extent of the unintentional data sharing became apparent. “We know that companies will make mistakes sometimes, they may implement a feature on a website that users don’t understand and impact their privacy or security,” Tobac remarked. “It’s great to see swift and decisive action from the ChatGPT team here to shut that feature down and keep user’s privacy a top priority.”
Not all reactions were positive, however. While OpenAI’s Stuckey characterized the feature as a “short-lived experiment,” that framing troubled some AI ethicists. Carissa Véliz, an AI ethicist at the University of Oxford, was more critical of the approach. “Tech companies use the general population as guinea pigs,” Véliz asserted, pointing to a broader pattern in which new features are rolled out to a wide user base without their privacy risks being fully anticipated or mitigated, and withdrawn only after public outcry. The incident underscores the balance tech companies must strike between rapid innovation and robust user privacy and data security, particularly as AI tools become more deeply integrated into daily life.