OpenAI pulls ChatGPT public chat feature after privacy outcry
OpenAI has discontinued a controversial opt-in sharing feature in its ChatGPT application after user conversations shared through it began appearing in Google search results. The move follows a Fast Company report highlighting how sensitive dialogues, some touching on deeply personal topics, had become publicly accessible.
Earlier this week, the report revealed that ChatGPT conversations covering subjects such as drug use and sexual health were unexpectedly discoverable via Google’s search engine. The problem traced back to the application’s “Share” feature, which presented an option that users could enable without realizing it would make their chats publicly searchable.
When users selected the “Share” option, they were given the choice to tick a box labeled “Make this chat discoverable.” While smaller, lighter text beneath this option explained that the conversation could then appear in search engine results, many users reportedly overlooked or misunderstood this crucial caveat, leading to unintended public disclosures.
Within hours of the issue sparking widespread concern on social media, OpenAI disabled the feature and began working to remove the exposed conversations from search engine indexes.
Dane Stuckey, OpenAI’s chief information security officer, addressed the situation in a public statement. “Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option,” Stuckey stated. He added, “We’re also working to remove indexed content from the relevant search engines.” This statement marked a significant shift from the company’s earlier stance, which had maintained that the feature’s labeling was sufficiently clear.
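OpenAI has not detailed how the de-indexing works under the hood, but the standard web mechanism for keeping a page out of search results is a noindex directive, delivered either as a meta tag or as an X-Robots-Tag HTTP response header. The sketch below, which uses a hypothetical route and handler rather than anything from OpenAI’s codebase, shows how a server might apply that header to shared-chat pages:

```python
# A minimal sketch of the standard noindex mechanism, NOT OpenAI's actual
# implementation. The /share/<chat_id> route and handler are hypothetical.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/share/<chat_id>")
def shared_chat(chat_id: str):
    # Placeholder body; a real handler would render the shared conversation.
    resp = make_response(f"<html><body>Shared chat {chat_id}</body></html>")
    # "noindex" tells compliant crawlers not to index the page;
    # "noarchive" asks them not to serve cached copies.
    resp.headers["X-Robots-Tag"] = "noindex, noarchive"
    return resp
```

Crawlers only honor such a directive when they next re-fetch the page, which is why removal from existing indexes is not instantaneous; site operators can also request expedited removal through tools such as Google Search Console.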
Cybersecurity analyst Rachel Tobac, CEO of SocialProof Security, commended OpenAI’s prompt response, acknowledging that companies can make mistakes when implementing features that affect user privacy or security. “It’s great to see swift and decisive action from the ChatGPT team here to shut that feature down and keep user’s privacy a top priority,” she remarked.
However, the incident also drew criticism regarding the nature of such features. OpenAI’s Stuckey characterized the now-removed option as a “short-lived experiment.” But Carissa Véliz, an AI ethicist at the University of Oxford, expressed concern over the implications of such trials. “Tech companies use the general population as guinea pigs,” Véliz commented. “They do something, they try it out on the population, and see if somebody complains.”
The episode underscores the ongoing challenges and responsibilities faced by technology companies in balancing innovation with user privacy and data security, particularly in rapidly evolving fields like artificial intelligence.