Sci-fi fans raise real-world AI concerns at Worldcon 2025
Seattle, a city often at the forefront of technological innovation, is currently hosting Worldcon 2025, the world’s premier science fiction convention. Yet amid the celebration of speculative futures, a very real concern about artificial intelligence is stirring debate, exposing a growing tension between technological advancement and human creativity in the information ecosystem.
The controversy stems from a revelation by Seattle Worldcon 2025 organizers in April: they had used ChatGPT to vet more than 1,300 potential panelists for the event. The admission drew immediate, widespread condemnation from the science fiction and fantasy community, which saw the use of generative AI in that role as a direct affront to human authorship and intellectual property. The backlash was intense enough that the convention’s chair, Kathy Bond, issued an apology, acknowledging the mistake and committing to redoing the entire vetting process without any AI tools. The incident also led to the resignation of several key Worldcon staff members, including Hugo Awards Administrator Nicholas Whyte, who emphasized that no large language models (LLMs) or generative AI had been used in the Hugo Awards process itself. Organizers had said the vetting process, which involved prompting ChatGPT to search applicants’ names for “scandals” such as homophobia, racism, or sexual misconduct, was meant to save “hundreds of hours of volunteer staff time,” and that the results were reviewed by humans. The community’s swift reaction nonetheless underscored a deep-seated mistrust.
The Worldcon incident is not an isolated tremor but a symptom of broader anxieties rippling through the creative industries. For many authors, artists, and musicians, the rise of generative AI is an “existential issue.” A primary concern is the unauthorized use of copyrighted material to train AI models, in effect “plagiarizing” human-created works to generate new content. Major tech firms including Meta, OpenAI, and Anthropic currently face lawsuits over their alleged use of vast quantities of copyrighted text, video, and audio without compensation or consent. The sentiment extends well beyond the United States: in April 2025, more than 400 British musicians, including Elton John and Paul McCartney, signed an open letter demanding copyright law reforms to protect artists from AI exploitation.
The debate also extends to the very definition of authorship and ownership in the age of AI. The U.S. Copyright Office clarified in May 2025 that AI-assisted works can receive copyright protection if they demonstrate “sufficient human creativity,” but purely machine-generated content without significant human input remains outside that scope, leaving creators and businesses uncertain about where the line falls. Meanwhile, the UK government is exploring proposals that would allow AI developers to train on copyrighted content unless the owner explicitly “opts out,” a position that has drawn considerable opposition from creative rights organizations.
Beyond intellectual property, concerns about AI’s impact on the information ecosystem itself run deep. AI’s ability to generate realistic but fabricated content, from news articles to deepfakes, makes truth harder to discern and undermines the integrity of information. As acclaimed Chinese sci-fi author Liu Cixin, known for “The Three-Body Problem,” observed in February, while current AI-generated writing may lack “thought processes” and “new ideas,” its rapid development could soon make human writers “the last generation of sci-fi authors whose writing is undoubtedly done by humans.” The prospect of AI-generated content flooding the market raises questions about the devaluation of human creativity and a future in which proving human authorship becomes increasingly difficult.
In direct response to the Worldcon controversy and these broader concerns, a one-day, AI-free alternative conference, ConCurrent Seattle, is taking place today, August 14, 2025, near the Worldcon venue. The independent event, explicitly committed to “no genAI/LLM usage ever,” is a tangible expression of the creative community’s desire for spaces where human artistry remains paramount. The irony is palpable: the very community that has long envisioned and explored artificial intelligence in its narratives is now grappling with the technology’s complex, real-world implications, demanding ethical frameworks and safeguards to protect the human element in creativity and information.