AI Propaganda's New Era: GoLaxy's Sophisticated Influence Operations
The landscape of international influence operations has undergone a profound transformation. The era of crude, low-tech interference, characterized by generic bot messaging and low-quality content, is giving way to a new, more insidious form of digital manipulation. With the rapid advancements in generative artificial intelligence (AI), the primary threat is no longer merely a deluge of overt falsehoods, but rather the subtle, corrosive shaping of online communication. This new approach involves AI-generated narratives designed to seamlessly integrate into everyday digital discussions, shifting public opinion without drawing explicit attention.
Evidence of this shift comes from a cache of documents recently uncovered by the Vanderbilt University Institute of National Security, revealing the activities of a Chinese company named GoLaxy. These materials suggest GoLaxy is emerging as a frontrunner in technologically advanced, state-aligned influence campaigns. The company reportedly deploys human-like bot networks and sophisticated psychological profiling to target individuals. Its operations and claims indicate potential ties to the Chinese government.
GoLaxy has reportedly already deployed its technology in Hong Kong and Taiwan, and the documents suggest potential expansion into the United States. AI-driven propaganda, in other words, is no longer a theoretical concern but an operational, sophisticated tool already being used to manipulate public opinion at scale.
A representative for GoLaxy, however, stated that the company focuses on business intelligence services and denied developing bot networks or psychological profiling tools targeting individuals. The company also denied being under the authority of any government agency or organization.
What distinguishes GoLaxy, according to the documents, is its integration of generative AI with vast amounts of personal data. Its systems reportedly mine social media platforms continuously to construct dynamic psychological profiles. Content is then customized to an individual’s values, beliefs, emotional tendencies, and vulnerabilities. AI personas can subsequently engage users in what appear to be authentic, real-time conversations designed to avoid detection. The result is a highly efficient propaganda engine, nearly indistinguishable from legitimate online interaction, delivered instantaneously and at unprecedented scale.
While specific examples of these conversations were not provided in the documents, they detail how the technology generates personalized content. By extracting user data and analyzing broader patterns, AI can construct synthetic messages appealing to a wide spectrum of the public. It can adapt to a user’s tone, values, habits, and interests, mimicking real users by liking posts, leaving comments, and pushing targeted content.
According to the uncovered documents, GoLaxy used its technology in 2020 to counter opposition to a national security law in Hong Kong that suppressed political dissent. The company reportedly identified thousands of participants and thought leaders from among 180,000 Hong Kong Twitter accounts, then used its network of fake profiles to “correct” what it perceived as lies and misconceptions.
The company’s activities reportedly extended to the lead-up to the 2024 Taiwanese election. Amid false claims of corruption and deepfakes spread by China-aligned groups on social media, GoLaxy allegedly proposed strategies to undermine Taiwan’s Democratic Progressive Party, which opposes Beijing’s claims over the island. The company reportedly gathered, and likely supplied, information on trends in Taiwanese political debate, recommending the deployment of bot networks to exploit existing political divisions. GoLaxy had already amassed extensive data on Taiwan to support such operations, including detailed organizational maps of government institutions and profiles of more than 5,000 Taiwanese accounts.
In a written statement, GoLaxy denied providing technical support for activities in Hong Kong and Taiwan.
While GoLaxy’s active deployments appear to have been confined to the Indo-Pacific region thus far, evidence within the documents suggests the company is preparing for expanded operations, potentially including the United States. GoLaxy has reportedly compiled data profiles on at least 117 members of the U.S. Congress and over 2,000 American political figures and thought leaders. GoLaxy stated it has not collected data targeting U.S. officials.
GoLaxy reportedly operates in close alignment with China’s national security priorities, though no formal government control has been publicly confirmed. The company was founded in 2010 by a research institute of the state-controlled Chinese Academy of Sciences, and its chair has served as a deputy director of that institute. The documents suggest GoLaxy has since collaborated with top-level intelligence, party, and military bodies, indicating deep integration with China’s political system. Reinforcing these connections, GoLaxy received funding in 2021 from Sugon, a Beijing-based supercomputing company the Pentagon has identified as a Chinese military affiliate. GoLaxy’s public-facing AI platform reportedly integrates with Sugon’s supercomputers and with DeepSeek-R1, one of China’s leading AI models.
These connections underscore that influence operations are no longer a peripheral concern but are evolving into core instruments of statecraft. Modern battlefields now encompass not only geographic territories but also the online platforms we use daily.
The strategy employed by GoLaxy and similar entities weaponizes the very openness that underpins democratic societies. Debate, transparency, and pluralism—hallmarks of democratic strength—are simultaneously points of vulnerability. Technological tools like GoLaxy’s exploit these qualities, blurring the line between surveillance and persuasion at an accelerating pace. The danger lies in the stealth and scale of these methods, and their rapid improvement. AI-generated content can be deployed quietly across entire populations with minimal resistance, operating continually to shape opinion and subtly corrode democratic institutions.
To counter the escalating threat of AI-driven foreign influence operations, a coordinated and urgent response is essential. Academic researchers must work to map how artificial intelligence, open-source intelligence, and online influence campaigns converge to serve hostile state objectives. The U.S. government must take the lead in disrupting the infrastructure behind these operations, with the Department of Defense targeting foreign influence networks and the Federal Bureau of Investigation collaborating closely with digital platforms to identify and counter false personas. Simultaneously, the private sector needs to accelerate the development of AI detection capabilities to bolster our ability to identify synthetic content. Without the capacity to detect it, effective countermeasures become impossible.
The world is entering a new era of “gray-zone conflict,” defined by information warfare executed with unprecedented scale, speed, and sophistication. A failure to rapidly develop defenses against this form of AI-driven influence will leave societies critically exposed.