AI Bots Simulate Social Media, Confirm Inevitable Polarization

Futurism

Social media platforms have long been criticized as fertile ground for disinformation and extreme polarization, often evolving into echo chambers that prioritize engagement over healthy discourse. Despite promises of fostering a “digital town square” where diverse viewpoints can coexist, these platforms frequently seem to amplify outrage, trapping users in cycles of divisive content. A recent and sobering experiment conducted by researchers at the University of Amsterdam suggests this trajectory may be difficult, if not impossible, to alter.

Petter Törnberg, an assistant professor specializing in AI and social media, and research assistant Maik Larooij embarked on a unique simulation: they created an entire social network populated exclusively by AI chatbots, powered by OpenAI’s advanced GPT-4o large language model. Their objective, detailed in a study that is yet to undergo peer review, was to investigate whether specific interventions could prevent such a platform from devolving into a polarized environment.

Their methodology involved testing six distinct intervention strategies, including the implementation of chronological news feeds, the deliberate promotion of diverse viewpoints, the concealment of social metrics like follower counts, and the removal of account biographies. The hope was that one or more of these adjustments would mitigate the formation of echo chambers and curb the spread of extreme content.
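The study does not publish its code, but the setup can be pictured as an agent-based loop: each bot reads a short ranked feed, generates a post through an LLM call, and occasionally amplifies what it has seen, while the feed-ranking rule is the lever the interventions adjust. The sketch below is a hypothetical, simplified illustration in Python; the agent attributes, the ranking modes, and the llm_generate_post stub are assumptions for illustration, not the authors' implementation.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One simulated user; a real run would back this with a GPT-4o persona."""
    name: str
    bio: str                      # removed under the "no biographies" intervention
    followers: set = field(default_factory=set)

@dataclass
class Post:
    author: str
    text: str
    reposts: int = 0

def llm_generate_post(agent: Agent, feed: list[Post]) -> str:
    """Stub standing in for a GPT-4o call that reacts to the visible feed."""
    return f"{agent.name} reacts to: {feed[0].text if feed else 'nothing yet'}"

def rank_feed(posts: list[Post], mode: str) -> list[Post]:
    """Feed-ranking rule: the knob the interventions turn."""
    if mode == "chronological":        # newest first, ignore engagement
        return list(reversed(posts))
    if mode == "bridging":             # crude stand-in for promoting diverse viewpoints
        return sorted(posts, key=lambda p: p.reposts)
    return sorted(posts, key=lambda p: p.reposts, reverse=True)  # engagement-ranked default

def simulate(agents: list[Agent], steps: int, mode: str) -> list[Post]:
    posts: list[Post] = []
    for _ in range(steps):
        for agent in agents:
            feed = rank_feed(posts, mode)[:10]           # each bot sees a short ranked feed
            posts.append(Post(agent.name, llm_generate_post(agent, feed)))
            if feed and random.random() < 0.3:           # occasionally amplify what it saw
                feed[0].reposts += 1
    return posts

if __name__ == "__main__":
    bots = [Agent(f"bot{i}", bio=f"persona {i}") for i in range(20)]
    timeline = simulate(bots, steps=5, mode="chronological")
    print(len(timeline), "posts generated")
```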

To their considerable disappointment, none of the interventions proved satisfactorily effective, and only a few produced modest improvements. More concerning still, some strategies reportedly exacerbated the very issues they were meant to mitigate. Ordering the news feed chronologically, for instance, did reduce the inequality of attention (more posts received at least some visibility), but it inadvertently pushed more extreme content to the top of users' feeds.

This outcome presents a stark contrast to the idealistic vision of harmonious online communities often espoused by platform creators. It suggests that, with or without external interventions, social media platforms may be inherently predisposed to devolve into highly polarized environments, fostering extremist thinking.

Törnberg observed that the problem extends beyond individual pieces of content. Toxic content, he explained, actively shapes the network structures that form on these platforms, creating a feedback loop in which the content users see is continually reinforced by the network itself. This dynamic leads to an "extreme inequality of attention," where a tiny minority of posts garner the vast majority of visibility, further solidifying existing biases.
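The paper does not specify how it quantifies this inequality, but concentration of attention is commonly summarized with a Gini coefficient over per-post visibility. The snippet below is a small illustrative sketch under that assumption, using made-up impression counts rather than the study's data.

```python
def gini(values: list[float]) -> float:
    """Gini coefficient: 0 means attention is spread evenly, 1 means one post gets it all."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula based on cumulative ranked shares.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

# Hypothetical example: one viral post versus 99 barely-seen posts.
impressions = [10_000] + [10] * 99
print(round(gini(impressions), 2))   # ~0.9, i.e. attention is highly concentrated
```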

The advent of generative AI promises to intensify these effects. Törnberg warns that a growing number of actors are already leveraging AI to produce content designed to maximize attention, often in the form of misinformation or highly polarized narratives, driven by the monetization models of platforms like X. As AI models become increasingly sophisticated, such content is poised to overwhelm the digital landscape. Törnberg candidly expressed his doubt that conventional social media models, as they currently exist, can withstand this impending flood.