Democrats' New AI Playbook: Responsible Tech for Election Wins

Wired

The 2024 election cycle marked a turning point: it was the first in which political campaigns deployed artificial intelligence in earnest. While candidates largely navigated the new terrain without major public missteps, they often used the technology with little overarching guidance and few established safeguards. Now, ahead of the midterms, the National Democratic Training Committee (NDTC) is rolling out what it describes as the first official playbook showing how Democratic campaigns can responsibly integrate AI.

Through a newly launched online training program, the NDTC lays out a strategy for Democratic candidates to harness AI: generating social media content, crafting voter outreach messages, and researching their districts and opponents. Since its founding in 2016, the NDTC says it has trained more than 120,000 Democrats aspiring to political office, offering virtual lessons and in-person bootcamps on everything from getting on the ballot and fundraising to data management and field organizing. The new AI course specifically targets smaller campaigns with limited resources, aiming to help a team of five operate with the efficiency of a fifteen-person staff.

“AI, and its responsible adoption, is no longer a luxury; it’s a competitive necessity,” said Donald Riddle, senior instructional designer at the NDTC. He emphasized that learners need to understand these tools, and be comfortable using them, to gain a competitive edge and drive progressive change effectively and responsibly.

The training program is structured into three parts, beginning with foundational explanations of how AI functions. However, the core of the course delves into practical AI applications for campaigns. It encourages candidates to utilize AI for preparing text across various platforms, including social media posts, emails, speeches, phone banking scripts, and internal training materials. Crucially, the program stipulates that all AI-generated content must undergo human review before publication.

Equally important, the training explicitly outlines the forbidden uses of AI. It firmly discourages candidates from employing AI to create deepfakes of opponents, impersonate real individuals, or produce images and videos that could “deceive voters by misrepresenting events, individuals, or reality.” Such practices, the training asserts, “undermine democratic discourse and voter trust.” Furthermore, the NDTC advises against replacing human artists and graphic designers with AI, a stance taken to “maintain creative integrity” and support working creatives within the industry.

The final segment of the course addresses transparency, urging candidates to disclose AI usage when content features AI-generated voices, appears “deeply personal,” or is instrumental in developing complex policy positions. The training underscores that “when AI significantly contributes to policy development, transparency builds trust.” That emphasis on disclosure is particularly important to Hany Farid, a generative AI expert and professor of electrical engineering and computer sciences at UC Berkeley. Farid stresses that transparency is essential not only to identify what is artificial but also to reinforce trust in what is authentic.

For video content, the NDTC suggests campaigns use tools like Descript or Opus Clip to craft scripts and quickly edit material for social media, trimming pauses and awkward moments from clips. The course, which is free, was developed in collaboration with the Higher Ground Institute, the nonprofit arm of the progressive tech incubator Higher Ground Labs. Both organizations intend to keep updating the training as new AI tools and applications emerge.

Kelly Dietrich, founder and CEO of the NDTC, described the initiative’s broader goal as “turning fear into a force multiplier,” enabling thousands of Democratic campaigns to compete effectively at any scale. Dietrich views this as a significant opportunity for the party to regain electoral ground in 2026.

This NDTC course represents the first substantial effort to equip Democrats with the knowledge to bolster their campaigns using AI. In the previous election cycle, Democrats primarily confined AI use to routine administrative tasks, such as drafting fundraising emails, largely avoiding its application in more strategic functions. The NDTC now contends that Democrats risk falling behind, noting that Republicans have already integrated AI more broadly across their campaign operations. Kate Gage, cofounder of the Higher Ground Institute and executive director at the Cooperative Impact Lab, highlighted the need for Democratic campaigns to “really invest in it and try it,” observing that AI is not yet integrated at every level within the party.

During the 2024 election, Republican campaigns, including that of former President Donald Trump, actively embraced AI. Groups supporting Republican Florida Governor Ron DeSantis, for instance, published videos featuring AI-generated aircraft and fabricated audio of Trump in campaign advertisements and social media posts. Weeks before the election, Trump himself shared a deepfaked image of Taylor Swift appearing to endorse him. According to Bloomberg Government, the GOP collectively spent over $1.2 million on Campaign Nucleus, a company founded by former Trump campaign manager Brad Parscale that offers AI tools for targeted advertising and task automation.

Farid acknowledges the differing approaches, posing an “interesting question as to whether both sides of the political aisle will play with the same rules.” Because political parties often operate under distinct norms, he suggests, their use of AI and their adherence to ethical guidelines are likely to diverge, complicating the future of political campaigning.