
Manipulating the Republic: The Impact of AI-Driven Disinformation on Black Voter Engagement and What We Can Do About It


Danielle A. Davis, Esq.

Director of the Technology Policy Program, Joint Center for Political and Economic Studies

 

Misinformation and disinformation have become ingrained in the American political landscape, resulting in cynicism and anxiety among voters, and ultimately leading them to question the value of participating in elections. Since the 2016 election, researchers have uncovered evidence of these online misinformation campaigns, some of which were openly visible while others operated in more clandestine ways through social media pages created by foreign nationals, taking on the identities of Black Americans. According to a 2016 Senate Report, “no single group of Americans was targeted by [Russia’s Internet Research Agency] information operatives more than [Black] Americans” (S. Rep. No. 116-XX, 2016).  

Although Black Americans represent a relatively small percentage of the U.S. population, audience segments in the categories of “African American Politics and Culture” and “Black Identity and Nationalism” accounted for over 38 percent of the ads the Internet Research Agency purchased on Facebook (Overton, 2020). Black audiences were targeted with ads that ignored the election, discouraged Black Americans from voting, or advocated for third-party candidates with minimal chances of winning (DiResta et al., 2019). These tactics collectively contributed to a significant decrease in Black voter turnout in the 2016 election, which dropped from 66.6 percent in 2012 to 59.6 percent (Krogstad & Lopez, 2017).

 

The Implications of Artificial Intelligence (AI) in the 2024 Elections

The implications for Black Americans in the 2024 campaigns and elections are both significant and concerning, as generative AI will likely be used widely in political campaigns. The technology holds real promise: it enables the rapid production of targeted campaign communications, allowing a candidate to respond in record time at little to no cost. However, it also gives nefarious actors the opportunity to combine the deceptive practices of the 2016 election with generative AI, further exploiting and deceiving voters, enabling the impersonation of candidates, and undermining the integrity of elections at unprecedented scale and velocity.


Recent developments in the use of AI in political campaigns have underscored these concerns. One notable incident involved AI-generated robocalls impersonating President Joe Biden ahead of the January 2024 New Hampshire primary. The calls played an AI-generated voice similar to President Biden’s that used his phrase “what a bunch of malarkey” and falsely suggested that voting in the primary would prevent voters from casting ballots in November (Ramer & Swenson, 2024).


Similarly, AI-generated images of former President Donald Trump interacting with Black voters have surfaced (Daniels, 2024), adding another dimension to the potential misuse of AI in political campaigns. These images are designed to create specific narratives and influence public perception within the Black community, complicating the task of distinguishing authentic campaign materials from AI-generated fabrications.

Another notable instance of AI misuse involved a pro-Ron DeSantis ad that used an AI-generated voice to mimic former President Trump, based on a tweet in which he criticized Iowa Governor Kim Reynolds. Although Trump never actually spoke the words, the ad falsely presented them as if he had, showing how AI can fabricate content to manipulate political narratives even within the Republican Party (Jacobson & Loreben, 2023).


In 2023, the Republican National Committee (RNC) released an AI-generated campaign ad in response to President Joe Biden’s reelection announcement. The ad, created entirely with AI, depicted a grim hypothetical second term for the Biden-Harris Administration, invoking worrisome apocalyptic images. The ad did include a faint disclaimer stating it was “built entirely with AI imagery,” but without federal action, voters cannot assume that all political ads using generative AI will disclose it, making it difficult to differentiate between real and AI-generated ads.

 

Federal Government Responses to AI Misinformation

The federal government has acknowledged the need to address the challenges posed by AI in political campaigns. The Federal Communications Commission (FCC) approved a ruling classifying AI-generated voice calls as “artificial” under the Telephone Consumer Protection Act, making the use of voice-cloning technology in robocall scams illegal (FCC, 2024). FCC Chairwoman Jessica Rosenworcel proposed a rule in May 2024 that would require political advertisers to disclose AI-generated content in TV and radio ads (FCC, 2024). The Federal Election Commission is also moving toward requiring clear disclosures of AI-generated content in political ads to ensure transparency and prevent voter deception.


Further, in May 2023, Rep. Yvette Clarke (D-NY) introduced the REAL Political Advertisements Act, which would mandate that political ads disclose the use of generative AI. Other bills aim to ensure AI transparency and accountability in elections, an effort supported by the Bipartisan Senate AI Working Group’s newly released AI roadmap, which emphasizes ethical AI use in elections. On June 7, 2024, Rep. Joseph D. Morelle (D-NY), with co-sponsor Rep. Rick Larsen (D-WA), introduced H.R. 8668, which addresses transparency in the use of generative AI in political advertisements. Despite these efforts, more substantive work is needed to fully address the complexities and potential risks associated with AI in political campaigns.

 

Strengthening Black Voter Protection Against AI-Driven Disinformation

Combating this new wave of AI-driven voter disinformation will require concerted efforts from both the public and private sectors, along with vigilant participation from the Black community.

To protect Black voters from deceptive tactics, a combination of legislative action, public awareness, community engagement, and technological solutions is essential. Proactive measures such as voter education programs, collaborations with tech companies, legal support, and ongoing research are crucial for safeguarding electoral integrity and ensuring a transparent and fair electoral system in 2024 and beyond. Promoting responsible sharing practices within the Black community can further help mitigate the impact of misinformation. For instance, it is important to verify information’s authenticity before sharing it: check whether it comes from a reputable news source, use fact-checking websites, and be cautious of sensational headlines.


By implementing these strategies, we can help protect voters from manipulation, effectively addressing the impact of AI-driven disinformation on Black voter engagement and countering the manipulation of the republic.

 

References

Daniels, C. M. (2024, March 4). Fake AI Images of Trump with Black Voters Circulate on Social Media. The Hill. https://thehill.com/policy/technology/4507279-fake-ai-images-of-trump-with-black-voters-circulate-on-social-media/  


DiResta, R., Shaffer, K., Ruppel, B., Sullivan, D., Matney, R., Fox, R., ... & Johnson, B. (2019). The Tactics & Tropes of the Internet Research Agency.


Federal Communications Commission. (2024, February 8). FCC Makes AI-Generated Voices in Robocalls Illegal. Retrieved from https://docs.fcc.gov/public/attachments/DOC-400393A1.pdf


Federal Communications Commission. (2024, May 22). Chairwoman Rosenworcel Unveils First Step in New AI Transparency Effort to Disclose AI-Generated Content in Political Ads on TV and Radio. Retrieved from https://docs.fcc.gov/public/attachments/DOC-402740A1.pdf.  


Jacobson, L., & Loreben, T. (2023, July 18). A Pro-Ron DeSantis Ad Used AI to Create Trump’s Voice. It Won’t Be the Last, Experts Say. Politifact. https://www.politifact.com/article/2023/jul/18/a-pro-ron-desantis-ad-used-ai-to-create-donald-tru/


Krogstad, J. M., & Lopez, M. H. (2017, May 12). Black Voter Turnout Fell in 2016, Even as a Record Number of Americans Cast Ballots. Pew Research Center. https://www.pewresearch.org/short-reads/2017/05/12/black-voter-turnout-fell-in-2016-even-as-a-record-number-of-americans-cast-ballots/  


Overton, S. (2020). State Power to Regulate Social Media Companies to Prevent Voter Suppression. GWU Legal Studies Research Paper, (2020-23), 2020-23.


Ramer, H., & Swenson, A. (2024, May 23). Political Consultant Behind AI-generated Biden Robocalls Faces $6 Million Fine and Criminal Charges. PBS. https://www.pbs.org/newshour/politics/political-consultant-behind-ai-generated-biden-robocalls-faces-6-million-fine-and-criminal-charges 

S. Rep. No. 116-XX, vol.2 (2016). https://www.intelligence.senate.gov/sites/default/files/documents/Report_Volume2.pdf
