If the American electorate is not careful, a new form of political art may trump election integrity and workers’ rights
On August 18, an image of Taylor Swift in an Uncle Sam costume endorsing Donald Trump materialized on Trump's Truth Social account. The same day, Trump posted an image to his X (formerly known as Twitter) page of his Democratic opponent, Kamala Harris, speaking at the Democratic National Convention, with Chicago's United Center draped in communist paraphernalia. Months prior, in March, Trump shared an image of himself praying, bathed in sunlight.
None of it was real.
Swift has not endorsed any candidate in the 2024 election. Last week’s Democratic National Convention did not present imagery of a hammer and sickle as Harris accepted her party’s presidential nomination. And Trump did not pray in that sunlit room – because that room does not exist. All three of these images were AI-generated political content.
This is the first presidential election in American history where high-quality AI generation tools are widely available to consumers and voters. For many, social media feeds have already been flooded with fake images, phony videos and fraudulent audio recordings of notable politicians. As their supporters embrace these tools, political campaigns must shun them completely. This election cycle, candidates seeking political office must not use AI art at all.
[Related: The Future: Art needs protection from AI’s undermining of human creativity]
Trump’s own posts plainly highlight why: all AI art is inherently a fabrication, presenting an altered version of reality. There are few limitations on the output of many consumer-grade image generation tools, allowing any situation – real or otherwise – to become an infinitely shareable JPEG in just a few keystrokes. Without providing context or otherwise indicating that the images present a false reality, political campaigns can spread misinformation more easily and effectively than ever before.
The most disturbing aspect of AI-generated images is that they can be invisible to the average person. A University of Waterloo study found that when 260 participants were tasked with sorting a set of photos based on how they were created, only 61% of participants were able to distinguish the AI-generated images from the real photographs. In a social media feed – where someone may only glance at an image briefly while skimming through other content – the chance of detecting a fabricated image diminishes further, increasing the risk that potential voters mistake an AI-generated image for reality.
If a political candidate or other public figure intends to deceive their audience, AI-generated art is an alarmingly effective way to do so. In cases where an image is posted as satire and never meant to misinform, that context may disappear as the photo is shared and reposted. When even a well-informed individual can fail to recognize an AI fabrication, it is irresponsible to expect the entire electorate to understand these digital nuances as the posts disseminate away from the source.
Admittedly, these issues mainly apply to photo-realistic AI images. A campaign could potentially use AI to generate a political cartoon that viewers would be less likely to interpret as depicting real events. Precedent has shown campaigns are more interested in using photo-realistic AI generations, but illustrations and cartoons are viable outputs that campaigns may consider as alternatives.
[Related: Editorial: As AI advances, instructors should adjust teaching methods to combat misuse]
But this judicious approach is not enough. Even if a politician or their campaign took every precaution – generating a factually-correct cartoon and properly watermarking it as AI-generated – using this art would still be morally and ethically wrong. At some level, the intersection of current AI tools and politics will always undermine the rights of a politician’s constituents.
One does not need to be a Luddite to see how these tools undermine copyright protections and take advantage of artists’ work. AI generation tools are built on the backs of human artists’ portfolios, sucking up countless photographs and illustrations into an opaque black box. The artists whose work was pilfered are not financially compensated for their nonconsensual contributions, nor are they credited. This unethical practice born from convenience – it is easier to ask for forgiveness than permission, after all – may also be illegal, as several lawsuits have alleged. It is irresponsible, if nothing else, to use these tools until the law catches up to meet the moment.
Of course, this potential to misinform voters and plagiarize human artists is not unique to AI-generated art. All art is a facsimile of the real world, and will never perfectly mirror reality even with truth-seeking intentions. Furthermore, the Trump campaign could just as easily ask a human painter to craft Swift’s forged endorsement or Harris’s communist crowd. The end result – a misleading and factually erroneous image – would effectively be the same, no matter who or what creates the image.
But where these old-fashioned fakes took time and money to produce, AI tools offer the opportunity to mass-produce falsehoods without a second thought. In this way, AI art is unfortunately a fantastic tool for misinformation. Even with good intentions, it has the potential to irresponsibly bend reality for the average voter. Likewise, using these tools instead of hiring human artists is an ethically-dubious practice. Although these issues may be addressed with time, the variables at play in American politics – power-hungry politicians, an impressionable electorate and behind-the-times laws – are not prepared to experiment with this pervasive technology amid this election cycle.
Trump has opened the door to a deeply-flawed political tool. Hopefully, the time in the limelight for AI political art will be swift.