By now you’ve heard that 2024 is the ultimate election year. With over 60 countries around the world heading to the polls, concerns about generative artificial intelligence (AI) and its effects on electoral integrity have received plenty of media coverage.
As in other places, fears in South Africa centred on the use of “deepfakes” to share “highly realistic fake videos or audio recordings”, as the Electoral Institute for Sustainable Democracy in Africa put it.
As fact-checkers, we were ready to drown in a sea of president Cyril Ramaphosa impersonations and AI-generated mystery ballot boxes. The reality was a little more complicated.
Here’s what we found.
Impersonations are often really bad fakes – but they are engaging
Of all the AI-generated content we fact-checked during the election period, the most common type was the deepfake impersonation of politicians and celebrities. By this, we mean videos created with tools that use a form of artificial intelligence to make it look like an individual said or did something they didn’t.
For example, tools can be used to replicate someone’s voice to create new sentences, change their facial expressions, and swap out facial features or the entire face.
Readers may have seen the deepfake of former US president Barack Obama back in 2018 or, more recently, scores of hyper-realistic celebrity impersonations. But the videos circulating in South Africa had little in common with these, aside from featuring prominent public figures.
One of the first examples we saw, which began circulating in August 2023, was an impersonation of Ramaphosa addressing the nation in a style reminiscent of the so-called “family meetings” of the Covid era, announcing a controversial plan to remedy the country’s energy crisis. We noted that while the video was unconvincing, it was engaging and inflammatory enough to spread widely on social media.
This trend would only continue in the lead-up to the May 2024 election. Later examples include a clip of former US president Donald Trump endorsing former South African president Jacob Zuma’s uMkhonto weSizwe or MK Party. The video was shared on the platform X by Duduzile Zuma-Sambudla, Zuma’s daughter, who has since been implicated in several election disinformation campaigns.
In another video, an impersonation of US president Joe Biden threatened to impose sanctions on South Africa if the African National Congress (ANC) won the election. In a particularly sensational example, Biden laughed along with a controversial song sung by the political party the Economic Freedom Fighters (EFF).
In a clip taken from footage of an old interview with the US rapper Eminem, he appeared to denounce the ANC in support of the EFF. Glynnis Breytenbach, a member of the former opposition party the Democratic Alliance, was also reportedly impersonated in an audio clip, which she called “not even a good fake”.
Like the earlier iterations, these videos were sloppily produced impersonations that many would be able to identify as fakes. They showed the same static, glitchy subjects with robotic speech and poorly replicated accents, despite substantial improvements in deepfake technology in recent years. Some even retained the watermark indicating which AI tool they had been made with.
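Visible watermarks aside, some generators also leave machine-readable traces in a file’s metadata, though these are easily stripped and their absence proves nothing. As a rough illustration, here is a minimal sketch in Python: the marker list is an assumed, non-exhaustive example, and the filename is hypothetical.

```python
# A minimal sketch, not a production detector: scan an image's embedded
# metadata for strings that popular generative tools are known to leave
# behind. Many tools strip metadata entirely, so finding nothing here
# proves nothing about the image's origin.
from PIL import Image

# Illustrative, non-exhaustive list of generator fingerprints (assumption).
GENERATOR_MARKERS = ["stable diffusion", "dall-e", "midjourney", "firefly"]

def find_generator_markers(path: str) -> list[str]:
    img = Image.open(path)
    # PNG text chunks and similar metadata live in img.info;
    # EXIF tag 0x0131 ("Software") often names the producing tool.
    fields = [str(v) for v in img.info.values()]
    exif = img.getexif()
    if 0x0131 in exif:
        fields.append(str(exif[0x0131]))
    blob = " ".join(fields).lower()
    return [m for m in GENERATOR_MARKERS if m in blob]

if __name__ == "__main__":
    hits = find_generator_markers("suspect.png")  # hypothetical file
    print("Possible generator markers:", hits or "none found")
```

A check like this is only a first pass: the burned-in watermarks we saw on the election videos sit in the pixels themselves, not the metadata, and need visual inspection.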
It seems these videos didn’t need to appear real to reach far and wide across our social media feeds. They were viewed hundreds of thousands of times.
Does election disinformation even need generative AI?
Apart from these impersonations and some isolated examples of generated images, there just wasn’t as much AI-powered false information as we’d expected, and especially not from coordinated campaigns.
This could be attributed to a general increase in public awareness of generative AI, or to better detection methods, making would-be disinformers less inclined to use these tools. But that’s hypothetical.
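Still, it’s worth a sense of what automated detection can look like. One line of research looks for statistical artefacts that image generators leave behind in the frequency domain. Below is a toy sketch loosely inspired by that idea, not a validated detector: the 0.75 cutoff is an illustrative assumption, and the score only means anything against a baseline of known-real images.

```python
# Toy heuristic: measure the share of an image's spectral energy in the
# high-frequency band. Some generators distort this band; a real detector
# would train a classifier on such features over a large labelled dataset.
import numpy as np
from PIL import Image

def high_freq_energy_ratio(path: str, cutoff: float = 0.75) -> float:
    """Share of spectral energy above `cutoff` of the normalised radius."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    power = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = power.shape
    yy, xx = np.ogrid[:h, :w]
    # Distance of each frequency bin from the spectrum centre, normalised
    # so that 1.0 is the edge of the largest inscribed circle.
    radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    inside = radius <= 1.0
    return power[inside & (radius > cutoff)].sum() / power[inside].sum()

print(high_freq_energy_ratio("suspect.png"))  # hypothetical file
```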
A simpler explanation for the scarcity is that the false information ecosystem in South Africa is surviving, even thriving, just fine without AI. This aligns with what we saw more broadly while covering the election.
Most of the false information we saw during the election period had nothing to do with AI. Instead, we saw real images and videos shared out of context, the explicit fabrication of events, and poorly executed edits.
For example, there were claims of vote rigging based on zero actual evidence, and bogus “leaked” memos or correspondence claiming to expose political support or conspiracies. News headlines and rules about electoral processes were entirely fabricated.
Audiovisual content was posted with misleading or outright false captions and descriptions, including a flurry of photos and videos of voting ballots that supposedly showed election fraud but were, in fact, harmless.
One series of posts tried to pass off a May Day gathering in the Caribbean as an MK Party rally, and claimed that real photos of forged identity documents seized by police showed documents created by the EFF to gather migrant votes for the party.
These and a range of other examples, largely in the days just before and after voting day, point to the same conclusion – the false information machine ain’t broke, so why fix it?
Narratives of vote rigging, foreign interference and others that pose a serious threat to electoral integrity have flourished without the need for generative AI.
AI-powered deception is a moving target
As with any fast-evolving technology, there was a great deal of uncertainty around the scale and form AI-powered deception would take. Generative AI, and particularly the dystopian allure of deepfakes, received much of the attention, and many of us fighting online disinformation also got caught up in the hype.
But AI technology has implications well beyond the subset of deepfake impersonations. Micro-targeting for political campaigning, chatbot disinformers and the broader role of social media algorithms in shifting perception around elections received less coverage, not to mention the flipside of AI-powered false information: the ability of politicians to cast doubt on the authenticity of real events.
More research, better access to data from social media platforms, and collaboration between fact-checkers, disinformation researchers, tech companies and governments are needed to arrive at a meaningful understanding of the issue.
With this, we can get beyond an analysis that is at worst speculative and at best retrospective.
Published by Africa Check