This was a long and technical process carried out by skilled university researchers. Despite significant advances in technology since then (see some recent deepfake examples here), this type of hyper-realistic manipulation remains resource-intensive and technical, according to Le Roux.

On the other end of the spectrum, “cheap fakes” are quicker and less resource-intensive. They can be similarly misleading, though less realistic. Cheap fakes range from videos taken out of context to simple edits such as speeding up or slowing down video or audio to misrepresent events. Cruder face-swapping and lip-syncing methods also fall into this category.

According to Le Roux, a fake being completely convincing isn’t as important as you might think. Take this video, shared on social media in March 2023, which shows South African president Cyril Ramaphosa appearing to announce controversial changes to tackle the country's energy crisis. Despite being unrealistic, this cheap fake went viral and appeared to be believed by some social media users. (See our fact-checking report on the video here.)

Le Roux told Africa Check that, at the moment, cheap fakes are a bigger disinformation problem than deepfakes. It takes a lot of time, effort and resources to make a really convincing deepfake, and if people still identified it as a fake after all that, the investment would be wasted. It is far easier to produce large numbers of quick cheap fakes. Although individually less convincing, they provide many more opportunities to fool people.

Because of various psychological mechanisms, something may not need to be convincing for it to be shared as real on social media. Some research suggests that people are quick to share false information if it confirms or fits with their existing beliefs. Other studies suggest that the social media environment itself may distract people from prioritising the accuracy of what they share.

Detecting AI … with AI?

With all the talk of AI-powered disinformation campaigns and their potentially catastrophic impact on society, companies are scrambling to develop effective detection tools to identify AI-generated images, videos, and text.

Detection software is trained, using machine learning, to distinguish between real and AI-generated content. These tools, as the magazine Scientific American put it, could, in theory, perform better than people, as “algorithms are better equipped than humans to detect some of the tiny, pixel-scale fingerprints of robotic creation”.
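To make the idea concrete, here is a minimal, heavily simplified sketch – not any vendor’s actual detector – of how such a tool might be trained: extract simple pixel-level “residual” statistics from labelled real and AI-generated images, then fit a binary classifier on them. The file names and the choice of features are illustrative assumptions only; real detectors use far larger models and datasets.

```python
# A minimal, illustrative sketch of training an AI-image detector.
# Assumptions: a handful of hypothetical labelled image files and a crude
# hand-made feature; production detectors learn these "fingerprints" with
# large neural networks instead.

import numpy as np
from PIL import Image
from scipy.ndimage import median_filter
from sklearn.linear_model import LogisticRegression

def residual_features(path: str) -> np.ndarray:
    """High-frequency residual statistics - a stand-in for the pixel-scale
    traces that real detectors learn to recognise."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
    residual = img - median_filter(img, size=3)   # keep only fine-grained noise
    return np.array([residual.mean(), residual.std(),
                     np.abs(residual).mean(), (residual ** 2).mean()])

# Hypothetical labelled training data: 0 = real photo, 1 = AI-generated.
paths = ["real_001.jpg", "real_002.jpg", "gen_001.png", "gen_002.png"]
labels = np.array([0, 0, 1, 1])

X = np.stack([residual_features(p) for p in paths])
clf = LogisticRegression().fit(X, labels)

# Score a new image: the output is a probability, not a verdict.
print(clf.predict_proba(residual_features("unknown.jpg").reshape(1, -1)))
```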

But they still have major limitations. For example, OpenAI, creator of the popular ChatGPT text generator, admitted that even its own detection tool had a dismal 26% success rate in identifying text that had been generated by an AI tool.

Tools for detecting AI-generated images and videos don’t seem much more promising. Experts say that because image detectors are trained to identify content from one specific generator, they may not be able to detect content produced by other algorithms. They are also prone to false positives, where real images are wrongly labelled as AI-generated.

Another major limitation is that these detectors have difficulty identifying AI-generated images that are low-quality or have been edited. When an image is generated, the fine-grained, pixel-level information it contains carries clues about its authenticity. But if that information is changed, for example by lowering the resolution or adding grain, even images that look obviously fake to humans can fool the software.
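As an illustration, and under the same assumptions as the sketch above (a hypothetical generated image file), the following shows how trivially those pixel-level clues can be destroyed: downscaling the image and adding grain overwrites exactly the fine-grained statistics a detector relies on.

```python
# Toy illustration of defeating a pixel-level detector with simple edits.
# Assumption: "gen_001.png" is a hypothetical AI-generated image.

import numpy as np
from PIL import Image

img = Image.open("gen_001.png")

# Lower the resolution, then scale back up - fine pixel statistics are lost.
small = img.resize((img.width // 4, img.height // 4), Image.BILINEAR)
degraded = small.resize(img.size, Image.BILINEAR)

# Add film-like grain on top, further masking any generator fingerprint.
arr = np.asarray(degraded).astype(np.float32)
noisy = np.clip(arr + np.random.normal(0, 8, arr.shape), 0, 255).astype(np.uint8)
Image.fromarray(noisy).save("gen_001_degraded.png")
```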

But the core problem, some experts say, is that the very nature of detecting AI-generated content means that it will always be a game of cat and mouse. Detection tools will always need to be reactive, constantly adapting to advances in image generators.

The ‘liar’s dividend’ – when doubt is all you need

Lack of public awareness of deepfake technologies has been identified as a challenge in the fight against disinformation. There is little research on this in Africa, but a 2023 survey of 800 adults in five African countries, including South Africa, found that around half of respondents were unaware of deepfakes.

According to KnowBe4, the cybersecurity awareness company that conducted the survey, participants had some awareness of visual disinformation, with 72% saying they did not believe every photo or video they had seen was real. However, the company also pointed out that the remaining 28% “believed that ‘the camera never lies’”. This suggests a possible vulnerability to this type of deception.

On the flip side, public awareness poses its own challenge. The dilemma, identified as the liar’s dividend in a 2018 research paper, is this: the more aware people are of AI and its ability to generate convincing content, the more they might doubt the authenticity of something real.

“The fact we know AI generation technology is there means it’s a useful excuse,” Le Roux told Africa Check.

The concept is not new – we saw a similar tactic play out in the political arena during Donald Trump’s US presidency. Almost any piece of information he deemed unflattering was denounced as “fake news”. As the BBC wrote in 2018: “What began as a way to describe misinformation was quickly diverted into a propaganda tool.”

The concept of evidence

The liar’s dividend further raises the bar for what counts as convincing evidence – or undermines the concept of “evidence” altogether.

This isn’t theoretical, either. At a conference in 2016, Elon Musk, billionaire tech entrepreneur and chief executive of Tesla Motors, claimed that Tesla’s Model S and Model X self-driving cars could “drive autonomously with greater safety than a person. Right now”. A video recording of this statement has been available on YouTube since 2016.

Two years later, a person was killed when a Model X car in autopilot mode crashed into a safety barrier. The victim’s family sued Tesla, with lawyers claiming he was killed because the car’s autopilot mode failed, citing Musk’s 2016 statement about its safety.

In response, Musk’s lawyers tried to cast doubt on the accuracy of the statement, saying Musk did not remember making it. They cited examples where deepfakes had been made using Musk’s likeness before. In court, Tesla reportedly said: “[Musk], like many public figures, is the subject of many ‘deepfake’ videos and audio recordings that purport to show him saying and doing things he never actually said or did.”

The judge in the case was not convinced and expressed concern, saying that this kind of argument could allow famous people to “avoid taking ownership of what they did actually say and do”.

Similar examples of doubt have emerged in politics. In January 2019, a small group of soldiers in the Central African state of Gabon attempted a coup, motivated in part by the poor health of president Ali Bongo Ondimba. He had suffered a stroke the previous year, and a lack of public appearances or details about his health in the following months sparked rumours that he was unfit to govern.

When Bongo finally released a video addressing the nation on New Year’s Day, something was off. The president looked very different from previous appearances, the Washington Post noted, and barely moved his face.

While these features are consistent with the appearance of someone who has had a stroke, they also led some to speculate that the video was not authentic. Opposition members reportedly called the video a deepfake, and social media users suggested it could have been created by “machine learning software”. This contributed to a general state of confusion and controversy, culminating in a group of soldiers taking control of the national radio station before being overpowered.

A balancing act

Deepfakes, cheap fakes, and everything in between will all become part of the disinformation landscape on the continent, especially as AI technologies become more accessible and less resource-intensive. When they do, the public will have to contend with an erosion of the concept of evidence as we know it.

But it’s also important to balance these fears with an awareness of current threats. AI is one in a long list of tools used to mislead. And in Africa Check’s experience, it is overwhelmingly outweighed by good old-fashioned forms of deception, such as images and videos taken out of context or crudely manipulated visuals.

While we keep an eye on developments in AI and hope we don’t have to update this report too soon, this is where our current focus as fact-checkers should be.

Published by AFRICA CHECK