People are seen engaging with interactive exhibits during a media preview of ACMI's major winter exhibition Beings by Universal Everything at ACMI, Australia's national museum of screen culture, in Melbourne, Tuesday, May 21, 2024. (AAP Image/James Ross)

How to spot AI-based visual disinformation

October 23, 2024

Generative Artificial Intelligence (AI) is rapidly changing the information environment, giving everyone the ability to spin up text and hyperrealistic images, videos and audio to order within seconds.  

These technological advances have the potential to revolutionise the way we live and work, but they are also accompanied by risks and challenges. 

Content created by sophisticated generative AI tools is becoming near-impossible to detect, and bad actors are seizing on the opportunity to create disinformation at an unprecedented speed and scale. 

When images of Pope Francis wearing a puffer jacket went viral in early 2023, the surprising fashion choice fooled plenty of social media users. 

Midjourney, the generative AI tool used to create the picture, had only been around for nine months but had already achieved a convincing level of realism. The picture was posted to Reddit by an artist with the username trippy_art_special, and from there it spread around the world. 

Since then, the technology has improved even more, and while people continue to make art, others are generating AI images for more nefarious reasons. Scammers and spammers are making money from fake pictures of elaborate sand sculptures, wood carvings and homemade cakes that gain thousands of clicks and shares on social media. 

AI-generated video and audio clips are similarly being used by criminals seeking to make money through deception. 

Political propagandists, meanwhile, have turned to AI-generated images as a quick and easy way to promote fake endorsements of their preferred candidate or criticise the inaction of opponents.

Now, more than ever, consumers must rely on critical thinking and consider the broader context of the content they consume in order to determine if it is reliable. The foundations of media literacy include considering who published the content, and their motives for doing so.

The good news is everyone can learn how to avoid being influenced by problematic AI content through a combination of critical thinking and some simple research skills. 

Based on years of experience in debunking disinformation and our strong connections to other experts in the field, AAP FactCheck has compiled a list of practical tips for spotting audio and visual disinformation, including that made or manipulated with AI tools.

Tips for spotting AI images, videos and audio

1. Does the image or video ring true?

If the image or video in question seems too incredible to be true, then it needs further investigation. 

In particular, does what you see in the image or video match with what you would expect to see and hear? If the subject is a politician or other public figure, is what they are saying consistent with their usual style of communication or policy position? 

Dr T.J. Thomson, a visual communication and misinformation expert at RMIT University, told AAP FactCheck that people should be extra wary of content intended to spark a strong emotional reaction. 

“Evocative visuals aren’t always false, of course, but ones that are seeking to mislead or deceive are often emotive in nature or are shared with rousing accompanying text,” he says.

2. Check the source. 

The source of the content is an important clue to its authenticity. 

Does it come from a trustworthy news or government source? If the source of the content is unfamiliar and doesn’t have a history of reliability, tread cautiously.

If reputable people or organisations are sharing the content, that should improve your confidence in its authenticity – but keep in mind that even professionals get it wrong sometimes. 

3. Look for visual inconsistencies.

A shrinking subset of AI images includes visual clues that give away their computer-generated origins. Check the image or video for inconsistencies and mistakes. 

Early versions of AI models struggled to accurately reproduce human hands and limbs. Letters and numbers in AI images are often garbled and unreadable.

Textures and shadows have also proven difficult for computers to replicate.

Scrub through videos frame by frame to see if any anomalies jump out. In this deepfake video of Barack Obama, for example, his wrinkles appear and disappear frame by frame. 
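
For readers comfortable with a little code, frames can also be exported as still images for closer side-by-side comparison. Below is a minimal sketch using the open-source OpenCV library; the video file name is a placeholder.

```python
# A minimal sketch of exporting video frames for manual inspection,
# using the OpenCV library (pip install opencv-python).
# "suspect_video.mp4" is a placeholder file name.
import cv2

cap = cv2.VideoCapture("suspect_video.mp4")
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video (or unreadable file)
    # Save every 10th frame as a still image so anomalies such as
    # flickering wrinkles or shifting shadows are easier to compare.
    if frame_index % 10 == 0:
        cv2.imwrite(f"frame_{frame_index:05d}.png", frame)
    frame_index += 1
cap.release()
```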

As AI models improve, it will become increasingly difficult to distinguish a fake image from a real one. That’s why it is important to consider how the image was produced and shared, not just what is in front of your eyes. 

4. Is the content presented out of context?

Consider if the image or video could have been manipulated in the production or editing process to change its context. 

Could the scene have been cropped or selectively chosen to misrepresent what is happening? 

In late 2023, for example, a video of a Palestinian prisoner who claimed to have broken both hands was selectively edited to suggest he showed no such injuries when boarding a bus. 

Similarly, a video’s playback speed can be altered to mislead viewers. For example, some social media users shared a video of Kamala Harris that had been slowed down to make it appear she was slurring her words. 

Cheap, easy-to-use editing tools have made it easy for anyone to adapt visual and audio content. 

5. Try a reverse image search. 

Reverse image search tools like Google Images and TinEye can be used to find other versions of the image online. 

Doing this can help you assess if the image has been published before in a different context or with a different description. 

AAP FactCheck has previously used reverse image searches to debunk fake images falsely purporting to show a “rare breed” of pink peacock, British farmers blocking immigrant boats and Taylor Swift dressed in a satanic costume.

Reverse image searches can also sometimes be helpful when assessing the authenticity of a video. Take a screenshot of key parts of a video and do a reverse image search to see where else it might have been published.
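
If you check images often, the searches can also be launched from a short script. The sketch below simply opens reverse image search pages in your default browser; the query URL patterns are assumptions based on how TinEye and Google Lens currently accept parameters, and they may change.

```python
# A minimal sketch that opens reverse image searches for an image URL
# in the default browser. The URL patterns below are assumptions and
# may change; the image URL is a placeholder.
import webbrowser
from urllib.parse import quote

image_url = "https://example.com/suspect-image.jpg"  # placeholder

searches = {
    "TinEye": f"https://tineye.com/search?url={quote(image_url, safe='')}",
    "Google Lens": f"https://lens.google.com/uploadbyurl?url={quote(image_url, safe='')}",
}

for name, url in searches.items():
    print(f"Opening {name} ...")
    webbrowser.open(url)
```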

6. Use AI detection tools. 

Numerous AI detection tools are available online, including AI or Not, Hugging Face, Winston AI and Illuminarty.

Simply upload the image in question and the tool will tell you if it thinks it was made by a human or a computer. 

AI detection tools can provide a useful second opinion on the authenticity of an image, but they are not always accurate, so can’t be relied on as a definitive source of truth. 
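
As an illustration, here is a hedged sketch of querying one open-source detector through the Hugging Face transformers library. The model named below is a community-trained example chosen for illustration, not an endorsement, and its scores carry the same caveats as any other detector.

```python
# A minimal sketch of getting a "second opinion" from an open-source
# AI-image detector via the Hugging Face transformers library
# (pip install transformers torch pillow).
# The model name is an assumed community example; accuracy varies and
# the output is probabilistic, not a definitive verdict.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="umm-maybe/AI-image-detector",  # assumed example model
)

# Accepts a local file path or an image URL.
for result in detector("suspect-image.jpg"):
    print(f"{result['label']}: {result['score']:.2%}")
```

Treat the resulting score as one signal among several, in line with the caveat above.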

7. Look for AI content labels. 

Some social media platforms, including Facebook and TikTok, have started identifying AI-generated or AI-edited content with a label. 

But while AI labels may help in certain circumstances, they will not stop the spread of fake imagery. 

Dr Thomson says people should keep in mind that the labels don’t work for all kinds of AI content, and that knowing AI was involved in the production or editing of an image doesn’t mean it wasn’t created to mislead or deceive. 

Additionally, Associate Professor Amy Dawel, a psychologist at the Australian National University, told AAP FactCheck that people who misuse AI to deceive others are unlikely to add an AI label to their images. 

8. Find out if fact-checkers have already debunked the image. 

Professional fact-check organisations like AAP FactCheck regularly investigate the authenticity of images and videos shared online. 

If you think an image or video might be fake, type a short description of the image into a search engine along with the phrase “fact check” to see if fact-checkers have already investigated it. 

For example, a search for the words “video Anthony Albanese investment platform fact check” should lead you to this fact-check debunking an AI-generated video of the Australian prime minister. 

You can also use Google’s dedicated fact-check search tool to find relevant fact-check articles about the topic in question. 
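
Google also offers a programmatic interface, the Fact Check Tools API, for the same kind of search. The sketch below shows a minimal query in Python; it assumes you have obtained a free Google API key, and the response field names, while based on the API's documented structure, should be verified against the current documentation.

```python
# A minimal sketch of querying the Google Fact Check Tools API
# (pip install requests). "YOUR_API_KEY" is a placeholder; field names
# in the response are assumed from the documented structure.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
query = "Anthony Albanese investment platform"

resp = requests.get(
    "https://factchecktools.googleapis.com/v1alpha1/claims:search",
    params={"query": query, "key": API_KEY},
    timeout=10,
)
resp.raise_for_status()

for claim in resp.json().get("claims", []):
    review = claim.get("claimReview", [{}])[0]
    print(claim.get("text", ""))
    print(f"  Rated '{review.get('textualRating')}' by "
          f"{review.get('publisher', {}).get('name')}")
    print(f"  {review.get('url')}")
```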

How are AI images made? 

AI is a field of science concerned with building computer systems that can perform tasks that would normally require human intelligence. 

AI systems often involve complex algorithms that learn and improve by analysing vast amounts of data. 

The technology is evolving rapidly, making AI software cheaper and easier to access. 

Numerous AI image generators are freely available online, including popular versions offered by Google, Microsoft and Adobe.

When a user enters a few keywords, the algorithm converts those instructions into an image. 

The algorithm works by analysing millions of real images and associated captions to interpret what it thinks the user is looking for. 

If the user isn’t happy with the resulting image, they can hone their keywords until they get a better result. 
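
To make that loop concrete, here is a minimal sketch of text-to-image generation using the open-source diffusers library. The model ID is one publicly available example (and an assumption; availability may change), and access to a GPU is assumed.

```python
# A minimal sketch of text-to-image generation with the open-source
# diffusers library (pip install diffusers transformers torch).
# The model ID is an assumed public example; many others exist.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed example model ID
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU is assumed; use "cpu" (slowly) otherwise

# The prompt is the "few keywords" described above; the model maps it
# to an image based on patterns learned from its training data.
image = pipe("a pope in a white puffer jacket, photo").images[0]
image.save("generated.png")
```

Changing the prompt and regenerating is exactly the honing process described above.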

“AI algorithms learn patterns, or what things typically go together from the data they are trained on,” says Assoc Prof Dawel, who has researched why AI faces seem so real.

She says the high volume of images processed by AI algorithms means AI-generated faces often fool people because they look very “average” with no stand-out characteristics. 

“The most alarming thing is that we don’t have very good insight into how poor we are at (mis)identifying AI people,” she says. 

“In fact, people are more confident when making more mistakes!”

AAP FactCheck thanks Dr T.J. Thomson from RMIT and Associate Professor Amy Dawel from the Australian National University for providing the tips and other content for this article.