With the rise of deepfakes, will we ever be able to trust AI?


“I’ll believe it when I see it” is an expression that suggests our own eyes can discern fact from forgery.

But humans have always been flawed at spotting fakes, and the task is getting harder by the day.

Deepfakes, digitally altered images created by deep-learning neural networks that make convincing replicas, can be almost impossible to distinguish from the real thing. These pictures and videos, often created with malicious intent, are even beginning to trick computers.

So how on earth can we actually trust what we see?

The Australian Cyber Security Centre has already flagged local councils as targets because of the essential services they deliver. Amid the rise of deepfakes, Australians are asking whether AI, and the content it generates, can really be trusted.

And so they should.

The technology that underpins deepfakes does have benign, even entertaining uses. It allows people to swap their faces somewhat convincingly with celebrities or insert themselves into blockbuster movies.

We’ve seen an AI-generated Pope in a puffer jacket, Elon Musk protesting in New York and Donald Trump resisting arrest. But there are far more sinister uses too, such as a new reality show that uses deepfake technology to create “evidence” that contestants’ partners are cheating on them with very attractive people. A cruel twist on a trusted dating show format.

Deepfakes have also opened the door for almost anyone to fall victim to revenge porn.

Celebrities such as Scarlett Johansson have confronted the unsettling trend of their faces being digitally superimposed onto explicit content, a dark facet of fame. Everyday individuals suffer dire consequences too: ruined reputations, shattered careers, family rifts, and the looming threat of depression and anxiety. Meanwhile, AI-generated content is undermining trust in traditional media.

During the pandemic, conspiracy theorists harnessed AI-generated content, escalating from the fringes of the internet to street protests. In Australia, the same techniques are being used to amplify the “no” campaign in the Voice referendum, adding to the noise of misinformation.

Australia, like other jurisdictions, is grappling with how to regulate emerging technologies. In the US, only a handful of states address deepfakes, through election and child exploitation laws, and there is no federal policy. The EU has proposed a risk-based approach to AI regulation. In Australia, with the consultation period now closed, policymakers are deliberating the next steps.

Regulation is crucial, but legislation must provide oversight without hindering innovation. Australia must stay competitive globally, keeping pace with nations investing heavily in technological advancement.

To fall behind would pose a significant threat to Australia’s safety. Any policy would need to focus on the misuse of deepfakes and deceptive content while recognising the vast opportunities the technology presents.

Whenever a major technological disruptor hits the scene, it is met with catastrophising, and AI is no different.

The introduction of computers came with predictions of entire careers being wiped out. But those doomsday predictions never materialised; instead, entirely new occupations, ones that could never have been imagined, grew from the revolution.

The same will happen as we learn to harness the vast potential of artificial intelligence, safely and bravely.

Now is the time to advance our technologies for spotting forgeries, bolster our identity verification tools and strengthen digital authentication standards.

Because while pretending is powerful… there’s nothing like the real thing.

By Ches Rafferty, Scantek CEO.

First published in Government News, August 7, 2023.
