Key takeaways

  • Deepfake technology is being used to create fake videos in everything from pornography to political speeches.
  • As the technology becomes more sophisticated, it will be increasingly difficult to tell fake videos apart from real ones.
  • While these manipulations could have immense ramifications for society, businesses too could be at risk, and preparing now is necessary for protection.

In a speech that John F. Kennedy never gave, he was to speak of America’s steps “to carry our message of truth and freedom to all the far corners of the earth.”1

Recently, an initiative by The Times recreated the 22-minute speech that JFK was meant to give in Dallas, in his own voice, using artificial intelligence.2 It is, they say, the ‘unsilencing’ of JFK, and listeners can hear it as it would have been delivered had he not been assassinated that day.3

Is it ironic that his message of truth is somewhat untruthful, recreated as it has been from an event that never actually happened?

At the very least, it’s a question that society and business will have to grapple with as the manipulation of deepfakes – computer generated replications of people saying and doing things they didn’t do – continues to grow in sophistication and frequency.

Deep truth

There is a blurring of reality happening in media, and it isn’t just about fake news. In China, the world’s first AI news anchor was unveiled – a replica of a human newsreader, Xinhua’s Qiu Hao.4 Qiu’s digital double can deliver news 24/7 from anywhere his image can be superimposed, reading out whatever text it is fed.

On the other end of the spectrum, deepfakes – fake videos concocted from real ones – emerged murkily from the online forum Reddit. Using machine learning, specifically a form of deep learning (hence the ‘deep’), AI has learnt to generate new data from old. The technique was first used to create fake pornographic videos by superimposing celebrity faces on adult film stars’ bodies.5

Soon enough, free software appeared that allowed anyone to make the videos, regardless of technical aptitude. Since then, presidents have been morphed into delivering statements they didn’t make, movie stars de-aged or brought back from the dead for films (one of the rare legitimate uses of deepfakery), and Nicolas Cage inserted into every movie known to man.6 The genre is only getting more sophisticated, moving from lip syncing to whole-body swaps.


Real concerns for society

We’ve all heard the adage ‘seeing is believing’.

But as Eric Goldman, a Santa Clara University professor, recently told The Verge, “It absolutely bears repeating that so much of our brains’ cognitive capacities are predicated on what we see. The proliferation of tools to make fake photos and fake videos that are indistinguishable from real photos and videos is going to test that basic, human capacity.”7

With fake news anchors reporting real news, and real politicians seemingly presenting fake news, the implications for society of the proliferation of these manipulations could be immense. In an era where trust is ever more important, how will we extend it to what we see?

Journalists are already sensing the potential threat. The Wall Street Journal has launched its own task force of editors trained in deepfake detection.8 Others, such as the Australian Broadcasting Corporation, are teaching their audiences how to identify doctored videos.9

The ability to tell truth from fiction is essential for understanding and reacting to the world around us – and it can be required quickly in critical moments. Politics and the potential for war, history and human rights, justice and its abuses of power – all could be irreparably altered by one convincing fake.

The business implications of fakery

On a less global but still potentially dramatic scale, individual businesses too will need to be aware of the implications of deepfake videos. In an age of shareholder activism and corporate machinations, it is not hard to envision video manipulation being used for brand and reputational damage by competitors, ex-employees (or employees’ exes, as with revenge porn) or professional scammers.

While companies could spend time educating staff to spot fake videos – and there are tells, such as odd blinking patterns, strange continuity, pixel and metadata manipulation, and metallic-sounding audio – the technology will continue to improve, to the point where it will be virtually impossible to identify a fake without in-depth forensics.
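To make one of those tells concrete, here is a toy sketch of a ‘strange continuity’ check: score how much each frame differs from the one before it, and flag transitions that are statistical outliers. This is purely illustrative – the synthetic ‘video’ and threshold are invented for the example, and real forensic tools rely on far richer signals than raw pixel differences.

```python
import random
import statistics

def continuity_scores(frames):
    """Mean absolute pixel difference between consecutive frames."""
    scores = []
    for prev, cur in zip(frames, frames[1:]):
        diffs = [abs(a - b) for a, b in zip(prev, cur)]
        scores.append(sum(diffs) / len(diffs))
    return scores

def flag_discontinuities(scores, z=3.0):
    """Indices of transitions whose score exceeds mean + z * stdev."""
    mean = statistics.fmean(scores)
    stdev = statistics.pstdev(scores)
    return [i for i, s in enumerate(scores) if s > mean + z * stdev]

# Synthetic example: 100 near-identical frames of 64 pixels each,
# with an abrupt brightness jump simulating a splice at frame 50.
random.seed(0)
video = [[128 + random.gauss(0, 2) for _ in range(64)] for _ in range(100)]
for frame in video[50:]:
    for i in range(64):
        frame[i] += 80

scores = continuity_scores(video)
print(flag_discontinuities(scores))  # the splice transition (index 49) stands out
```

A genuine edit (a scene cut) produces the same spike, which is exactly why such heuristics generate false positives and why serious detection needs deeper forensics.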

Technologies are being developed that may help, both in spotting altered video and in verifying the veracity of a video as it’s taken.10 But the faking technology is ever changing, and it will be difficult for detection to keep up.

PwC UK’s Arnav Joshi, an expert in data ethics and digital trust, believes that for now, business needs to be practical in its approach. He suggests that companies keep in mind the three following principles:

  1. If something looks too good (or bad) to be true, it probably is. Exercise caution and diligence, and don’t be quick to believe things at face value, especially where controversial or suspicious. Extend fact-checking to video.
  2. Invest in authenticity technology for your video content. For example: trust seals, digital rights management, video encryption, blockchain-based verification and two-factor authentication.
  3. Rely on official channels of communication. For both accessing and posting content, look to official websites, verified YouTube channels and Twitter handles. Of course, these can still be hacked, so keep rule number one in mind regardless.
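As a minimal sketch of the second principle, content verification can start with something as simple as comparing a file’s cryptographic hash against a value published through an official channel. The scenario below is hypothetical, and production systems use signed manifests, watermarking and DRM rather than bare hashes – but the underlying idea is the same: any alteration to the content changes the digest.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of raw content (e.g. a downloaded video file)."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, published_digest: str) -> bool:
    """True only if the content matches the digest published officially."""
    return sha256_of(data) == published_digest

# Hypothetical scenario: a company posts the digest of its genuine
# announcement video; anyone receiving a copy can check it.
official_clip = b"frame-data-of-the-original-announcement"
published = sha256_of(official_clip)       # digest posted on the official site

tampered_clip = b"frame-data-of-a-doctored-announcement"
print(verify(official_clip, published))    # True
print(verify(tampered_clip, published))    # False
```

Note this only proves a copy matches what was published – it says nothing about whether the original itself was authentic, which is why the first and third principles still apply.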

Trust no one?

Deception-identifying software may eventually help us distinguish fact from fiction at scale, but we will probably have to get used to the idea that we need to check. As Goldman puts it in his Verge article, “I think we have to prepare for a world where we are routinely exposed to a mix of truthful and fake photos and videos.”

It will undoubtedly become commonplace – strange, even, to think of a time when seeing something, at least outside of entertainment, meant it could automatically be assumed authentic.

The Guardian’s Alex Hern believes we may already be there, deciding after a year of observing the phenomenon that, “deepfakes aren’t dangerous because they’ll change the world. They’re dangerous because the world has already changed, and we’re less ready to tackle their reality distortion than we have been for decades.”11

Whether in a new era, or still approaching one, it will remain true that everyone, from individuals to businesses and greater society, will need to be vigilant in their consumption, and production, of video content.

Contributor

Amy Gibbs

Dr Amy Gibbs is a manager at PwC Australia, and the global content editor for Digital Pulse.
