IT Security Guru
Deepfake Videos: When good tech goes bad

To counter the threat of deepfakes effectively, we need to see much better data sharing

by The Gurus
June 17, 2020

By Ben Lorica, Chief Data Scientist at O’Reilly

More than a decade ago the leading UK investigative journalist Nick Davies published Flat Earth News, an exposé of how the mass media had abdicated its responsibility to the truth. Newsroom pressure to publish more stories, faster than the competition, had, Davies argued, reduced journalists to mere “churnalists”, rushing out articles so quickly that they could never check the truth of what they were reporting.

Shocking as Davies’ revelations seemed back in 2008, they look pretty tame by today’s standards. We now live in a post-truth world of Fake News and ‘alternative facts’, where activists don’t just seek to manipulate the news agenda with PR but use advanced technology to fake images and footage. A particularly troubling aspect of these ‘deepfake’ videos is their use of artificial intelligence to fabricate footage of people saying or doing things they never did, with almost undetectable accuracy.

The result is that publishers risk running completely erroneous stories – as inaccurate as stating that the world is flat – with little or no ability to check their source material and confirm whether it is genuine. The rise of unchecked fakery has serious implications for our liberal democracy and our ability to understand what’s truly going on in the world. And while technology has an important role in defeating deepfake videos, we all have a responsibility to change the way we engage with the ‘facts’ we encounter online.

Faking the news

The technology to manipulate imagery has come a long way since Stalin had people airbrushed out of history. Creating convincing yet fake digital content no longer requires advanced skills or a well-resourced (mis)information bureau. Anyone with a degree of technical proficiency can create content that will fool even the experts.

Take the footage of Nancy Pelosi that circulated in 2019, which was doctored to make her look incoherent and was viewed two and a half million times before Facebook took it down. The episode shows how social media is giving new life to the old aphorism that “a lie can go halfway around the world before the truth has a chance to put its boots on”.

The propagation of lies and misinformation is immeasurably enhanced by platforms like Twitter and Facebook that enable virality. What’s more, the incentives for creating fake content now favour malicious actors, with clear economic and political advantages for disseminating false footage. Put simply, the more shocking or extreme the content, the more people will share it and the longer they will stay on the platform.

Meanwhile, counterfeiters can manipulate the very tools being developed to detect and mitigate deepfake content. Just as the security industry inadvertently supplies software that can be misused for cybercrime, so we risk the emergence of a parallel media industry – one focused on obfuscation and lies.

Combating the counterfeiters

None of this is inevitable. There are plenty of advanced tools for detecting faked content, including machine learning algorithms that analyse an individual’s style of speech and movement, known as a “softbiometric signature”. Researchers from UC Berkeley and the University of Southern California used this technique to identify deepfakes, including face swaps and ‘digital puppets’, with at least 92% accuracy.
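The softbiometric idea is that each person has a measurable, stable style of movement and speech, and a clip whose style diverges from that profile is suspect. The following is a minimal sketch of that comparison, not the Berkeley/USC model itself: the feature names and threshold are invented for illustration, and a real system would extract hundreds of features from video automatically.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def looks_authentic(reference_profile, clip_features, threshold=0.9):
    """Flag a clip as suspect when its mannerism features diverge
    from the subject's known softbiometric profile."""
    return cosine_similarity(reference_profile, clip_features) >= threshold

# Toy feature vectors (hypothetical): blink rate, head-tilt variance,
# lip-sync lag -- a genuine clip tracks the profile, a fake drifts.
real_profile = [0.30, 0.12, 0.05]
genuine_clip = [0.29, 0.13, 0.05]
deepfake_clip = [0.05, 0.40, 0.30]

print(looks_authentic(real_profile, genuine_clip))   # close to the profile
print(looks_authentic(real_profile, deepfake_clip))  # diverges sharply
```

The design point is that the defender models the *person*, not the forgery technique, so the same profile helps catch face swaps and digital puppets alike.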

Technologies such as AI, machine learning and generative adversarial networks are, of course, crucial in the fight against deepfakes, but it is just as important that we all learn to think critically about the content we view.

Sadly (but necessarily) we all need to get better at questioning the provenance of videos, articles and imagery. In many cases, this can be as simple as never sharing content that we haven’t actually read or watched ourselves – something that six in ten of us admit to doing. But we also need to interrogate what we consume, for example by investigating metadata. If you watch a video titled “Boris Johnson in Sri Lanka, June 2014”, it’s well worth checking that he was actually there that month.
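One concrete metadata check is comparing a file’s embedded creation timestamp against the date its title claims. A hedged sketch: the code below assumes the metadata has already been extracted into a dictionary (a tool such as ffprobe can report a `creation_time` field), the sample values are invented, and it is worth remembering that metadata can itself be stripped or forged, so a match proves nothing on its own.

```python
from datetime import datetime

def matches_claim(metadata, claimed_year, claimed_month):
    """Check whether a video's embedded creation timestamp is
    consistent with the date its title claims. Returns None when
    no timestamp is present: absence is inconclusive, not proof."""
    stamp = metadata.get("creation_time")
    if stamp is None:
        return None
    # Normalise the trailing 'Z' so fromisoformat accepts the string.
    created = datetime.fromisoformat(stamp.replace("Z", "+00:00"))
    return created.year == claimed_year and created.month == claimed_month

# Hypothetical metadata for a clip whose title claims "June 2014".
meta = {"creation_time": "2013-11-02T09:14:00Z"}
print(matches_claim(meta, 2014, 6))  # timestamp contradicts the title
```

A mismatch like this is exactly the kind of red flag that should prompt further digging before sharing.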

Similarly, snippets can be misleading; it’s always worth watching extended or complete segments, as the Lincoln Memorial video incident shows. And always resist the temptation to pile on when something goes viral: taking the time to investigate content properly can save you from serious embarrassment or even prosecution for defamation.

Sophisticated as deepfake technology has become, there are still some telltale signs that the vigilant viewer can use to identify footage that has been manipulated. These include infrequent or entirely missing eye-blinking; odd-looking lighting and shadows; discolouration, blurriness and distortion; and a failure to sync sound and video perfectly (“read my lips”).
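The eye-blinking sign can even be checked programmatically. One published approach computes an “eye aspect ratio” (EAR) from six eye landmarks per frame: the ratio collapses when the eye closes, so dips in the series mark blinks, and an unusually low blink count over a long clip is a warning sign. The sketch below assumes landmark coordinates have already been extracted by a face-landmark detector; the threshold and sample values are illustrative.

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks, ordered as in the common
    68-point face model: two vertical distances over the horizontal."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.21):
    """Count blinks as distinct falls of the EAR below the threshold."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    return blinks

# Toy per-frame EAR values: the two dips below 0.21 are two blinks.
print(count_blinks([0.30, 0.28, 0.10, 0.29, 0.31, 0.08, 0.09, 0.30]))
```

Humans blink roughly every few seconds, so a minutes-long clip showing only a handful of blinks deserves a closer look, though newer generators have learned to fake blinking too.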

Unite for truth 

It may be everybody’s responsibility to check content before we share it widely on social platforms, but it will take much more than individual effort if we are to stamp out the scourge of deepfakes.

At the moment, the odds are stacked firmly in the fakers’ favour. As digital forensics expert Hany Farid points out, for every person working to detect misleading content there are another 100 creating it.

Improved regulation of media platforms and other publishers will be key, with sufficient sanctions against content creators who have a record of creating and disseminating fake content. We are seeing regulators begin to feel their way towards a solution, for example with the DEEPFAKES Accountability Act. Problematic as this proposed legislation may be, it at least shows that lawmakers are aware of the problem and are committed to tackling it.

To counter the threat of deepfakes effectively, we need to see much better data sharing so that regulators and researchers can fully understand the nature of the challenge, build better solutions and craft truly effective regulations.

This battle won’t be won quickly or easily, but it’s one that we all need to fight. Everyone can do their bit by remaining vigilant to the threat of faked content. If we train our brains to think critically about everything we consume online, we can all help to minimise our involvement in sharing counterfeit content.

So the next time you see a video that shocks or surprises you, do a little background digging and watch out for the signs of fakery before you share it – and tell your friends and followers whenever you find content that’s as fake as the claim that the world is flat.


© 2015 - 2019 IT Security Guru - Website Managed by Calm Logic
