    The coming wave of deepfake propaganda

    By The Conversation, 13 October 2020

    An investigative journalist receives a video from an anonymous whistle-blower. It shows a candidate for president admitting to illegal activity. But is this video real? If so, it would be huge news – the scoop of a lifetime – and could completely turn around the upcoming elections. But the journalist runs the video through a specialised tool, which tells her that the video isn’t what it seems. In fact, it’s a “deepfake,” a video made using artificial intelligence with deep learning.

    Journalists all over the world could soon be using a tool like this. In a few years, everyone could be using one to root out fake content in their social media feeds.

    As researchers who have been studying deepfake detection and developing a tool for journalists, we see a future for these tools. They won’t solve all our problems, though, and they will be just one part of the arsenal in the broader fight against disinformation.

    The problem with deepfakes

    Most people know that you can’t believe everything you see. Over the last couple of decades, savvy news consumers have got used to seeing images manipulated with photo-editing software. Videos, though, are another story. Hollywood directors can spend millions of dollars on special effects to make up a realistic scene. But using deepfakes, amateurs with a few thousand dollars of computer equipment and a few weeks to spend could make something almost as true to life.

    Deepfakes make it possible to put people into movie scenes they were never in – think Tom Cruise playing Iron Man – which makes for entertaining videos. Unfortunately, it also makes it possible to create pornography without the consent of the people depicted. So far, those people, nearly all women, are the biggest victims when deepfake technology is misused.

    Deepfakes can also be used to create videos of political leaders saying things they never said. The Belgian Socialist Party released a low-quality, non-deepfake but still phony video of President Donald Trump insulting Belgium, which got enough of a reaction to show the potential risks of higher-quality deepfakes.

    Perhaps scariest of all, they can be used to create doubt about the content of real videos, by suggesting that they could be deepfakes.

    Given these risks, it would be extremely valuable to be able to detect deepfakes and label them clearly. This would ensure that fake videos do not fool the public, and that real videos can be received as authentic.

    Spotting fakes

    Deepfake detection as a field of research began a little over three years ago. Early work focused on detecting visible problems in the videos, such as deepfakes that didn’t blink. With time, however, the fakes have got better at mimicking real videos and have become harder for both people and detection tools to spot.

    There are two major categories of deepfake detection research. The first involves looking at the behaviour of people in the videos. Suppose you have a lot of video of someone famous, such as President Barack Obama. Artificial intelligence can use this video to learn his patterns, from his hand gestures to his pauses in speech. It can then watch a deepfake of him and notice where it does not match those patterns. This approach has the advantage of possibly working even if the video quality itself is essentially perfect.
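
    As a loose illustration of this behavioural idea (and only an illustration: the feature names, the one-class model and the numbers below are assumptions, not the systems described in the article), one could learn a person’s “normal” behavioural statistics from verified footage and flag a suspect video whose per-frame features fall outside that learned pattern.

```python
# Illustrative sketch only: the behavioural features and the one-class model
# are stand-ins, not the detection systems described in the article.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Per-frame behavioural measurements (e.g. head-pose angles, gesture cues)
# extracted from many verified, genuine videos of the target person.
genuine_features = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))

# Learn the person's "normal" behaviour from genuine footage only.
model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(genuine_features)

# Per-frame features from a suspect video; a deepfake may drift away from
# the patterns the model learned.
suspect_features = rng.normal(loc=0.8, scale=1.2, size=(300, 8))

# Fraction of frames flagged as inconsistent with the known behaviour.
outlier_fraction = np.mean(model.predict(suspect_features) == -1)
print(f"Frames inconsistent with learned behaviour: {outlier_fraction:.0%}")
```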

    Other researchers, including our team, have focused on differences that all deepfakes have compared to real videos. Deepfake videos are often created by merging individually generated frames. Taking that into account, our team’s methods extract the essential data from the faces in individual frames of a video and then track them through sets of concurrent frames. This allows us to detect inconsistencies in the flow of information from one frame to another. We use a similar approach for our fake audio detection system as well.
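
    A very rough sketch of that frame-consistency intuition (under assumptions: real systems use learned face descriptors, whereas the per-frame “features” here are synthetic stand-ins) is to track a feature vector per frame and measure how erratically it changes across consecutive frames.

```python
# Illustrative sketch only: the synthetic per-frame "features" stand in for
# face descriptors that a trained network would normally extract.
import numpy as np

rng = np.random.default_rng(1)

def temporal_inconsistency(frame_features: np.ndarray, window: int = 5) -> float:
    """Average change in face features over short runs of consecutive frames.

    Real footage tends to change smoothly from frame to frame; frames that
    were generated more independently can jump around between neighbours.
    """
    diffs = np.linalg.norm(np.diff(frame_features, axis=0), axis=1)
    kernel = np.ones(window) / window  # simple moving-average window
    return float(np.convolve(diffs, kernel, mode="valid").mean())

# A smooth, slowly drifting feature track stands in for real footage;
# adding independent per-frame noise stands in for stitched-together frames.
real_track = np.cumsum(rng.normal(scale=0.05, size=(300, 128)), axis=0)
fake_track = real_track + rng.normal(scale=0.5, size=(300, 128))

print("real:", round(temporal_inconsistency(real_track), 3))
print("fake:", round(temporal_inconsistency(fake_track), 3))
```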

    These subtle details are hard for people to see, but show how deepfakes are not quite perfect yet. Detectors like these can work for any person, not just a few world leaders. In the end, it may be that both types of deepfake detectors will be needed.

    Recent detection systems perform very well on videos specifically gathered for evaluating the tools. Unfortunately, even the best models do poorly on videos found online. Improving these tools to be more robust and useful is the key next step.

    Who should use deepfake detectors?

    Ideally, a deepfake verification tool should be available to everyone. However, this technology is in the early stages of development. Researchers need to improve the tools and protect them against hackers before releasing them broadly.

    At the same time, though, the tools to make deepfakes are available to anybody who wants to fool the public. Sitting on the sidelines is not an option. For our team, the right balance was to work with journalists, because they are the first line of defence against the spread of misinformation.

    Before publishing stories, journalists need to verify the information. They already have tried-and-true methods, like checking with sources and getting more than one person to verify key facts. So, by putting the tool into their hands, we give them more information, and we know that they will not rely on the technology alone, given that it can make mistakes.

    It is encouraging to see teams from Facebook and Microsoft investing in technology to understand and detect deepfakes. This field needs more research to keep up with the speed of advances in deepfake technology.

    Journalists and the social media platforms also need to figure out how best to warn people about deepfakes when they are detected. Research has shown that people remember the lie, but not the fact that it was a lie. Will the same be true for fake videos? Simply putting “deepfake” in the title might not be enough to counter some kinds of disinformation.

    Deepfakes are here to stay. Managing disinformation and protecting the public will be more challenging than ever as artificial intelligence gets more powerful. We are part of a growing research community that is taking on this threat, in which detection is just the first step.

    • Written by John Sohrawardi, doctoral student in computing and informational sciences, Rochester Institute of Technology, and Matthew Wright, professor of computing security, Rochester Institute of Technology
    • This article is republished from The Conversation under a Creative Commons licence

