    Scientists gather to plot AI doomsday scenarios

    By Agency Staff | 2 March 2017

    Artificial intelligence boosters predict a brave new world of flying cars and cancer cures. Detractors worry about a future where humans are enslaved to an evil race of robot overlords. Veteran AI scientist Eric Horvitz and Doomsday Clock guru Lawrence Krauss, seeking a middle ground, gathered a group of experts in the Arizona desert to discuss the worst that could possibly happen — and how to stop it.

    Their workshop took place last weekend at Arizona State University with funding from Tesla co-founder Elon Musk and Skype co-founder Jaan Tallinn. Officially dubbed “Envisioning and Addressing Adverse AI Outcomes”, it was a kind of AI doomsday games, organising some 40 scientists, cybersecurity experts and policy wonks into groups of attackers (the red team) and defenders (the blue team) to play out AI-gone-very-wrong scenarios, ranging from stock-market manipulation to global warfare.

    Horvitz is optimistic — a good thing because machine intelligence is his life’s work — but some other, more dystopian-minded backers of the project seemed to find his outlook too positive when plans for this event started about two years ago, said Krauss, a theoretical physicist who directs ASU’s Origins Project, the programme running the workshop. Yet Horvitz said that for these technologies to move forward successfully and to earn broad public confidence, all concerns must be fully aired and addressed.

    “There is huge potential for AI to transform so many aspects of our society in so many ways. At the same time, there are rough edges and potential downsides, like any technology,” said Horvitz, managing director of Microsoft’s research lab in Redmond, Washington. “To maximally gain from the upside we also have to think through possible outcomes in more detail than we have before and think about how we’d deal with them.”

    Participants were given “homework” to submit entries for worst-case scenarios. They had to be realistic — based on current technologies or those that appear possible — and five to 25 years in the future. The entrants with the “winning” nightmares were chosen to lead the panels, which featured about four experts on each of the two teams to discuss the attack and how to prevent it.

    Turns out many of these researchers can match science-fiction writers Arthur C Clarke and Philip K Dick for dystopian visions. In many cases, little imagination was required — scenarios like technology being used to sway elections or new cyberattacks using AI are being seen in the real world, or are at least technically possible. Horvitz cited research that shows how to alter the way a self-driving car sees traffic signs so that the vehicle misreads a “stop” sign as “yield”.
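
    The sign-misreading attack Horvitz mentions belongs to the family of adversarial perturbations: tiny, carefully chosen pixel changes that flip a classifier’s output. As a rough sketch of the idea in Python only, and not a reproduction of the research he cited, the snippet below applies the fast gradient sign method (FGSM) to a toy PyTorch classifier; TinySignNet, the class indices and the epsilon value are hypothetical stand-ins.

    # Illustrative sketch: FGSM adversarial perturbation against a toy
    # image classifier. The model, labels and epsilon are stand-ins, not
    # the traffic-sign study referenced above.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinySignNet(nn.Module):
        """Hypothetical stand-in for a trained traffic-sign classifier."""
        def __init__(self, num_classes: int = 43):
            super().__init__()
            self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
            self.fc = nn.Linear(8 * 32 * 32, num_classes)

        def forward(self, x):
            return self.fc(F.relu(self.conv(x)).flatten(1))

    model = TinySignNet().eval()
    image = torch.rand(1, 3, 32, 32)   # stand-in for a photo of a stop sign
    true_label = torch.tensor([0])     # assume class 0 means "stop"

    # FGSM: nudge every pixel a small step in the direction that increases
    # the classifier's loss, so a visually unchanged image is read differently.
    image.requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()

    epsilon = 0.03                     # perturbation budget, imperceptible to people
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

    # With a real trained classifier, a budget this small is often enough to
    # change the predicted class; the untrained toy model here only shows the mechanics.
    print("original prediction:   ", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())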

    The possibility of intelligent, automated cyberattacks is the one that most worries John Launchbury, who directs one of the offices at the US’s Defence Advanced Research Projects Agency, and Kathleen Fisher, chairwoman of the computer science department at Tufts University, who led that session. What happens if someone constructs a cyber weapon designed to hide itself and evade all attempts to dismantle it? Now imagine it spreads beyond its intended target to the broader Internet. Think Stuxnet, the computer virus created to attack the Iranian nuclear programme that got out in the wild, but stealthier and more autonomous.

    “We’re talking about malware on steroids that is AI-enabled,” said Fisher, who is an expert in programming languages. Fisher presented her scenario under a slide bearing the words “What could possibly go wrong?”, which could have also served as a tagline for the whole event.

    How did the defending blue team fare on that one? Not well, said Launchbury. They argued that advanced AI needed for an attack would require a lot of computing power and communication, so it would be easier to detect. But the red team felt that it would be easy to hide behind innocuous activities, Fisher said. For example, attackers could get innocent users to play an addictive videogame to cover up their work.

    To prevent a stock-market manipulation scenario dreamed up by University of Michigan computer science professor Michael Wellman, blue team members suggested treating attackers like malware by trying to recognise them via a database of known types of hacks. Wellman, who has been in AI for more than 30 years and calls himself an old-timer on the subject, said that approach could be useful in finance.
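
    One concrete way to read the blue team’s suggestion is as signature matching over trading activity, in the same spirit as an antivirus signature database. The Python sketch below is a speculative illustration only; the Order record, the spoofing heuristic and its thresholds are invented for the example and do not come from the workshop.

    # Speculative illustration of signature-based detection of market
    # manipulation, analogous to an antivirus database of known attacks.
    # The Order format, the heuristic and the thresholds are invented.
    from dataclasses import dataclass

    @dataclass
    class Order:
        trader: str
        side: str        # "buy" or "sell"
        price: float
        cancelled: bool  # True if the order was pulled before execution

    def looks_like_spoofing(window: list[Order]) -> bool:
        """Crude signature: one trader floods orders and cancels nearly all of them."""
        by_trader: dict[str, list[Order]] = {}
        for order in window:
            by_trader.setdefault(order.trader, []).append(order)
        for orders in by_trader.values():
            cancelled = sum(o.cancelled for o in orders)
            if len(orders) >= 10 and cancelled / len(orders) > 0.9:
                return True
        return False

    # The "database of known types of hacks": named signatures tested against
    # a sliding window of recent orders.
    SIGNATURES = {
        "spoofing_burst": looks_like_spoofing,
    }

    def scan(window: list[Order]) -> list[str]:
        """Return the names of every known signature the window matches."""
        return [name for name, check in SIGNATURES.items() if check(window)]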

    Beyond actual solutions, organisers hope the doomsday workshop started conversations on what needs to happen, raised awareness and combined ideas from different disciplines. The Origins Project plans to make public materials from the closed-door sessions and may design further workshops around a specific scenario or two, Krauss said.

    Darpa’s Launchbury hopes the presence of policy figures among the participants will foster concrete steps, like agreements on rules of engagement for cyberwar, automated weapons and robot troops.

    Krauss, chairman of the board of sponsors of the group behind the Doomsday Clock, a symbolic measure of how close we are to global catastrophe, said some of what he saw at the workshop “informed” his thinking on whether the clock ought to shift even closer to midnight. But don’t go stocking up on canned food and moving into a bunker in the wilderness just yet.

    “Some things we think of as cataclysmic may turn out to be just fine,” he said.  — (c) 2017 Bloomberg LP
