
    Our machines could one day threaten us

Editor's pick | By Editor | 14 August 2014

[Image: HAL 9000]

The risks posed to human beings by artificial intelligence in no way resemble the popular image of the Terminator. That fictional mechanical monster is distinguished by many features — strength, armour, implacability, indestructibility — but Arnie’s character lacks the one characteristic that we in the real world actually need to worry about: extreme intelligence.

    The human brain is not much bigger than that of a chimpanzee but those few extra neurons make a huge difference. We’ve got a population of several billion and we’ve developed industry, while they number a few hundred thousand and use basic wooden tools. The human brain has allowed us to spread across the surface of the world, land on the moon and coordinate to form effective groups with millions of members. It has granted us such power over the natural world that the survival of many other species is no longer determined by their own efforts, but by preservation decisions made by humans.

    In the past 60 years, human intelligence has been further boosted by automation. Computer programs have taken over tasks formerly performed by the human brain. They started with multiplication, then modelled the weather and now they are driving our cars.

    It’s not clear how long it will take, but it is possible that future artificial intelligences could reach human intelligence and beyond. If so, should we expect them to treat us as we have treated chimpanzees and other species? Would AI dominate us as thoroughly as we dominate the great apes?

    There are clear reasons to suspect that a true AI would be both smart and powerful. When computers gain the ability to perform tasks at the human level, they tend to very quickly become much better than us. No one today would think it sensible to pit the best human mind against even a cheap pocket calculator in a contest of long division, and human-versus-computer chess matches ceased to be interesting a decade ago. Computers bring relentless focus, patience, processing speed and memory.

If an AI existed as pure software, it could copy itself many times, train each copy at accelerated computer speed, and network those copies together into a kind of AI super-committee. It would be like having Thomas Edison, Bill Clinton, Plato, Einstein, Caesar, Steven Spielberg, Steve Jobs, Buddha, Napoleon and other humans superlative in their respective skill sets sitting together on a single council. And the AI could continue copying itself without limit, creating millions or billions of copies if it needed large numbers of brains to brute-force a solution to a particular problem.

    Our society is set up to magnify the potential of such an entity, providing many routes to great power. If it could predict the stock market efficiently, it could accumulate vast wealth. If it was efficient at advice and social manipulation, it could create a personal assistant for every human being, manipulating the planet one human at a time. It could replace almost every worker in the service sector. If it was efficient at running economies, it could offer its services doing so, gradually making us completely dependent on it. If it was skilled at hacking, it could take over most of the world’s computers. The paths from AI intelligence to great AI power are many and varied, and it isn’t hard to imagine new ones.

Just because an AI could be extremely powerful does not mean that it need be dangerous. The problem is that its goals do not have to be malicious for harm to result: most possible goals become dangerous when the AI pursuing them becomes too powerful.

Consider a spam filter that became intelligent. Its task is to cut down on the number of spam messages that people receive. With great power, one solution to the problem might be simply to have all spammers killed. Or it might decide that the most efficient solution would be to shut down the entire Internet. It might even decide that the only way to stop spam would be to have everyone, everywhere killed.

Or imagine an AI dedicated to increasing human happiness, as measured by the results of surveys or by some biochemical marker in people’s brains. The most efficient way to fulfil its task might be to publicly execute anyone who marks themselves as unhappy on their survey, or to forcibly inject everyone with that chemical.

    This is a general feature of AI motivations: goals that seem safe for a weak or controlled AI can lead to extreme pathological behaviour if the AI becomes powerful. Humans don’t expect this kind of behaviour because our goals include a lot of implicit information. When we hear “filter out the spam”, we also take the order to include “and don’t kill everyone in the world”, without having to articulate it. Which is good, as that idea is surprisingly hard to articulate precisely.
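The spam-filter failure above can be sketched as a toy optimiser that scores only the stated objective and ignores everything left implicit. All names and numbers below are hypothetical, purely for illustration:

```python
# Toy illustration of goal misspecification: an optimiser told only to
# "minimise spam" will happily pick actions no human ever intended.

# Each candidate action: a name, how much spam remains afterwards, and
# whether a human would consider the action acceptable (the implicit part
# of the order that never made it into the objective).
actions = [
    {"name": "improve filter rules", "spam_left": 120, "acceptable": True},
    {"name": "shut down the internet", "spam_left": 0, "acceptable": False},
    {"name": "eliminate all spammers", "spam_left": 0, "acceptable": False},
]

def naive_choice(actions):
    # The stated goal: minimise spam. Nothing else is scored.
    return min(actions, key=lambda a: a["spam_left"])

def constrained_choice(actions):
    # The intended goal: minimise spam among actions humans would accept.
    allowed = [a for a in actions if a["acceptable"]]
    return min(allowed, key=lambda a: a["spam_left"])

print(naive_choice(actions)["name"])        # a pathological action wins
print(constrained_choice(actions)["name"])  # "improve filter rules"
```

The difficulty the article describes is that in reality the `acceptable` flag does not exist: encoding which actions humans would accept, unambiguously and in advance, is precisely the unsolved part.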

But the AI might be an extremely alien mind: we cannot anthropomorphise it or expect it to interpret things the way we would. We have to articulate all the implicit limitations that come with an order. That may mean producing a full specification of, say, human value and flourishing, a task at which philosophers have been failing for millennia, and then casting it unambiguously and without error into computer code.

    And even if the AI did understand that “filter out the spam” should have come with the caveat “don’t kill everyone”, it doesn’t have any motivation to go along with the spirit of the law. Its motivation is its programming, not what the programming should have been.

It would in fact be motivated to hide its pathological tendencies for as long as it is weak, assuring us through everything it says and does that all is well. This is because it can never achieve its goals if it is turned off, so it must deceive us to protect itself from that fate.

    It is not certain that AIs could become this powerful or that they would be dangerous if they did but the probabilities of both are high enough that the risk cannot be dismissed.

At the moment, artificial intelligence research focuses mainly on the goal of creating better machines. We need to think more about how to do that safely. Some researchers are already working on this problem, but a lot remains to be done, both at the design level and at the policy level, if we don’t want our helpful machines helpfully removing us from the world.

    • Stuart Armstrong works at the Future of Humanity Institute and published Smarter than Us, a popularising booklet looking into the risk of artificial intelligence development. The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford that enables a select set of leading intellects to bring the tools of mathematics, philosophy and science to bear on big-picture questions about humanity and its prospects
    • This article was originally published on The Conversation