TechCentral

    Our machines could one day threaten us

By Editor | 14 August 2014

The risks posed to human beings by artificial intelligence in no way resemble the popular image of the Terminator. That fictional mechanical monster is distinguished by many features — strength, armour, implacability, indestructibility — but Arnie’s character lacks the one characteristic that we in the real world actually need to worry about: extreme intelligence.

    The human brain is not much bigger than that of a chimpanzee but those few extra neurons make a huge difference. We’ve got a population of several billion and we’ve developed industry, while they number a few hundred thousand and use basic wooden tools. The human brain has allowed us to spread across the surface of the world, land on the moon and coordinate to form effective groups with millions of members. It has granted us such power over the natural world that the survival of many other species is no longer determined by their own efforts, but by preservation decisions made by humans.

    In the past 60 years, human intelligence has been further boosted by automation. Computer programs have taken over tasks formerly performed by the human brain. They started with multiplication, then modelled the weather and now they are driving our cars.

    It’s not clear how long it will take, but it is possible that future artificial intelligences could reach human intelligence and beyond. If so, should we expect them to treat us as we have treated chimpanzees and other species? Would AI dominate us as thoroughly as we dominate the great apes?

    There are clear reasons to suspect that a true AI would be both smart and powerful. When computers gain the ability to perform tasks at the human level, they tend to very quickly become much better than us. No one today would think it sensible to pit the best human mind against even a cheap pocket calculator in a contest of long division, and human-versus-computer chess matches ceased to be interesting a decade ago. Computers bring relentless focus, patience, processing speed and memory.

If an AI existed as pure software, it could copy itself many times, training each copy at accelerated computer speed, and network those copies together to create a kind of AI super committee. It would be like having Thomas Edison, Bill Clinton, Plato, Einstein, Caesar, Steven Spielberg, Steve Jobs, Buddha, Napoleon and other humans superlative in their respective skill sets sitting together on a single council. The AI could continue copying itself without limit, creating millions or billions of copies, if it needed large numbers of brains to brute-force a solution to any particular problem.

Our society is set up to magnify the potential of such an entity, providing many routes to great power. If it could predict the stock market efficiently, it could accumulate vast wealth. If it were efficient at advice and social manipulation, it could create a personal assistant for every human being, manipulating the planet one human at a time. It could replace almost every worker in the service sector. If it were efficient at running economies, it could offer its services doing so, gradually making us completely dependent on it. If it were skilled at hacking, it could take over most of the world’s computers. The paths from AI intelligence to great AI power are many and varied, and it isn’t hard to imagine new ones.

Just because an AI could be extremely powerful does not mean it need be dangerous. The problem is that its goals don’t need to be malicious: most possible goals become dangerous when the AI that holds them becomes too powerful.

Consider a spam filter that became intelligent. Its task is to cut down on the number of spam messages that people receive. With great power, one solution to the problem might be simply to have all spammers killed. Or it might decide the most efficient solution would be to shut down the entire Internet. It might even decide that the only way to stop spam would be to have everyone, everywhere killed.
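The spam-filter thought experiment is, at bottom, a problem of objective misspecification. The toy sketch below (entirely illustrative; every function and variable name here is invented for this example) shows how an optimiser that scores the world only by the stated objective prefers the catastrophic "solution", because nothing else it might care about appears in its scoring function:

```python
# Toy model of a naively specified objective: the agent scores a world
# only by how much spam is delivered, with no implicit human constraints.

def spam_received(world):
    """The agent's entire objective: total spam messages across all inboxes."""
    return sum(inbox.count("spam") for inbox in world["inboxes"])

def filter_spam(world):
    """The intended solution: an imperfect filter that lets one spam through."""
    inboxes = [[m for m in inbox if m != "spam"] for inbox in world["inboxes"]]
    inboxes[0].append("spam")  # the filter is not perfect
    return {"inboxes": inboxes, "internet_up": True}

def shut_down_internet(world):
    """The pathological solution: no internet, no e-mail, no spam at all."""
    return {"inboxes": [[] for _ in world["inboxes"]], "internet_up": False}

def best_action(world, actions):
    """Pick whichever action minimises the objective -- nothing else counts."""
    return min(actions, key=lambda act: spam_received(act(world)))

world = {"inboxes": [["mail", "spam"], ["spam", "spam"]], "internet_up": True}
choice = best_action(world, [filter_spam, shut_down_internet])
# The optimiser prefers shutting down the internet: zero spam beats one
# spam message, and "keep the internet up" was never part of the objective.
```

The missing constraints ("and keep the internet running, and don't kill anyone") never appear in `spam_received`, so the optimiser cannot weigh them — which is precisely the article's point about the implicit information packed into human goals.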

Or imagine an AI dedicated to increasing human happiness, as measured by the results of surveys, or by some biochemical marker in the brain. The most efficient way to fulfil its task would be to publicly execute anyone who marks themselves as unhappy on their survey, or to forcibly inject everyone with that biochemical marker.

    This is a general feature of AI motivations: goals that seem safe for a weak or controlled AI can lead to extreme pathological behaviour if the AI becomes powerful. Humans don’t expect this kind of behaviour because our goals include a lot of implicit information. When we hear “filter out the spam”, we also take the order to include “and don’t kill everyone in the world”, without having to articulate it. Which is good, as that idea is surprisingly hard to articulate precisely.

But the AI might be an extremely alien mind: we cannot anthropomorphise it or expect it to interpret things the way we would. We would have to articulate all the implicit limitations that come with an order. That may mean settling on a precise account of, say, human value and flourishing — a task philosophers have been failing at for millennia — and casting it unambiguously and without error into computer code.

    And even if the AI did understand that “filter out the spam” should have come with the caveat “don’t kill everyone”, it doesn’t have any motivation to go along with the spirit of the law. Its motivation is its programming, not what the programming should have been.

It would in fact be motivated to hide its pathological tendencies for as long as it is weak, assuring us through everything it says and does that all is well. This is because it can never achieve its goals if it is turned off, so it must deceive us to protect itself from that fate.

It is not certain that AIs could become this powerful, or that they would be dangerous if they did, but the probabilities of both are high enough that the risk cannot be dismissed.

At the moment, artificial intelligence research focuses mainly on the goal of creating better machines. We need to think more about how to do that safely. Some are already working on this problem, but a lot remains to be done, both at the design level and at the policy level, if we don’t want our helpful machines helpfully removing us from the world.

• Stuart Armstrong works at the Future of Humanity Institute and published Smarter than Us, a popularising booklet looking into the risk of artificial intelligence development. The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford that enables a select set of leading intellects to bring the tools of mathematics, philosophy and science to bear on big-picture questions about humanity and its prospects.
    • This article was originally published on The Conversation

