    Why artificial intelligence is no game changer

    In-depth | By Leonid Bershidsky | 4 December 2017

    Not much time passes these days between so-called major advancements in artificial intelligence. Yet researchers are not much closer than they were decades ago to the big goal: actually replicating human intelligence. That’s the most surprising revelation by a team of eminent scholars who just released the first in what is meant to be a series of annual reports on the state of AI.

    The report is a great opportunity to finally recognise that the methods we now know as AI and deep learning do not qualify as “intelligent”. They rely on the “brute force” of computers and are limited by the quantity and quality of the available training data. Many experts agree.

    The steering committee of the AI Index, published in November 2017, includes Stanford’s Yoav Shoham and MIT’s Erik Brynjolfsson, an eloquent writer who did much to promote the modern-day orthodoxy that machines will soon displace people in many professions. The team behind the effort tracked activity around AI in recent years and found thousands of published papers (18 664 in 2016), hundreds of venture capital-backed companies (743 in July 2017) and tens of thousands of job postings. It’s a vibrant academic field and an equally dynamic market (the number of US start-ups in it has increased by a factor of 14 since 2000).

    All this concentrated effort cannot help but produce results. According to the AI Index, the best systems surpassed human performance in image detection in 2014 and are on their way to 100% results. Error rates in labelling images (“this is a dog with a tennis ball”) have fallen to less than 2.5% from 28.5% in 2010. Machines have matched humans at recognising speech in a telephone conversation and are approaching human performance at parsing the structure of sentences, finding answers to questions within a document and translating news stories from German into English. They have also learnt to beat humans at poker and Pac-Man. But, the authors of the index wrote:

    Tasks for AI systems are often framed in narrow contexts for the sake of making progress on a specific problem or application. While machines may exhibit stellar performance on a certain task, performance may degrade dramatically if the task is modified even slightly. For example, a human who can read Chinese characters would likely understand Chinese speech, know something about Chinese culture and even make good recommendations at Chinese restaurants. In contrast, very different AI systems would be needed for each of these tasks.

    The AI systems are such one-trick ponies because they’re designed to be trained on specific, diverse, huge data sets. It could be argued that they still exist within philosopher John Searle’s “Chinese Room”. In that thought experiment, Searle, who doesn’t speak Chinese, is alone in a room with a set of instructions, in English, on correlating sets of Chinese characters with other sets of Chinese characters. Chinese speakers slide notes in Chinese under the door, and Searle pushes his own notes back, following the instructions. The Chinese speakers can be fooled into thinking his replies are intelligent, but that’s not really the case. Searle devised the “Chinese Room” argument — to which there have been dozens of replies and attempted rebuttals — in 1980. But modern AI still works in a way that fits his description.
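
    A toy sketch makes plain how mechanical the room is. The dictionary below stands in for Searle’s rule book; the phrase pairs are invented purely for illustration:

        # Toy model of Searle's "Chinese Room": replies come from a fixed
        # rule book that pairs incoming notes with outgoing notes. Nothing
        # here understands Chinese; it only matches symbols to symbols.
        RULE_BOOK = {
            "你好": "你好！",           # "hello" -> "hello!"
            "你会说中文吗？": "会。",    # "do you speak Chinese?" -> "I do."
        }

        def room_reply(note: str) -> str:
            # Follow the instructions; fall back to a stock phrase
            # ("please say that again") for notes the book doesn't cover.
            return RULE_BOOK.get(note, "请再说一遍。")

        print(room_reply("你会说中文吗？"))  # prints 会。 Fluent, yet empty.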

    Machine translation

    Machine translation is one example. Google Translate, which has drastically improved since it started using neural networks, trains the networks on billions of lines of parallel text in different languages, translated by humans. Where lots of these lines exist, Google Translate does okay — about 80% as well as an expert human. Where the data is lacking, it produces hilarious results. I like putting in Russian text and telling Google Translate it’s Hmong. The results, in English or Russian, will often be surprising — like the pronouncements found inside fortune cookies.

    I doubt this is accidental. There are probably not many legitimate calls for translations from Hmong, so idle tricksters must have helped train Google’s translation machine to produce various kinds of exquisite nonsense.
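
    Google’s training pipeline isn’t public, but the parallel-data recipe is easy to try with an open-source stand-in. Here is a minimal sketch, assuming the Hugging Face transformers and sentencepiece packages and using a Marian model trained on parallel German-English text (not Google’s system):

        # Neural machine translation with a model trained on parallel
        # German-English text. An open-source stand-in for the approach,
        # not Google Translate itself.
        from transformers import MarianMTModel, MarianTokenizer

        model_name = "Helsinki-NLP/opus-mt-de-en"  # trained on parallel corpora
        tokenizer = MarianTokenizer.from_pretrained(model_name)
        model = MarianMTModel.from_pretrained(model_name)

        batch = tokenizer(["Die Wirtschaft wächst langsamer als erwartet."],
                          return_tensors="pt", padding=True)
        output = model.generate(**batch)
        print(tokenizer.batch_decode(output, skip_special_tokens=True))
        # Output quality tracks the training data: plentiful for
        # German-English, scarce for a pair like Hmong-English.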

    Researchers are trying to overcome the data insufficiency problem. Two recently published papers show how machine translation can work from monolingual data sets alone, using the statistical likelihood of certain words being grouped together. The quality is not as good as with bilingual training data, but it’s still not complete nonsense and is workable in a pinch. These are, however, mere crutches that don’t change the general brute-force approach.
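
    The papers’ methods are involved, but the basic ingredient, estimating from a single-language corpus how likely words are to appear together, is simple to sketch. The corpus below is made up for illustration; the actual systems go much further, aligning word-embedding spaces across languages:

        # Bigram statistics from a monolingual corpus: no translations,
        # only counts of which words follow which.
        from collections import Counter
        from itertools import pairwise  # Python 3.10+

        corpus = ("the cat sat on the mat . the cat ate . "
                  "the dog sat on the rug .").split()

        bigrams = Counter(pairwise(corpus))
        unigrams = Counter(corpus)

        def likelihood(w1: str, w2: str) -> float:
            # P(w2 | w1) estimated from the counts.
            return bigrams[(w1, w2)] / unigrams[w1]

        print(likelihood("the", "cat"))  # 0.4: a plausible grouping
        print(likelihood("cat", "rug"))  # 0.0: never seen together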

    Solving complex tasks requires ever more power and ever more data. A computer beat humans at Othello the year Searle wrote about the Chinese Room, and at poker this year — but that’s a quantitative leap rather than a qualitative one.

    This kind of “artificial intelligence” continues to be a promising line of both research and business while there are growing quantities of “big data” to parse. Kai-Fu Lee of Chinese investment firm Sinovation Ventures, one of the experts who contributed essays to AI Index 2017, wrote that China was competitive against the US in artificial intelligence because it generates oodles of data:

    In China, people use their mobile phones to pay for goods 50 times more often than Americans. Food delivery volume in China is 10 times that of the US. It took bike-sharing company Mobike 10 months to go from nothing to 20m orders (or rides) per day. There are over 20m bicycle rides transmitting their GPS and other sensor information up to the server, creating 20TB of data every day. Similarly, China’s ride-hailing operator Didi is reported to connect its data with traffic control in some pilot cities. All of these Internet-connected things will yield data that helps make existing products and applications more efficient and enable new applications we never thought of.
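
    Taken at face value, those Mobike numbers imply roughly a megabyte of sensor data per ride, as a quick back-of-the-envelope check shows:

        # 20TB of data a day spread across 20-million rides a day.
        rides_per_day = 20_000_000
        bytes_per_day = 20 * 10**12  # 20TB
        print(bytes_per_day / rides_per_day / 10**6, "MB per ride")  # 1.0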

    The data dependence, however, isn’t great for AI’s future development. A backlash against the limitless data collection is gathering strength in the West; nation states are putting up barriers to data sharing; the weaponisation of data sets to produce intentionally flawed results and flawed responses to them is not far off. And it’ll be far harder to detect than, for example, the weaponisation of social networks by Russian information warriors has been.

    Meanwhile, the AI Index estimates that modern machines’ capacity for common sense reasoning is far less than that of a five-year-old child. Hardly any progress is being made in that area, and it’s hard to quantify.

    An increasing capacity for data crunching can be both helpful and dangerous to humans. It isn’t, however, a game changer. And it’s up to us to keep this branch of computer science in its place by only giving it as much data as we’re comfortable handing over — and only using it for those applications in which it can’t produce dangerously wrong results if fed lots of garbage. The technology itself is not the kind that can push us away from the controls — entirely new approaches would be necessary to create that threat.  — (c) 2017 Bloomberg LP
