TechCentral
    Thank you, Google, for screwing up so badly

    The laughable screw-ups in the Gemini chatbot’s image generation offered a salutary glimpse of an Orwellian dystopia.
By Clive Crook, 13 March 2024

    Google’s investors are entitled to be furious about the stunningly incompetent roll-out of the company’s Gemini artificial intelligence system. For everybody else, including this grateful Google user and committed technology optimist, it was a blessing.

The laughable screw-ups in the Gemini chatbot’s image generation — racially diverse Nazi soldiers? — offered a salutary glimpse of an Orwellian dystopia. And in so doing, they also highlighted vital questions of opacity, trust, range of application and truth that deserve more attention as we contemplate where AI will lead.

    AI is a disruptive and potentially transformative innovation — and, like all such innovations, it’s capable of delivering enormous advances in human well-being. A decade or two of AI-enhanced economic growth is just what the world needs. Even so, the exuberance over actually existing AI is premature. The concept is so exciting and the intellectual accomplishment so impressive that one can easily get swept along. Innovators, actual and potential users, and regulators all need to reflect more carefully on what’s going on — and especially on what purposes AI can usefully serve.

    People make mistakes all the time. If AI makes fewer mistakes than humans, would that be good enough?

    Part of the difficulty in grappling with AI’s full implications is the huge effort that has gone into devising AI models that express themselves like humans, presumably for marketing reasons. “Yes, I can help you with that.” Thank you, but who is this “I”? The suggestion is that AI can be understood and dealt with much as one would understand and deal with a person, except that AI is infinitely smarter and more knowledgeable. For that reason, when it comes to making decisions, it claims a measure of authority over its dimwitted users. There’s a crucial difference between AI as a tool that humans use to improve their decisions — decisions for which they remain accountable — and AI as a decision maker in its own right.

    In due course, AI will likely be granted ever-wider decision-making power, not just over the information (text, video and so forth) it passes to human users but also over actions. Eventually, Tesla’s “full self-driving” will actually mean full self-driving. At that point, liability for bad driving decisions will lie with Tesla. Between advisory AI and autonomous-actor AI, it’s harder to say who or what should be held accountable when systems make consequential mistakes. The courts will doubtless take this up.

    ‘Hallucinate’

    Liability aside, as AI advances we’ll want to judge how good it is at making decisions. But that’s a problem, too. For reasons I don’t understand, AI models aren’t said to make mistakes: they “hallucinate”. But how do we know they’re hallucinating? We know for sure when they present findings so absurd that even low-information humans know to laugh. But when AI systems make stuff up, they won’t always be so stupid. Even their designers can’t explain all such errors, and spotting them might be beyond the powers of mere mortals. We could ask an AI system, but they hallucinate.

    Even if errors could be reliably identified and counted, the criteria for judging the performance of AI models are unclear. People make mistakes all the time. If AI makes fewer mistakes than humans, would that be good enough? For many purposes (including full self-driving), I’d be inclined to say yes, but the domain of questions put to AI must be suitably narrow. One of the questions I wouldn’t want AI to answer is, “If AI makes fewer mistakes than humans, would that be good enough?”

    Read: Google apologises for ‘woke’ AI tool

    The point is, judgments like this are not straightforwardly factual — a distinction that goes to the heart of the matter. Whether an opinion or action is justified often depends on values. These might be implicated by the action in itself (for instance, am I violating anybody’s rights?) or by its consequences (is this outcome more socially beneficial than the alternative?). AI handles these complications by implicitly attaching values to actions and/or consequences — but it must infer these either from the consensus, of sorts, embedded in the information it’s trained on or from the instructions issued by its users and/or designers. The trouble is, neither the consensus nor the instructions have any ethical authority. When AI offers an opinion, it’s still just an opinion.

    For this reason, the arrival of AI is unfortunately timed. The once-clear distinction between facts and values is under assault from all sides. Eminent journalists say they never really understood what “objective” meant. The “critical theorists” who dominate many college social studies programmes deal in “false consciousness”, “social construction” and truth as “lived experience” – all of which call the existence of facts into question and see values as instruments of oppression. Effective altruists take issue with values in a very different way – claiming, in effect, that consequences can be judged on a single dimension, which renders values other than “utility” null. Algorithmic ethicists, rejoice!

    Read: Google bars Gemini AI from talking about elections

    As these ideas seep into what AI claims to know, prodded further by designers promoting cultural realignment on race, gender and equity, expect the systems to present value judgments as truths (just as humans do) and deny you information that might lead you to moral error (just as humans do). As Andrew Sullivan points out, at the start Google promised that its search results were “unbiased and objective”; now its principal goal is to be “socially beneficial”. AI systems might reason, or be instructed, that in choosing between what’s true and what’s socially beneficial, they should pick the latter — and then lie to users about having done so. After all, AI is so smart, its “truth” must really be true.

In a helpfully memorable way, Gemini proved that it’s not. Thank you, Google, for screwing up so badly. — Clive Crook, © 2024 Bloomberg LP
