TechCentral


    Bots can be brutal

    AI chatbots are becoming more human-like, to the point that some people may struggle to tell if they're human or machine.
By The Conversation | 19 August 2023

Artificial intelligence (AI)-powered chatbots are becoming increasingly human-like by design, to the point that some among us may struggle to distinguish between human and machine.

    This week, Snapchat’s My AI chatbot glitched and posted a story of what looked like a wall and ceiling, before it stopped responding to users. Naturally, the internet began to question whether the ChatGPT-powered chatbot had gained sentience.

    A crash course in AI literacy could have quelled this confusion. But, beyond that, the incident reminds us that as AI chatbots grow closer to resembling humans, managing their uptake will only get more challenging – and more important.

    Since ChatGPT burst onto our screens late last year, many digital platforms have integrated AI into their services. Even as I draft this article on Microsoft Word, the software’s predictive AI capability is suggesting possible sentence completions.

    Known as generative AI, this relatively new type of AI is distinguished from its predecessors by its ability to generate new content that is precise, human-like and seemingly meaningful.

    Generative AI tools, including AI image generators and chatbots, are built on large language models (LLMs). These computational models analyse the associations between billions of words, sentences and paragraphs to predict what ought to come next in a given text. As OpenAI co-founder Ilya Sutskever puts it, an LLM is “just a really, really good next-word predictor”.
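Sutskever’s “next-word predictor” description can be illustrated with a deliberately tiny sketch. The following toy bigram model simply counts which word most often follows each word in a corpus; real LLMs instead learn these associations with neural networks over billions of parameters, so this is an analogy for the prediction task, not how they actually work.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words follow it in the corpus."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# A toy "corpus" -- an LLM would train on billions of such sequences
corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # the word that most often follows "the"
```

Scaled up from word-pair counts to deep networks trained on much of the internet, this same predict-the-next-word objective is what produces the fluent, human-like text described above.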

    Advanced LLMs are also fine-tuned with human feedback. This training, often delivered through countless hours of cheap human labour, is the reason AI chatbots can now have seemingly human-like conversations.

    Flagship

    OpenAI’s ChatGPT is still the flagship generative AI model. Its release marked a major leap from simpler “rules-based” chatbots, such as those used in online customer service.

    Human-like chatbots that talk to a user rather than at them have been linked with higher levels of engagement. One study found the personification of chatbots leads to increased engagement which, over time, may turn into psychological dependence. Another study involving stressed participants found a human-like chatbot was more likely to be perceived as competent, and therefore more likely to help reduce participants’ stress.

These chatbots have also been effective in fulfilling organisational objectives in various settings, including retail, education, the workplace and healthcare.

    Google is using generative AI to build a “personal life coach” that will supposedly help people with various personal and professional tasks, including providing life advice and answering intimate questions.

This is despite Google’s own AI safety experts warning that users could grow too dependent on AI and may experience “diminished health and wellbeing” and a “loss of agency” if they take life advice from it.

    In the recent Snapchat incident, the company put the whole thing down to a “temporary outage”. We may never know what actually happened; it could be yet another example of AI “hallucinating”, or the result of a cyberattack, or even just an operational error.

    Either way, the speed with which some users assumed the chatbot had achieved sentience suggests we are seeing an unprecedented anthropomorphism of AI. It’s compounded by a lack of transparency from developers, and a lack of basic understanding among the public.

    We shouldn’t underestimate how individuals may be misled by the apparent authenticity of human-like chatbots.

    Earlier this year, a Belgian man’s suicide was attributed to conversations he’d had with a chatbot about climate inaction and the planet’s future. In another example, a chatbot named Tessa was found to be offering harmful advice to people through an eating disorder helpline.

    Chatbots may be particularly harmful to the more vulnerable among us, and especially to those with psychological conditions.

    Bots can be brutal

    You may have heard of the “uncanny valley” effect. It refers to that uneasy feeling you get when you see a humanoid robot that almost looks human, but its slight imperfections give it away, and it ends up being creepy.

    It seems a similar experience is emerging in our interactions with human-like chatbots. A slight blip can raise the hairs on the back of the neck.

    One solution might be to lose the human edge and revert to chatbots that are straightforward, objective and factual. But this would come at the expense of engagement and innovation.

    Even the developers of advanced AI chatbots often can’t explain how they work. Yet in some ways (and as far as commercial entities are concerned) the benefits outweigh the risks.

    Generative AI has demonstrated its usefulness in big-ticket items such as productivity, healthcare, education and even social equity. It’s unlikely to go away. So how do we make it work for us?

    Since 2018, there has been a significant push for governments and organisations to address the risks of AI. But applying responsible standards and regulations to a technology that’s more “human-like” than any other comes with a host of challenges.

Currently, there is no legal requirement for Australian businesses to disclose the use of chatbots. In the US, California has introduced a “bot bill” that would require this, but legal experts have poked holes in it – and the bill has yet to be enforced at the time of writing.

Moreover, ChatGPT and similar chatbots are made public as “research previews”. This means they often come with multiple disclaimers about their prototype nature, and the onus for responsible use falls on the user.


The European Union’s AI Act, the world’s first comprehensive regulation on AI, has identified moderate regulation and education as the path forward, since excess regulation could stunt innovation. Like digital literacy, AI literacy should be mandated in schools, universities and organisations, and should also be made free and accessible to the public.

    • The author, Daswin de Silva, is deputy director of the Centre for Data Analytics and Cognition, La Trobe University
    • This article is republished from The Conversation under a Creative Commons licence

    © 2009 - 2025 NewsCentral Media
