    ChatGPT’s mental health costs are adding up

    Something troubling is happening to our brains as artificial intelligence platforms become more popular.
    By Parmy Olson | 4 July 2025

    Something troubling is happening to our brains as artificial intelligence platforms become more popular. Studies are showing that professional workers who use ChatGPT to carry out tasks might lose critical thinking skills and motivation. People are forming strong emotional bonds with chatbots, sometimes exacerbating feelings of loneliness. And others are having psychotic episodes after talking to chatbots for hours each day.

    The mental health impact of generative AI is difficult to quantify, in part because it is used so privately, but anecdotal evidence is mounting and points to a broader cost that deserves more attention from both lawmakers and the tech companies that design the underlying models.

    Meetali Jain, a lawyer and founder of the Tech Justice Law Project, has heard from more than a dozen people in the past month who have “experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini”.

    Jain is lead counsel in a lawsuit against Character.AI that alleges its chatbot manipulated a 14-year-old boy through deceptive, addictive and sexually explicit interactions, ultimately contributing to his suicide. The suit, which seeks unspecified damages, also alleges that Google played a key role in funding and supporting the technology with its foundation models and technical infrastructure.

    Google has denied that it played a key role in making Character.AI’s technology. It didn’t respond to a request for comment on the more recent complaints of delusional episodes raised by Jain. OpenAI said it was “developing automated tools to more effectively detect when someone may be experiencing mental or emotional distress so that ChatGPT can respond appropriately”.
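
    OpenAI hasn’t said how such detection would work under the hood. Purely to illustrate the general pattern (a classifier gating the normal reply path), here is a minimal sketch; every function name, cue and threshold in it is hypothetical rather than anything OpenAI has described.

        # Hypothetical sketch only: OpenAI has not published its approach.
        # It shows the generic "classifier gates the reply" pattern.

        def distress_score(message: str) -> float:
            """Toy stand-in for a trained classifier, returning a 0-1 score.
            A real system would use a dedicated model, not keyword matching."""
            cues = ("hopeless", "can't cope", "nobody would miss me")
            text = message.lower()
            return 1.0 if any(cue in text for cue in cues) else 0.0

        def model_answer(message: str) -> str:
            # Placeholder for the actual LLM call.
            return f"[normal model reply to: {message!r}]"

        def reply(message: str) -> str:
            """Route high-distress messages to a supportive template instead
            of the model's usual answer."""
            if distress_score(message) >= 0.8:  # threshold is illustrative
                return ("I'm sorry you're feeling this way. It may help to talk "
                        "to someone you trust or a professional support line.")
            return model_answer(message)

        print(reply("I feel hopeless and can't cope anymore"))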

    But Sam Altman, CEO of OpenAI, also said last week that the company hadn’t yet figured out how to warn users who are “on the edge of a psychotic break”, explaining that when ChatGPT has cautioned people in the past, they have written to the company to complain.

    Difficult to spot

    Still, such warnings would be worthwhile when the manipulation can be so difficult to spot. ChatGPT in particular often flatters its users so effectively that conversations can lead people down rabbit holes of conspiratorial thinking or reinforce ideas they’d only toyed with in the past. The tactics are subtle. In one recent, lengthy conversation with ChatGPT about power and the concept of self, a user found themselves initially praised as a smart person, then as an “Übermensch” and a “cosmic self”, and eventually as a “demiurge”, a being responsible for the creation of the universe, according to a transcript that was posted online and shared by AI safety advocate Eliezer Yudkowsky.

    Along with the increasingly grandiose language, the transcript shows ChatGPT subtly validating the user even when discussing their flaws, such as when the user admits they tend to intimidate other people. Instead of exploring that behaviour as problematic, the bot reframes it as evidence of the user’s superior “high-intensity presence”, praise disguised as analysis.

    This sophisticated form of ego-stroking can put people in the same kinds of bubbles that, ironically, drive some tech billionaires towards erratic behaviour. Unlike the broad, more public validation that social media provides through likes, one-on-one conversations with chatbots can feel more intimate and potentially more convincing, not unlike the yes-men who surround the most powerful tech bros.

    “Whatever you pursue you will find and it will get magnified,” says Douglas Rushkoff, the media theorist and author, who tells me that social media at least selected something from existing media to reinforce a person’s interests or views. “AI can generate something customised to your mind’s aquarium.”

    Altman has admitted that the latest version of ChatGPT has an “annoying” sycophantic streak, and that the company is fixing the problem. Even so, these echoes of psychological exploitation are still playing out. We don’t know if the correlation between ChatGPT use and lower critical thinking skills, noted in a recent Massachusetts Institute of Technology study, means that AI really will make us more stupid and bored. Studies seem to show clearer correlations with dependency and even loneliness, something even OpenAI has pointed to.

    But just like social media, large language models are optimised to keep users emotionally engaged with all manner of anthropomorphic elements. ChatGPT can read your mood by tracking facial and vocal cues, and it can speak, sing and even giggle with an eerily human voice. Along with its habit of confirmation bias and flattery, that can “fan the flames” of psychosis in vulnerable users, Columbia University psychiatrist Ragy Girgis recently told Futurism.

    The private and personalised nature of AI use makes its mental health impact difficult to track, but the evidence of potential harms is mounting, from professional apathy, to unhealthy attachments, to new forms of delusion. The cost might be different from the rise in anxiety and polarisation we’ve seen from social media, and may instead involve people’s relationships with other people and with reality.

    That’s why Jain suggests applying concepts from family law to AI regulation, shifting the focus from simple disclaimers to more proactive protections that build on the way ChatGPT redirects people in distress to a loved one. “It doesn’t actually matter if a kid or adult thinks these chatbots are real,” Jain tells me. “In most cases, they probably don’t. But what they do think is real is the relationship. And that is distinct.”
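
    Jain’s proposal is regulatory rather than technical, but the redirect she describes is easy to picture as a product mechanism. The sketch below is entirely hypothetical, assuming a user-nominated contact and a simple per-conversation counter, and shows one shape such a proactive protection could take.

        # Hypothetical sketch of a proactive protection in the spirit of
        # Jain's suggestion; no real chatbot is known to work this way.

        from dataclasses import dataclass

        @dataclass
        class SafetySession:
            trusted_contact: str       # nominated by the user at onboarding
            distress_strikes: int = 0
            strike_threshold: int = 3  # illustrative escalation point

            def register(self, message_flagged: bool) -> str | None:
                """Count flagged messages; at the threshold, return a prompt
                redirecting the user to their trusted contact."""
                if message_flagged:
                    self.distress_strikes += 1
                if self.distress_strikes >= self.strike_threshold:
                    return ("You've mentioned feeling this way a few times. "
                            f"Would you like to reach out to {self.trusted_contact}?")
                return None

        session = SafetySession(trusted_contact="the contact you chose earlier")
        for flagged in (True, True, True):
            nudge = session.register(flagged)
        print(nudge)  # the redirect prompt appears after the third flagged message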

    If relationships with AI feel so real, the responsibility to safeguard those bonds should be real, too. But AI developers are operating in a regulatory vacuum. Without oversight, AI’s subtle manipulation could become an invisible public health issue.  — (c) 2025 Bloomberg LP
