    The insanely powerful supercomputer Microsoft built for AI workloads

    When Microsoft invested $1-billion in OpenAI in 2019, it agreed to build a cutting-edge supercomputer for the AI research start-up.
    By Dina Bass | 13 March 2023

    When Microsoft invested US$1-billion in OpenAI in 2019, it agreed to build a massive, cutting-edge supercomputer for the artificial intelligence research start-up. The only problem: Microsoft didn’t have anything like what OpenAI needed and wasn’t totally sure it could build something that big in its Azure cloud service without it breaking.

    OpenAI was trying to train an increasingly large set of AI programs called models, which were ingesting greater volumes of data and learning more and more parameters, the variables the AI system has sussed out through training and retraining. That meant OpenAI needed access to powerful cloud computing services for long periods of time.
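
    To illustrate what “parameters” means here, the sketch below (a toy PyTorch example, not anything from OpenAI’s or Microsoft’s actual code) counts the learnable weights in a small model; frontier models do the same thing at a scale of hundreds of billions of values.

        # Toy sketch only: "parameters" are the learnable weights a model adjusts
        # during training. Unrelated to OpenAI's or Microsoft's real systems.
        import torch.nn as nn

        model = nn.Sequential(
            nn.Linear(1024, 4096),  # weight matrix plus bias: 1024*4096 + 4096 values
            nn.ReLU(),
            nn.Linear(4096, 1024),
        )

        num_params = sum(p.numel() for p in model.parameters())
        print(f"Learnable parameters: {num_params:,}")  # roughly 8.4 million here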

    To meet that challenge, Microsoft had to find ways to string together tens of thousands of Nvidia’s A100 graphics chips — the workhorse for training AI models — and change how it positions servers on racks to prevent power outages. Scott Guthrie, the Microsoft executive vice president who oversees cloud and AI, wouldn’t give a specific cost for the project, but said “it’s probably larger” than several hundred million dollars.

    “We built a system architecture that could operate and be reliable at a very large scale. That’s what resulted in ChatGPT being possible,” said Nidhi Chappell, Microsoft GM of Azure AI infrastructure. “That’s one model that came out of it. There’s going to be many, many others.”

    The technology allowed OpenAI to release ChatGPT, the viral chatbot that attracted more than a million users within days of going public in November and is now getting pulled into other companies’ business models. As generative AI tools such as ChatGPT gain interest from businesses and consumers, more pressure will be put on cloud services providers such as Microsoft, Amazon.com and Google to ensure their data centres can provide the enormous computing power needed.

    Now Microsoft uses that same set of resources it built for OpenAI to train and run its own large AI models, including the new Bing search bot introduced last month. It also sells the system to other customers. The software giant is already at work on the next generation of the AI supercomputer, part of an expanded deal with OpenAI in which Microsoft added $10-billion to its investment.

    Better for AI

    “We didn’t build them a custom thing — it started off as a custom thing, but we always built it in a way to generalise it so that anyone that wants to train a large language model can leverage the same improvements,” said Guthrie in an interview. “That’s really helped us become a better cloud for AI broadly.”

    Training a massive AI model requires a large pool of connected graphics processing units in one place, like the AI supercomputer Microsoft assembled. Once a model is in use, answering all the queries users pose — called inference — requires a slightly different setup. Microsoft also deploys graphics chips for inference, but those processors — hundreds of thousands of them — are geographically dispersed across the company’s more than 60 data centre regions. Now the company is adding the latest Nvidia graphics chip for AI workloads — the H100 — and the newest version of Nvidia’s InfiniBand networking technology to share data even faster, Microsoft said on Monday in a blog post.
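
    To make that distinction concrete, the sketch below (a generic PyTorch illustration under assumed conventions, not Microsoft’s actual stack) shows the setup step that differs: a training worker joins one tightly coupled group of GPUs over a fast interconnect, while an inference worker simply loads the finished model and answers queries on its own.

        # Illustrative sketch only: training workers join a single tightly coupled
        # process group; inference workers serve a finished model independently.
        # Not Microsoft's or OpenAI's actual code.
        import torch
        import torch.distributed as dist

        def start_training_worker():
            # Every GPU joins one process group; the NCCL backend can run over
            # fast interconnects such as InfiniBand.
            dist.init_process_group(backend="nccl")
            local_rank = dist.get_rank() % torch.cuda.device_count()
            torch.cuda.set_device(local_rank)
            # ... build the model, wrap it in DistributedDataParallel, train ...

        def start_inference_worker(model):
            # Inference needs no cluster-wide coordination: load the trained
            # model on a nearby GPU and answer queries as they arrive.
            model = model.eval().to("cuda")
            with torch.no_grad():
                pass  # serve requests here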

    The new Bing is still in preview with Microsoft gradually adding more users from a waitlist. Guthrie’s team holds a daily meeting with about two dozen employees they’ve dubbed the “pit crew”, after the group of mechanics that tune race cars in the middle of the race. The group’s job is to figure out how to bring greater amounts of computing capacity online quickly, as well as fix problems that crop up.

    “It’s very much a kind of a huddle, where it’s like, ‘Hey, anyone has a good idea, let’s put it on the table today, and let’s discuss it and let’s figure out, okay, can we shave a few minutes here? Can we shave a few hours? A few days?’” Guthrie said.

    A cloud service depends on thousands of different parts and items — the individual pieces of servers, pipes, concrete for the buildings, different metals and minerals — and a delay or short supply of any one component, no matter how tiny, can throw everything off. Recently, the pit crew had to deal with a shortage of cable trays — the basket-like contraptions that hold the cables coming off the machines. So they designed a new cable tray that Microsoft could manufacture itself or source elsewhere. They’ve also worked on ways to squeeze as many servers as possible into existing data centres around the world so they don’t have to wait for new buildings, Guthrie said.

    When OpenAI or Microsoft is training a large AI model, the work happens all at one time. It’s divided across all the GPUs and, at certain points, the units need to talk to each other to share the work they’ve done. For the AI supercomputer, Microsoft had to make sure the networking gear that handles the communication among all the chips could handle that load, and it had to develop software that gets the best use out of the GPUs and the networking equipment. The company has now come up with software that lets it train models with tens of trillions of parameters.
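
    The periodic “talking to each other” described above is, in most training frameworks, a collective operation such as an all-reduce over the gradients. The sketch below (generic PyTorch, offered only as an illustration and not the software Microsoft developed) shows the basic step each GPU performs so that every worker continues from the same averaged result.

        # Illustrative sketch of the synchronisation step in data-parallel training:
        # each worker computes gradients on its own slice of the data, then an
        # all-reduce averages them so all GPUs stay in lockstep.
        # Generic PyTorch, not the software described in the article.
        import torch.distributed as dist

        def synchronise_gradients(model):
            world_size = dist.get_world_size()
            for param in model.parameters():
                if param.grad is not None:
                    # Sum this gradient across every GPU, then divide for the mean.
                    dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
                    param.grad /= world_size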

    Because all the machines fire up at once, Microsoft had to think about where they were placed and where the power supplies were located. Otherwise you end up with the data centre version of what happens when you turn on a microwave, toaster and vacuum cleaner at the same time in the kitchen, Guthrie said.

    Read: Microsoft is infusing AI into business apps, including Teams

    The company also had to make sure it could cool off all of those machines and chips, and uses evaporation, outside air in cooler climates and high-tech swamp coolers in hot ones, said Alistair Speirs, director of Azure global infrastructure.

    Microsoft is going to keep working on customised server and chip designs and ways to optimise its supply chain in order to wring out any speed gains, efficiency and cost savings it can, Guthrie said.

    Read: Microsoft to bake Bing AI into Windows 11

    “The model that is wowing the world right now is built on the supercomputer we started building a couple of years ago. The new models will be built on the new supercomputer we’re training on now, which is much bigger and will enable even more sophistication,” he said.  — Reported with Max Chafkin and Ian King, (c) 2023 Bloomberg LP
