ChatGPT was well on its way to becoming a household name even before 2023 kicked off.
Just weeks after the 30 November 2022 launch of the generative artificial intelligence-powered chatbot, OpenAI, the company behind ChatGPT, was projected to rake in as much as US$1-billion in revenue in 2024, sources said at the time.
The so-called large language model’s ability to turn prompts into poetry, song and high-school essays enchanted 100 million users within two months, making it the fastest-growing consumer app ever; reaching that many users took Facebook four-and-a-half years and Twitter five.
Sometimes the answers were wrong, despite being delivered with conviction. This happened often enough that “hallucinate”, in the sense of AI producing false information, was chosen as Dictionary.com’s word of the year, a measure of how deep an impression the technology had made on society.
Such mistakes did little to dampen either the euphoria or the existential dread the new technology inspired. Investors, led by Microsoft’s multibillion-dollar bet on OpenAI, injected $27-billion into generative AI start-ups in 2023, according to PitchBook. The battle for AI supremacy, which had been stewing in the background between big tech firms for years, was suddenly in focus, with Google, Meta and Amazon.com all announcing new milestones.
By March, thousands of scientists and AI experts, including Elon Musk, had signed an open letter demanding a six-month pause on training more powerful systems, to allow time to study their impact on, and potential danger to, humanity. The move drew parallels to Oppenheimer, Christopher Nolan’s box-office hit about the titular atomic bomb maker’s warnings that the relentless pursuit of progress could lead to human extinction.
‘Existential risk’
“This is an existential risk,” said one of the “godfathers of AI”, Geoffrey Hinton, who quit Google in May. “It’s close enough that we ought to be working very hard right now, and putting a lot of resources into figuring out what we can do about it.”
Consultancy PwC estimated that AI-related economic impact could reach $15.7-trillion globally by 2030, nearly the gross domestic product of China. Powering this optimism is the fact that nearly every industry, from finance and legal to manufacturing and entertainment, has embraced AI as part of its strategy for the foreseeable future.
The winners and losers of the AI era are only just emerging. As in other eras, the beneficiaries will likely be drawn along socioeconomic lines. Civil rights advocates have raised concerns over potential bias in AI used in fields such as recruitment, while labour unions have warned of deep disruptions to employment as AI threatens to reduce or eliminate some jobs, including writing computer code and drafting entertainment content.
Chip maker Nvidia, whose graphics processors are the hottest commodity in the global AI race, has emerged as a big early winner, with its market capitalisation soaring into the trillion-dollar club alongside Apple and Google parent Alphabet.
In the final months of the year, another winner emerged unexpectedly from turmoil. In November, OpenAI’s board fired CEO Sam Altman for not being “consistently candid” with it, according to its terse statement.
In the absence of any explanation, the spectacle became a referendum on AI evangelism: on one side, Altman’s push to commercialise AI; on the other, sceptics and doomsayers who sought a slower, more careful approach.
The optimists — and Altman — won. The ousted CEO was restored just days later, thanks in no small part to OpenAI employees who threatened a mass exodus without him at the helm. In explaining what brought the company to the brink, Altman said people were fretting over the high stakes of developing AI that could surpass human intelligence. “I think that all exploded,” he said at a New York event in December.
Ahead of Altman’s ouster, some OpenAI researchers had warned of a new AI breakthrough involving a top-secret model called Q* (pronounced Q-Star).
One question provoked by the OpenAI saga: will the future of AI and its societal impact continue to be deliberated behind closed doors, by a privileged few in Silicon Valley?
Regulators led by the EU are determined to play a lead role in 2024 with a comprehensive plan to establish guardrails for the technology in the form of the EU AI Act. The details of the draft are due to be disclosed in the coming weeks.
These rules, and others being drafted in the UK and US, come as the world heads into the biggest election year in history, raising concern about AI-generated misinformation targeting voters. In 2023 alone, NewsGuard, a company that rates the reliability of news and information websites, tracked 614 “unreliable” AI-generated sites in 15 languages, from English to Arabic and Chinese.
Good or bad, expect AI, which has already been conscripted to make campaign calls in the US, to play an outsize role in many of the elections taking place this year. — (c) 2024 Reuters