Have you heard of ChatGPT yet? It’s a thrilling, vexing, ontologically mesmerising new technology created by the research group OpenAI. It can solve all your problems and answer all your questions. Or at least it will try to.
In essence, ChatGPT is a bot trained to generate human-like responses to user inputs. Through the wonders of machine learning, it’s acquired a remarkably expansive skillset. On request, it can produce basic software code, rudimentary financial analysis, amusing poems and songs, spot-on imitations, reflective essays on virtually any topic, natural-language summaries of technical papers or scientific concepts, chat-based customer service, informed predictions, personalised advice, and answers — for better or worse — to just about any question. Unusually for a chatbot, it can learn as it goes, and thus sustain engaging open-ended conversations.
It is, to borrow Arthur C Clarke’s old formulation, “indistinguishable from magic”.
Almost, anyway. One problem, which its creators concede, is that ChatGPT sometimes offers answers that are precise, authoritative and utterly wrong. A request for an obituary of Mussolini that prominently mentions skateboarding yields a disquisition on the dictator’s interest in the sport that happens to be entirely fictitious. Another soliciting advice for the US Federal Reserve returns an essay that cites ostensibly legitimate sources, but that doctors the data to suit the bot’s purposes. Stack Overflow, a forum for coders, has temporarily banned responses from ChatGPT because its answers “have a high rate of being incorrect”. Students looking for a homework assistant should proceed with care.
The bot also seems easily confused. Try posing a classic riddle: “In total, a bat and a ball cost $1.10. If the bat costs $1.00 more than the ball, how much does the ball cost?” Like many humans before it, ChatGPT responds with the intuitive but wrong answer of $0.10. (The correct solution is $0.05.) The internet’s hivemind has been joyfully cataloguing other examples of the bot’s faults and frailties.
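For the curious, the riddle reduces to one line of algebra: if the ball costs x, the bat costs x + 1.00, so x + (x + 1.00) = 1.10 and x = 0.05. A quick sketch in Python:

```python
# Bat-and-ball riddle: let the ball cost x; the bat then costs x + 1.00,
# and together they cost 1.10, so x + (x + 1.00) = 1.10.
total = 1.10       # combined price in dollars
difference = 1.00  # the bat costs this much more than the ball
ball = (total - difference) / 2
bat = ball + difference
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
```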
Such criticism feels misplaced. The fact is, ChatGPT is a remarkable achievement. Not long ago, a conversational bot of such sophistication seemed hopelessly out of reach. As the technology improves — and, crucially, grows more accurate — it seems likely to be a boon for coders, researchers, academics, policymakers, journalists and more. (Presuming that it doesn’t put them all out of work.) Its effect on the knowledge economy could be profound. In previous eras, wars might’ve been fought for access to such a seemingly enchanted tool — and with good reason.
Intriguingly, OpenAI plans to make the tool available as an application programming interface (or API), which will allow outside developers to integrate it into their websites or apps without needing to understand the underlying technology. That means companies could soon use ChatGPT to create virtual assistants, customer service bots or marketing tools. They could automate document review and other tedious tasks. Down the road, they might use it to generate new ideas and simplify decision making. In all likelihood, no one has thought of the best uses for it yet.
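To give a flavour of what such an integration might involve, the sketch below builds the kind of JSON payload a website’s backend could send to a text-generation API. The model name, field names and token limit here are illustrative assumptions for the sketch, not OpenAI’s published schema:

```python
import json

# Illustrative only: the model name and field names are assumptions made
# for this sketch, not OpenAI's actual API schema.
def build_chat_request(prompt: str, model: str = "example-chat-model") -> str:
    """Return the JSON body a backend might POST to a chat endpoint."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": 256,  # cap the length of the generated reply
    })

payload = build_chat_request("Draft a polite reply to this customer complaint.")
```

The point of an API is visible even in this toy: the developer only assembles a request and reads back text, with no knowledge of the model’s inner workings required.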
In that respect and others, ChatGPT exemplifies a widening array of AI tools that may soon transform entire industries, from manufacturing to healthcare to finance. Investment has been surging in the field. Breakthroughs seem to proliferate by the day. Many industry experts express unbounded enthusiasm. By one analysis, AI will likely contribute a staggering US$15.7-trillion to the global economy by 2030.
As yet, policymakers seem largely unaware of this revolution, let alone prepared for it. They should greet it in a spirit of optimism, while being attentive to its potential risks — to data security, privacy, employment and more. They might also ponder some rather more existential concerns. For better and worse, ChatGPT heralds a very different world in the making. — (c) 2022 Bloomberg LP