If recent proclamations from the leaders of Salesforce and Meta Platforms are to be believed, we’re living in the year of the “agent”: artificial intelligence services that act more independently than chatbots. A new model from China called Manus, for instance, went viral over the weekend after its creators claimed it could buy real estate and program videogames.
But building AI that needs little prompting raises uncomfortable questions about where humans will fit into, well, everything over the next decade, as more of our cognitive labour is automated away.
Reid Hoffman has spent the last few years banging the drum for AI’s positive effects. The co-founder of LinkedIn, Microsoft board member and partner at venture capital firm Greylock Partners has published a book arguing that as AI systems gain greater abilities, they will enhance human agency — hence its title: Superagency.
Partly, this is a response to unease about how AI looks likely to erode our critical thinking skills and “agency” itself, which the Merriam-Webster dictionary defines as the capacity to exert power. “For every major general-purpose technology, this worry comes up,” Hoffman says in an interview. “A lot of the discourse around the printing press was very similar to the discourse around AI, which is, it’ll reduce human cognitive capabilities.”
He argues that the skills we do lose — like the ability to recite Homeric poems — are a price worth paying for all the extra things AI will allow us to do. Chatbots, he says, can even talk people out of attempting suicide.
Hoffman has a point. On X, the platform previously known as Twitter, users have also been adding “@perplexity” to their replies to use the San Francisco-based chatbot as a kind of arbiter of truth, asking whether bold claims made by certain users — including the site’s owner — are true. The bot’s rebuttals and explanations aren’t just useful; they’re an example of how AI could reshape our thinking for the better. Inventing tools that do so, according to Hoffman, is a natural part of human evolution.
Consumer convenience
But “evolution” doesn’t always point to progress. More sedentary lifestyles, brought on by technological advances aimed at improving our lives, have led to rising obesity and other health problems. Whether AI benefits everyone will depend on who controls its application and how we approach it. China uses it to bolster the surveillance state with facial recognition, which obviously impinges on freedom. “It’s not to say that all AI will be great, but the question will be, if we can get Western democracy to shape it and build it in the right way, it’s like AI for you, right?” says Hoffman, who formerly served on the board of OpenAI.
It’s true that technology built in the West has been a boon for consumer convenience. But the addictive qualities of today’s online services have also come at a cost to our well-being. Silicon Valley tends to build products that remove friction from our lives, making it easier to order products online, message a friend, draft e-mails or reports, and conduct research.
The flipside to that is overreliance. Kester Brewin, an author and the associate director at the UK’s Institute for the Future of Work, tells me that some young professionals starting their first jobs have found themselves hooked on AI tools because they don’t trust their own ideas. He calls it “deference to the machine”.
That could become a problem as AI agents become more integrated into white-collar work — particularly if they’re designed to sound more like a knowledgeable human than a machine with limits. One view of the future, shared by Hoffman and others, is that professionals in fields like finance, law and media will increasingly oversee these agents, which will take on more of the work humans do today.
But studies have shown that human performance tends to degrade when people supervise machines, particularly in areas like aviation, healthcare and manufacturing. The reason: people get bored watching machines that rarely fail, and they trust automation too much to notice when mistakes do happen. That is why pilots have been shown to become less attentive when using the autopilot system, sometimes missing critical cues.
Pilots deal with this by taking a defensive approach. When landing a plane they’re taught to be ready to abort at any time in case the runway is blocked, keeping a hand on the throttle or preparing to press the transmit button to tell air traffic control they’re going around. “That due diligence process in checking is critical,” says Luis Prato, an insurance executive who’s also flown planes for more than 20 years. “I transfer and routinely use this concept in the business of insurance because it’s all about the management of risk.”
Perhaps this defensive approach should be transferred to AI. Technology companies should program their chatbots to disclose their limitations, admitting ignorance instead of hallucinating answers — as they’re known to do.
And instead of “deferring” to machines, we should presume limitations on their part. Hoffman says human agency is also determined by a person’s mindset, and on that we both agree. The most beneficial relationship with AI will come from maintaining a healthy scepticism about what it can do. AI promises to augment our capabilities, but, as with any powerful technology, its effect on our autonomy will depend as much on the tech itself as on how we choose to engage with it. — (c) 2025 Bloomberg LP