We journalists are by nature a pretty paranoid lot. Permanently worried that somebody somewhere — governments, lawyers, our colleagues, the IT department — is about to do something terrible to us, or our copy.
So far the 21st century has only fed that paranoia. Back in 2006, one of my first covers as editor of The Economist was entitled “Who Killed the Newspaper?” At the time the internet was wrecking the cosy business model of most big city papers that relied on their monopoly of classified advertising.
Looking back, though, it was less a case of assassination than suicide. Far too many quality media brands fell for the tech rhetoric that “legacy media” was dead and that content should be free. Soon they were stuck in a vicious circle of chasing clicks, cutting costs and gradually handing over their business to the tech giants.
But eventually sense prevailed, people started to charge for journalism and legacy media began to recover. The New York Times, which had only 500 000 digital subscribers when Mark Thompson arrived in 2012 and focused the Grey Lady on selling subscriptions, now has more than 10 million paying customers. The “content is free” sirens who lured so many great names onto the rocks have shut up; the new challengers like The Information, Puck and (despite its name) The Free Press make sure people pay sooner or later.
And yet, just as the quality press has come to terms with the internet and social media, along comes another even bigger change: artificial intelligence.
AI promises to get under the bonnet of our industry — to change the way we write and edit stories. It will challenge us, just like it is challenging other knowledge workers like lawyers, scriptwriters and accountants.
Clues
How exactly will this revolution unfold? Before I make my predictions, a little personal humility is in order. When I became editor of The Economist, I had no idea that a company called Twitter had been founded 10 days before; yet by the time I came to Bloomberg nine years later, Twitter was in effect the world’s biggest newspaper. So beware any editor (including this one) peddling certainties.
But I will submit that our newsroom at Bloomberg is quite a good laboratory to look for clues as to how this revolution might progress. Partly because we use more technology, including early versions of AI, than anywhere else: of the 5 000 stories we produce every day, more than a third involve some form of automation. And partly because our audience is close to the demanding news consumer of the future. Our readers will trade millions of dollars on the basis of what we write, so accuracy and lack of bias are key for them — but so is time. Our readers, viewers and listeners hate it when we waste their time, and, as we shall see, saving time is a key part of what AI offers.
Here are two examples of what AI can already do.
The first is a report we published which showed how oil is being smuggled out of Iran and transferred from ship to ship. Those involved go to all sorts of lengths to avoid being caught doing this — so we built an algorithm that looked at satellite images of ships to detect when two vessels were next to each other. On the 566 days when skies were clear between early January 2020 and 4 October 2024, we found 2 006 of these suspicious side-by-side formations — which our journalists could then investigate.
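Bloomberg has not published the algorithm behind that investigation, but purely as an illustrative sketch — assuming the vessel positions have already been extracted from the satellite imagery, and with made-up ship names and coordinates — the proximity check at its core might look something like this:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def side_by_side_pairs(ships, threshold_km=0.1):
    """Flag pairs of vessels closer than threshold_km for human follow-up.

    `ships` is a list of (name, lat, lon) tuples -- positions assumed to
    have been extracted from satellite images by an earlier step.
    """
    flagged = []
    for i in range(len(ships)):
        for j in range(i + 1, len(ships)):
            name1, lat1, lon1 = ships[i]
            name2, lat2, lon2 = ships[j]
            if haversine_km(lat1, lon1, lat2, lon2) < threshold_km:
                flagged.append((name1, name2))
    return flagged

# Hypothetical positions: two tankers roughly 45m apart, one far away
ships = [
    ("Tanker A", 26.5000, 56.2000),
    ("Tanker B", 26.5004, 56.2000),
    ("Tanker C", 27.1000, 55.0000),
]
print(side_by_side_pairs(ships))  # [('Tanker A', 'Tanker B')]
```

The machine's output is only a shortlist of suspicious formations; as in the real investigation, it is journalists who then do the verifying.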
AI is really good at pattern recognition — sorting through a big pile of images or documents or data to tell a story when the pile is too large and too fuzzy for a human to do it. Our data journalism chief, Amanda Cox, says her favourite analogy for large language models is “infinite interns”. You don’t always totally trust the results they bring, but, just like human interns, the machines keep on getting better every day: from toddler-level intelligence in 2020 to something close to PhD-level intelligence, at least when it comes to specific tasks, with the next iterations of ChatGPT and its ilk.
Most journalists love AI when it helps them uncover Iranian oil smuggling. Investigative journalism is not hard to sell to a newsroom. The second example is a little harder. Over the past month we have started testing AI-driven summaries for some longer stories on the Bloomberg terminal.
The software reads the story and produces three bullet points. Customers like it — they can quickly see what any story is about. Journalists are more suspicious. Reporters worry that people will just read the summary rather than their story. To which the honest answer is: yes, a reader might well do that, but would you prefer that they wasted their time skimming through paragraphs on a topic that they are not actually interested in? To me it’s pretty clear: these summaries, used correctly, both help readers and save time for editors.
So, looking into our laboratory, what do I think will happen in the age of AI? Here are eight predictions.
First, AI will change journalists’ jobs more than it will replace them.
Let’s look at a simple example — covering company earnings announcements. When I first came to Bloomberg, there was a “Speed” team of fast-fingered journalists who specialised in banging out headlines, hoping to beat our closest rivals by a few seconds. Then automation appeared — computers that could scour a company’s press release in fractions of a second. People feared for their jobs. But the machines needed humans. First to tell them what to look for — the number of iPhones sold in China could matter more to Apple’s share price than the actual income. The machines also needed humans to look for and interpret the unexpected — the sudden resignation of a CEO, for instance, could be meaningful or not.
We still employ roughly the same number of people to look at earnings, but the number of companies whose earnings we cover and the depth of the coverage around those announcements have both increased dramatically. And, I would argue, the job has also become more interesting; it’s not just about fast typing but working out what matters.
The same could well happen with AI — multiplying the amount of content that we produce. For instance, a stretched bureau might not have enough time to provide its readers with an explainer on the fall of Assad in Syria; but what if you could run four of your current news stories through an algorithm? In seconds you would have a crude draft of an explainer for a journalist to work on.
Another obvious multiplier of content is automatic translation — more pieces will reach more readers, and more journalists at big global organisations will be able to write in their own language.
Second, breaking news will still be enormously valuable, but for ever smaller amounts of time.
The value of news shows no sign of declining — with political changes now matching economic ones in their worth. Each time we reveal a policy shift in Washington, Paris or Beijing you can see currency markets jump. But crucially the amount of time that this counts as news keeps coming down. For the big set piece announcements — like, say, jobs figures — it has long been down to fractions of a second, and our competitors are often hedge funds using AI of their own to look at the numbers as quickly as we do. For a news story about an unexpected event, like a takeover or a CEO resignation, it is much harder to measure, but I would hazard an unscientific guess that the time it takes for prices to move has collapsed from several seconds to milliseconds in my time at Bloomberg.
AI is going to speed that process up still faster — and universalise it. A lot depends on how copyright deals are sorted out, but the chances are that ever more news, as it appears, will be immediately ingested into machines like ChatGPT that consider more than just one market — and added to what might be called immediate general knowledge. It will be available to everyone, or at least a much broader set of people than now.
Third, reporting will still have enormous value.
One of the basic points about most of the things that I have mentioned so far is that you need reporting. An AI summary is only as good as the story it is based on. And getting the stories is where the humans still matter. The machine can’t persuade a cabinet minister to tell you that the chancellor has just resigned; it can’t take a chief executive for lunch; it can’t write an original column or cajole an interviewee into admitting something on air.
Crucially, a newsroom will still need boots on the ground. Especially in a world where you can no longer presume that an emerging country like Indonesia or India is going to follow Western models of freedom, and where many countries are trying to clamp down on reporting, you will need people who know people.
Fourth, the change is likely to be greater for editors than for reporters.
Break down most editing jobs into a series of skills. Begin with managing a team of journalists: you will be unsurprised to know that I smugly still think newsrooms will need people like me. Next, commissioning a story: again, I think that will remain mainly a human skill — although at Bloomberg we already use AI to prompt us to consider writing a story (pointing out that a share price has jumped or social media is talking about an explosion).
However, once the story has been delivered and we are in the actual process of changing words on screens, I think you will see AI tools coming into play more and more, restructuring and rewriting drafts, checking facts and so on. Again, I am not talking about New Yorker level editing. But a lot of news reporting is more formulaic.
Consider for instance a sports report on a football match. In five years’ time, a British journalist could file her piece on a match at the King Power Stadium to her editor in London. A second later, both she and her editor would get an edited version: it will have been checked for both spelling and house style; there will be queries alongside dodgy assertions (why did the reporter claim that Liverpool dominated the match when Leicester in fact had 51% possession?); photographs and video segments will have been added — and links to the four Leicester players who scored goals. At this point my example is probably becoming unbelievable on multiple levels, especially to anybody who follows football. But I think you can see how AI probably will change editing jobs more than reporting ones.
Fifth, the world of Search will give way to Question and Answer.
As mass summarisation tools like ChatGPT and Perplexity suck in ever more stories, they are using them to construct answers. You can already see that when you ask Google a question. Rather than getting a long string of links to other stories, you get an answer that runs to a couple of sentences, sometimes close to a paragraph. My colleague Chris Collins, who heads up our product team for Bloomberg News, says that search as we know it could disappear.
That will make an enormous difference to anyone whose business relies on search advertising — and counting eyeballs. At the moment, when a reader clicks on a link, the publisher may receive a few cents from an advertiser. But as you get an ever-longer answer from the search engine (or rather answer engine), then those clicks will stop.
This is yet another reason why building a sustainable subscription business — and investing in long-term relationships with a committed set of readers — is so important for serious news publications. It is also a prompt to sort out copyright; we plainly need much more clarity over what can and can’t be used free of charge from our courts and legislators.
Sixth, hallucinations will be easier to solve in text than video or audio.
If you discuss AI with journalists, it’s likely that somebody will mention hallucinations — the idea that the machine will invent a story or be hoodwinked into inventing one. There will inevitably be a degree of trial and error about AI, and there is no shortage of people who think they can gain commercial or political advantage by scamming us. My hunch is that for the foreseeable future, the main danger is AI being used to generate fake video or audio that distorts or malignly amplifies an event that actually happened, rather than to invent completely fake events.
A lot of this is about the interplay between humans and machines. A few years ago, I followed the way that our breaking news team dealt with a subway shooting. They were prompted by social media that something bad had happened. And you could see the electronic chatter rapidly increase, but they would confirm it only once they had a human source that they trusted — in this case an eyewitness who was on the scene.
By contrast, video and audio are much harder to confirm. With the subway shooting a grisly photograph of an apparently dead person appeared on social media. But was it real? Had it been made up? At speed this is harder to verify. You have to check the picture against photographs of the subway station, inspect it to see whether pixels have been moved, and so on. Perhaps AI will make it easier to root out fraudulent audio and video, but so far most of the examples I have seen are of ever more elaborate fakes.
One footnote to this though. In terms of “fake news”, it is notable that those regimes that long peddled lies now tend to specialise in obscuring the truth in a cloud of fake information rather than insisting on a single untruth. In the old days, for instance, Pravda would simply state a lie — and then repeat it. Now, when something happens that the Kremlin does not like (like an airliner being shot down or a battle being lost), Russia’s army of bots generates a multiplicity of possible outcomes. The main objective is to confuse.
Seventh, personalisation will become more of a reality.
This again is a hunch. Personalisation has been the Holy Grail of digital journalism. Imagine if you only got the news you needed: your own personal newspaper. So far it has happened only rather clumsily. Many people don’t like handing over their details to news organisations — even if it would appear to be in their interest to do so. Some readers get creeped out when you suggest things to them. They worry about being stuck in opinion ghettoes. They miss that element of serendipity — the story that you didn’t know you would be interested in. It is the difference between visiting an old-fashioned bookshop, where you can browse and stumble upon an interesting novel, and being fed suggestions by Amazon.
AI will begin to crack this puzzle. Algorithms are good at working out what you might be interested in — in spotting patterns that people don’t see themselves. The infinite interns will be able to make connections more painlessly than those rather random “news for you” boxes that either give you too much or mean you miss out on the thing everybody else is talking about.
This predictive personalisation of content comes with a dark side. The same algorithms that predict that we might like a gardening course can also lead a teenager who has just been dumped by his girlfriend towards videos about suicide.
At the moment, social media companies are not liable for the content on their networks in the same way that an editor like me would be. Thanks to rules like America’s infamous “section 230”, the tech giants are treated as if they were more like telephone companies than media companies. They are responsible for the wires, but not what is said across them.
That argument is already pretty threadbare, and I expect it will become ever more so, the more powerful AI becomes. For decades, tobacco companies hid behind the argument that it was not their product that killed people — smoking was a matter of personal choice — but eventually that defence crumbled. I think the tech giants will lose that battle too, not least because anybody with children can talk about the addictiveness of their product. Which leads me to my eighth and final prediction:
Regulation is coming
For politicians everywhere, AI will simply become too complicated, too powerful, too intrusive and (if you live outside the US) too American for them to leave it alone. In the 1990s, US politicians wanted to free up young internet companies so they could innovate. Nobody now thinks the likes of Amazon, Microsoft and Facebook need to be protected from anybody. Rather the reverse. Companies have to do more than follow the law. Society only seems happy granting a corporation privileges like limited liability as long as that particular firm is seen as doing good. Various companies and indeed entire industries can lose their franchise from society; you go from being the cool innovators to “the malefactors of great wealth” (as Theodore Roosevelt called the robber barons a century ago when he ushered in antitrust laws).
You can see that happening at the moment with the tech giants. In the US the politics are complicated, because American lawmakers, even if they don’t like the tech giants, still see them as one reason why the US is ahead of China economically. In Brussels, there will be fewer such qualms — especially once the politicians of Europe wake up to how far they are behind on AI. As one businessman told me, “America innovates, China replicates, Europe regulates”.
So those are my broad predictions. Bear in mind, again, that I may have entirely missed the AI equivalent of Twitter being founded 10 days ago. But where do these eight somewhat-educated guesses leave our world — and the craft that has kept me gainfully employed since 1987? I think on balance we are allowed a degree of paranoid optimism.
Paranoia, because it is not hard to see how it could go wrong — where fake content proliferates, where journalism is caught between interfering politicians and technology superpowers, and where a lot of people in newsrooms lose their jobs because the machine can edit the copy. At its worst, in places like China and Russia, governments could use AI to further impede independent journalism — to chase down our sources, censor what we do and spin out elaborate webs of fake news of their own.
But at some point, optimism begins to break through. Go back to our story on the Iranian ships. All this new technology will give us even more ways to recognise patterns and to hold powerful people to account. In the past, entire countries could seem out of bounds. Now politicians are always being videoed somewhere. Often doing stupid things. As historian Timothy Snyder put it, “No matter how dark the evil, there is always a corner for ridicule’s little lantern.”
Better prepared
I am optimistic, too, that this time our industry is better prepared for technology. Editors and publishers are more on our guard with AI than we were with the internet and social media, less willing to give away our content, and so the flight to quality will be quicker.
I say “this time” because we often make the mistake of imagining that we are the first generation of journalists that technology has happened to. In fact, what has happened so far this century (and is about to happen again) is really just an old story being retold — of a new technology ushering in a period of madness and upheaval and then some sense.
At the beginning of the 19th century, the arrival of the steam press made it possible to churn out pamphlets and scandal sheets that could say anything about anybody. The titan of the penny press was the New York Sun, which rapidly became the biggest-selling newspaper on the planet. One of its most famous investigative series claimed to have discovered, with the help of a large but strangely very-hard-to-locate telescope, that the moon was populated by a wonderful menagerie of creatures, including half-humans/half-bats who built temples.
But gradually things began to sort themselves out. New Yorkers preferred to pay for news that was useful, that told them about the real world; and the new consumer goods companies preferred to advertise their wares alongside stories that were actually true. New titles appeared. The Economist was founded in 1843, The New York Times and Reuters both appeared in 1851, the Financial Times in 1888, The Wall Street Journal in 1889. A flight to quality happened.
As long as we focus on original reporting, on writing stories that people in power don’t want us to publish or that tell us something new about the world, and we do that without fear, favour or bias, we will do well.
- John Micklethwait is a former editor-in-chief of The Economist and current editor-in-chief of Bloomberg News