The rush in some regions to regulate artificial intelligence while the technology is still in its infancy could stifle its development and rob humanity of its benefits before they are realised.
That’s the stark warning from Bronwyn Howell, a telecommunications and public policy researcher at the Victoria University of Wellington in New Zealand, who spoke on a recent webinar arranged by the University of South Africa (Unisa).
Howell’s view is based on research done with AI development firms and regulatory authorities in Washington, DC earlier this year while she was on research and study leave from the university.
“The question we must ask is this: are we trying to create something to regulate not a real harm that we are aware of but a feared harm by stopping anyone from actually going into the jungle in the first place? Are we using regulation to assuage anxious consumers that we are seen to be doing something before we fully understand it? Have we overreacted when it would have been better to wait and gather more information?”
Howell is critical of the “risk management approach” taken by US and EU regulators on AI. The EU AI Act defines risk as “the combination of the probability of an occurrence of harm and the severity of that harm”. However, Howell argues that when it comes to AI, regulators are not dealing with risk per se, but rather with the more complex world of uncertainty.
The reluctance to acknowledge high levels of uncertainty in the AI space introduces biases into regulation and risks imposing old ways of thinking on new technologies. Howell said that when regulators don’t understand a new technology well enough, they revert to what they already know and try to prevent things that were understood to be harms in the past.
‘Protect people’
However, according to Johan Steyn, founder of AI for Business and an advocate for human-centred AI, the technology has the potential to displace jobs and widen the gap between rich and poor – the downstream effects of which could destabilise economies.
“The goal of regulation is to protect people, not exploit them. The typical way it happens is that regulation follows innovation, but often something has to go wrong before we start regulating. So, there had to be car crashes before we got seatbelts.
“But in the age of rapidly expanding technology, the split between regulation and innovation is widening, so what has to go wrong before we wake up to regulate AI?” he said in an interview with TechCentral.
Steyn is part of a working group of experts from various industries that is preparing an advisory report to government that may influence how South Africa regulates AI. He said the EU AI Act is a sound law on the subject and South African legislation should take a similar approach.
But according to Andile Ngcaba, executive chairman at Convergence Partners and president of the Digital Council Africa, South African regulators must be careful not to imitate the mistakes made by their EU peers if the benefits of technology are to be realised locally.
“Our friends in Europe … say that sometimes Europe writes standards and technology about AI before AI is even tested. I have been telling colleagues here to please not copy this [approach] because they are going to hinder innovation… We must not make decisions that are counterproductive to innovation,” Ngcaba said at an event hosted by fibre operator Maziv on Thursday.
According to Howell, generative AI is conceptually different to computing methodologies that have come before. Gen AI models are designed to learn and change, so they are expected to give different outputs for the same inputs over time, making them unpredictable. This complexity, she argued, is why regulating AI from a risk management perspective is not ideal.
For Steyn, on the other hand, the fact that generative AI can learn and change is exactly why regulators should get ahead of the curve and rein in the technology before its capabilities expand beyond human control.
“Imagine we learnt this year how to split the atom and how to create nuclear power and nuclear bombs. Would a wait-and-see approach to regulation work then? AI won’t destroy a city physically like a weapon, but the impact it will have on us in the next few years is so severe, potentially, that we don’t have any time to sit and wait,” he declared. – © 2024 NewsCentral Media