Guardrails must be put in place to make artificial intelligence as accessible to non-technical people as possible.
This is according to Hein Badenhorst, software leader at IBM South Africa, who told a recent press conference on AI that all kinds of companies – from law firms to car parts suppliers – will soon be using the technology, and that workshops and guardrails are needed to make it accessible to those without technical backgrounds.
“Organisations that push forward without considering the intricacies of AI ethics and data integrity risk damaging their reputation for short-term gains,” he said.
And although 79% of executives in a recent IBM survey said AI ethics were important to their enterprise-wide approach to AI, fewer than 25% have operationalised common principles of AI ethics. Some 58% said generative AI adoption brings significant ethical risks that will be difficult to manage without new governance structures.
Yet many companies are struggling to turn principles into practice. CEOs must take the reins, as 80% of executives say business leaders — not technology leaders — should be primarily accountable for AI ethics and responsible for educating others on emerging ethics issues, according to IBM.
More than half (57%) of consumers already say they are uncomfortable with how companies use their personal or business information, and 37% have switched brands to protect their privacy. Consumers rank companies in many traditional industries, including retail, insurance and utilities, lowest in responsible use of technology.
Compliance
Badenhorst said regulatory compliance is vital when dealing with customer data, and that if this data is compromised – as can happen – what really counts is how quickly a company responds to the breach. “That depends on how AI is being implemented in a company. We need the right tech to track production in large enterprises,” he said.
The EU’s AI Act is imminent, and China is moving ahead with robust regulations and guidelines. Business leaders around the world are feeling pressure to prepare — but globally, fewer than 60% of executives think their organisations are prepared for AI regulation.
“Good data and AI governance is necessary no matter how regulations evolve — and implementing responsible and trustworthy AI from the start will help achieve compliance when the time comes,” said Badenhorst.
“An AI Act for South Africa will mean working closely with regulators to understand their timeline, but we usually adopt EU and international regulations to get compliance sooner. The data governance must be built in from the start.” — © 2024 NewsCentral Media