
The Financial Sector Conduct Authority (FSCA) has warned that artificial intelligence is both an opportunity and a risk for South Africa’s financial sector, especially when it comes to information security.
In a wide-ranging report, Artificial Intelligence in the South African Financial Sector, the FSCA said AI presents a number of risks related to consumer protection and market conduct, financial stability, and organisational integrity.
In the financial services sector, the report showed that AI adoption is highest among South Africa’s banks (52%), followed by fintechs (50%). Pension funds (14%), investment firms (11%), the insurance sector (8%) and non-bank lenders (8%) all have relatively low rates of AI adoption.
The banking sector made the largest investments in AI in 2024, with more than 45% of organisations surveyed reporting investments of R30-million or more each.
“AI can be a double-edged sword for cyber resilience. AI can enhance cybersecurity by detecting threats and identifying vulnerabilities through data analysis. It can forecast potential cyberattacks and improve security measures. However, cybercriminals can also use AI to conduct sophisticated attacks, making them harder to detect and prevent,” said the report.
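To make the detection side of that trade-off concrete, the sketch below flags an unusual login event with an unsupervised anomaly detector. The features, thresholds and library choice (scikit-learn's IsolationForest) are illustrative assumptions, not something prescribed by the FSCA report.

```python
# Minimal sketch: flagging anomalous login events with an unsupervised model.
# Feature names and thresholds are illustrative assumptions, not from the FSCA report.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per login: hour of day, failed attempts, data transferred (MB).
normal = np.column_stack([
    rng.normal(13, 3, 1000),      # logins cluster around business hours
    rng.poisson(0.2, 1000),       # few failed attempts
    rng.normal(5, 2, 1000),       # modest data transfer
])
suspicious = np.array([[3, 9, 250]])  # 3am login, many failures, large transfer

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 marks an anomaly worth investigating
```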
The FSCA outlined AI’s impact on the stability and resilience of the financial system as a key concern, especially if organisations in the financial sector end up relying on a small set of third-party AI service providers.
Third-party risk
Third-party vendor risk has become increasingly prominent. Capitec, South Africa’s largest bank by customer numbers, suffered an outage affecting all its customer channels last year after security software vendor CrowdStrike pushed a faulty update. The incident affected companies worldwide, including US airline Delta, which had to ground about 7 000 flights over four days.
The FSCA said the concentration of AI capabilities in a handful of third-party vendors presents similar risks to South Africa’s financial sector. “A cyberattack on one of these providers could lead to a cascading failure across the sector,” it said.
Read: AI is rewriting cybercrime – and Microsoft warns companies are dangerously behind
Another risk related to increased usage of AI in the sector is the potential exposure of confidential customer data. AI models could reveal or infer personal or sensitive information present in training data sets, leading to the violation of regulatory frameworks such as the Protection of Personal Information Act and the EU’s General Data Protection Regulation.
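As a rough illustration of one common mitigation, the sketch below strips obvious personal identifiers from text before it enters a training corpus. The patterns are assumptions for illustration only; production pipelines would rely on dedicated PII-detection tooling and human review.

```python
# Minimal sketch: scrubbing obvious personal identifiers from text before it is
# added to a training corpus. Patterns are illustrative assumptions only.
import re

PATTERNS = {
    "ID_NUMBER": re.compile(r"\b\d{13}\b"),                  # 13-digit SA ID number
    "PHONE": re.compile(r"\b0\d{2}[ -]?\d{3}[ -]?\d{4}\b"),  # local phone format
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Client 8001015009087, call 082 555 1234 or mail jane@example.com"))
# -> Client [ID_NUMBER], call [PHONE] or mail [EMAIL]
```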
How AI models are trained remains a focal point of potential risk. The FSCA highlighted the risk of “data poisoning”, where training data is deliberately infused with incorrect information in an effort to distort a large language model’s output. But even when input data has not been “poisoned”, biases in training data sets still exist. In the financial sector, such biases could marginalise certain groups, for example by charging them higher interest rates on loans or higher insurance premiums.
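A toy example of that effect: the sketch below trains a simple credit model on clean labels and on labels “poisoned” against one group, then compares the approval rates the two models produce. The data, the groups and the model are entirely synthetic assumptions, not drawn from the report.

```python
# Minimal sketch: how poisoned labels can skew a simple credit model against one group.
# The data, the two groups and the "poisoning" step are entirely synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
income = rng.normal(30, 10, n)                             # hypothetical income feature
group = rng.integers(0, 2, n)                              # two demographic groups, 0 and 1
labels = (income + rng.normal(0, 5, n) > 28).astype(int)   # true repayment outcome

poisoned = labels.copy()
poisoned[(group == 1) & (rng.random(n) < 0.4)] = 0         # relabel ~40% of group 1 as defaults

X = np.column_stack([income, group])
clean_model = LogisticRegression().fit(X, labels)
poisoned_model = LogisticRegression().fit(X, poisoned)

for name, model in [("clean", clean_model), ("poisoned", poisoned_model)]:
    approvals = model.predict(X)
    print(name, "approval rate for group 1:", round(approvals[group == 1].mean(), 2))
```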

“Financial institutions should provide clear explanations of AI-driven decisions, allowing customers and regulators to understand and trust these systems. Disclosure requirements play a significant role in maintaining transparency, with institutions encouraged to inform customers when AI is used in decision-making processes that affect them,” said the report.
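One hedged illustration of what such an explanation could look like: for a simple linear scoring model, each feature’s contribution to a decision can be listed alongside the outcome. The feature names and weights below are hypothetical.

```python
# Minimal sketch: surfacing per-feature contributions for a linear credit-scoring model
# so a decision can be explained to a customer. Feature names and weights are assumptions.
import numpy as np

feature_names = ["income_k", "years_employed", "missed_payments"]
weights = np.array([0.08, 0.3, -1.2])           # hypothetical trained coefficients
bias = -3.0

applicant = np.array([35.0, 4.0, 2.0])          # one hypothetical applicant
contributions = weights * applicant
score = contributions.sum() + bias
decision = "approved" if score > 0 else "declined"

print(f"decision: {decision} (score {score:.2f})")
for name, c in sorted(zip(feature_names, contributions), key=lambda x: x[1]):
    print(f"  {name:>16}: {c:+.2f}")                 # most negative factors listed first
```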
Contributing to the risks posed by AI in the financial sector is the lack of a uniform governance framework. Existing frameworks such as the Organisation for Economic Co-operation and Development’s Framework for the Classification of AI Systems or the EU’s AI Act are not binding on South African organisations.
“AI systems can introduce new risks, such as model risk, operational risk and cybersecurity threats. Financial institutions could benefit from developing comprehensive risk management frameworks to identify, assess and mitigate these risks,” the report said.
Read: South Africa faces ‘triple-edged sword’ as AI fuels next-gen cyber threats
“This includes conducting thorough testing and validation of AI models to ensure their accuracy and reliability. Additionally, institutions might consider establishing robust incident response plans to address potential AI-related failures or breaches.” — © 2025 NewsCentral Media