South Africa’s financial regulator has warned that artificial intelligence brings both great opportunities and serious risks to the country’s financial system, particularly in the area of information security.
In a report entitled Artificial Intelligence in the South African Financial Sector, the Financial Sector Conduct Authority said that while AI has the potential to improve efficiency and innovation, it also poses risks to consumer protection, market conduct, financial stability and institutional soundness.
The report found that AI adoption is uneven across the sector. Adoption is highest among banks at 52%, followed by fintech companies at 50%. Rates remain relatively low among pension funds (14%), investment companies (11%), insurance companies (8%) and non-bank financial institutions (8%).
The biggest investments in AI in 2024 were also made by banks: more than 45% of banking institutions surveyed reported spending over R30 million on AI-related initiatives.
The FSCA said AI can strengthen cyber resilience by detecting threats, identifying vulnerabilities and predicting attacks through advanced data analysis. At the same time, it warned that cybercriminals are increasingly using AI to launch more sophisticated attacks that are harder to detect and prevent.
A key concern highlighted in the report is the increasing reliance on third-party technology providers. The FSCA warned that concentrating AI capabilities in a small number of vendors could create systemic risks.
The regulator pointed to an outage last year at Capitec, South Africa’s largest bank by number of customers, after a flawed software update from cybersecurity firm CrowdStrike disrupted all of the bank’s customer channels. The update affected companies around the world, including Delta Air Lines, which cancelled approximately 7,000 flights over four days.
The FSCA said a similar failure at a major AI service provider could cause cascading disruption across the financial sector.
Another risk identified is the potential for sensitive customer data to be compromised. AI models can reveal or infer sensitive personal information contained in their training datasets, potentially violating regulations such as South Africa’s Protection of Personal Information Act (POPIA) and the European Union’s General Data Protection Regulation.
The report also highlighted concerns about how AI models are trained. Risks include data poisoning, where training data is intentionally manipulated to skew results, and biases embedded in datasets. In financial services, such biases can lead to discriminatory outcomes such as higher loan rates or insurance premiums for certain groups.
The FSCA emphasized the importance of transparency and called on financial institutions to clearly explain AI-driven decision-making. It said customers must be informed when AI is used in decision-making processes that affect them, to build trust and assist with regulatory oversight.
The regulator also noted that South Africa has no unified AI governance framework. International instruments such as the AI principles established by the Organisation for Economic Co-operation and Development and the EU’s AI legislation are not legally binding on South African companies.
“AI systems may introduce new risks, including model risks, operational risks, and cybersecurity threats,” the report states. The report recommended that financial institutions develop comprehensive risk management frameworks, conduct thorough testing and validation of AI models, and establish robust incident response plans to address potential AI-related failures and breaches.