Generative AI technology is advancing at a rapid pace, presenting new risks in the form of deepfakes and other avenues for fraud. The financial services industry needs to be prepared.
We find ourselves at a tricky juncture in the evolution of technology. On the one hand, the financial services industry is embracing the many benefits of digitalisation in driving efficiency. On the other, we face a rising threat, as the same digitalisation that is making our lives easier and our processes faster opens up new possibilities for crime.
One of the biggest cybercrime threats the industry faces right now is the risk posed by generative artificial intelligence (also known as generative AI or GenAI). Most of us had not heard the term “generative AI” until the end of 2022, when ChatGPT exploded onto the scene. Here was a computer programme that you could type a one-sentence prompt into and, like something out of Star Trek, it would create a whole article in seconds.
Hot on its heels came similar tools that could generate images and music, and it wasn’t long before video content followed. At first it was quirky. The content generated by these tools was not of a particularly high standard and the images, in particular, were laughably unbelievable.
But like all things digital, it advanced rapidly. The technology, which was designed to learn and improve with use, did exactly that. The algorithms powering the tools were refined as testing revealed where improvements could be made. And people got more adept at using them.
In just two short years, generative AI tools have gone from science fiction to curiosity to mainstream use, with capabilities that are nothing short of astounding.
AI in finance
“The implementation of GenAI into an insurance business has many advantages,” says Graham Charlton, Financial Director at Consort Technical Underwriting Managers. “Although the use of GenAI in the insurance industry is a new technological concept, the potential for its use is significant and, without a doubt, there will be a ramp-up in the development of this technology in the short to medium term. This development will potentially lead to advancements in automated underwriting, accelerated claims handling and processing, fraud detection, predictive analytics and, hopefully, an overall better customer experience.
“The insurance industry has, up until now, relied on human expertise to analyse and handle claims. Depending on the severity and complexity of a claim, finalising a claim can take anything from a day or two, to weeks or months – or even years in some circumstances. Most of us would strive to improve on this and find ways to make the process more efficient and effective.”
However, as useful as the technology may be, he adds that the risks associated with it cannot be ignored: “As with anything new, there are certain risks from an insurance perspective that need to be considered and safeguarded against.”
Michael Petersen, Chief Executive at Risk Benefit Solutions (RBS), agrees that generative AI has great potential to transform financial services for the better. “This technology has the potential to substantially improve operational efficiency, consumer experiences, and productivity in the financial services sector if it is utilised effectively.” Petersen highlights several areas where AI can make a difference:
- Enhanced productivity and efficiency: “Generative AI can automate repetitive and time-consuming tasks, enabling employees to concentrate on high-value activities. For instance, it can summarise intricate insurance reports, generate policy proposals, and prepare regulatory documents.”
- Enhanced customer interactions: “AI-powered conversational interfaces can enhance customer satisfaction and loyalty by allowing customers to interact with insurers more naturally and intuitively.”
- Fraud prevention and detection: “Generative AI can assist in detecting and preventing fraudulent claims by analysing immense quantities of data and identifying anomalies, thereby reducing the probability of substantial financial losses.” (A brief illustrative sketch of this idea follows the list.)
- Enhanced risk management: “AI can analyse intricate data sets to identify potential risks, enabling insurers to make more informed decisions and manage risk more effectively. This capability could improve the stability and resilience of the short-term insurance market.”
- Personalised services: “Generative AI can customise insurance products and services to meet customers’ unique requirements, increasing customer satisfaction and loyalty. This personalisation can also facilitate financial inclusion by reaching underserved populations.”
- Improved compliance: “AI has the potential to automate data analysis and document generation, thereby enabling insurers to avoid non-compliance risks and comply with regulatory requirements.”
- Improved transparency: “Generative AI can enhance transparency in insurance transactions, enabling customers to make more informed decisions and comprehensively understand their coverage and claims processes.”
- Claims automation: “AI is currently being implemented to optimise and, in numerous instances, automate the claims process. As a result, customer satisfaction may be enhanced through the expedited and more precise settlement of claims.”
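For readers who want a concrete sense of the anomaly-based fraud screening mentioned in the list above, here is a minimal sketch. It is illustrative only: the claim features, the data and the choice of a scikit-learn isolation forest are assumptions, not a description of any insurer’s actual system.

```python
# Minimal illustration: flag unusual claims with an unsupervised anomaly detector.
# Field names, values and thresholds are hypothetical; real systems use far richer features.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [claim_amount, days_since_policy_start, prior_claims_count]
historical_claims = np.array([
    [12_000, 420, 0],
    [8_500, 610, 1],
    [15_300, 390, 0],
    [9_900, 800, 2],
    [11_200, 500, 1],
])

# Train on historical claims assumed to be mostly legitimate.
detector = IsolationForest(contamination=0.05, random_state=42)
detector.fit(historical_claims)

# Score an incoming claim: a large amount filed days after inception, with prior claims.
incoming_claim = np.array([[95_000, 14, 3]])
if detector.predict(incoming_claim)[0] == -1:
    print("Claim flagged for manual review")   # scored as anomalous relative to history
else:
    print("Claim passes automated screening")
```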
There are other areas where generative AI technology is already being deployed with great success. “This technology is used to power smart chatbots, code generators and Natural Language Generation (NLG), allowing financial services organisations to handle many time-consuming tasks, such as developing new product images, generating code or preparing first drafts of marketing material,” says Modeen Malick, Principal Systems Engineer at Commvault.
He adds that AI-driven smart chatbots are used to personalise services and offerings, and to upgrade customer-facing chatbots so that they can understand and respond to natural language.
On the flip side
Of course, generative AI has not been without its controversies. One issue is the question around the reliability of its outputs – AI tools learn from training material they’re given access to. If the training material contains inaccuracies, these will be reflected in the outputs the AI produces. (This has already landed more than a few lazy-yet-tech-savvy students in hot water with AI-generated school and university assignments.)
Another issue is bias. Bias in the material that an AI tool is trained on can lead to a perpetuation of that bias in its outputs. For example, AI image generators have been found to perpetuate Western beauty standards and ideologies because of who is training them and with what material. Uninformed members of the public who have asked AI for investment advice have learnt the hard way that it’s easy to be led astray when you place all your trust in a machine and don’t have the knowledge to critically evaluate its recommendations.
And there’s the issue of copyright. Because AI uses existing material to produce its outputs, the tech companies behind the AI tools have been accused of plagiarism by musicians, artists, writers, and even individuals whose likenesses and voices have found their way into AI-generated replicas.
It’s a minefield we were not prepared for. But while lawmakers scramble to catch up, criminals have wasted no time embracing the technology and all its capabilities.
Scary possibilities
It’s important to understand the various ways criminals are using generative AI technology. “Generative AI is capable of creating highly realistic images, audio, and text that can be virtually indistinguishable from real content,” says Kali Bagary, CEO of The Data Company. “This has significant implications for our industry. The capabilities that worry me the most include:
- Deepfakes: These AI-generated videos or audio recordings can impersonate real individuals convincingly, posing a serious risk for identity theft and fraud.
- Synthetic identity creation: AI can craft synthetic identities by merging real and fake information, which can be used for fraudulent activities.
- Automated phishing attacks: AI can generate highly personalised and convincing phishing emails, making it easier for scammers to deceive their targets.
- Manipulation of financial data: AI has the potential to create fake financial reports or alter transaction data, undermining trust in financial records.”
Werner Bosman, Chief Executive of PPS Short-Term Insurance, is also concerned about these capabilities, noting that deepfakes pose risks that can be far reaching. “Deepfakes, which use advanced neural networks to create highly realistic but fake images, videos and audio recordings, pose significant risks. These tactics can be used to impersonate executives or other high-ranking individuals, enabling the authorisation of fraudulent transactions or manipulation of financial data,” he says. “Such actions can cause market fluctuations through false statements and damage the reputations of financial institutions or key personnel, ultimately leading to a loss of client trust and business.”
He adds that phishing attacks powered by AI can also be extremely damaging. “Such attacks can lead to data breaches, unauthorised access to sensitive financial data and credential theft, allowing hackers to access accounts and execute unauthorised transactions,” says Bosman. “Moreover, AI can generate realistic synthetic identities using a combination of real and fake data, which can be used to open fraudulent accounts for money laundering, obtaining loans or credit cards and other financial crimes. These advanced patterns can evade traditional fraud detection systems, making it harder to identify and stop fraudulent activities.”
It’s not only financial services providers (FSPs) who are at risk of these sophisticated, AI-enabled cyberattacks. Clients are also vulnerable, as Bagary points out. “AI can impersonate customer service representatives, tricking customers into revealing sensitive information or transferring funds.”
What is especially concerning is how quickly things are moving and at what scale. “The rise of AI poses new questions for many working in the insurance industry, given the technology’s abilities to process huge quantities of complex text, audio and video, as well as to generate new content across those three formats. Given its powerful capabilities, AI poses various unique risks for insurers, such as algorithmic bias and decision-making errors, hallucination and IP infringement,” notes Thokozile Mahlangu, Chief Executive Officer at The Insurance Institute of South Africa NPC (IISA).
Mahlangu continues: “Crucially, AI can also render traditional risk models outdated, which means that the insurance industry must adapt swiftly to maintain relevance and efficacy. Additionally, cybercriminals can use AI technology to generate fraudulent claims, perpetrate identity theft and carry out data breaches, as well as to manipulate biometric data such as fingerprints for theft claims. At the same time, concerns have been raised about the impact of AI on claims management, particularly around data privacy, job displacement, transparency and fairness, and potentially increased complexity, leading to greater time and cost.”
Scams in action
As alarming as this sounds in theory, the reality is that generative AI scams are already happening. “Even though the financial services industry in South Africa has yet to encounter a substantial number of generative AI-based schemes, there have been several notable examples on a global scale,” says Petersen.
Below are real-world examples of scams that have been perpetrated using AI.
Deepfake CEO scheme: “In 2019, a European company was defrauded of €220 000 (over R4 million) by criminals who employed artificial intelligence (AI) to clone the CEO’s voice and request a fraudulent wire transfer,” says Petersen.
Automated investment proposal scam: “In 2021, fraudsters employed artificial intelligence (AI) to produce persuasive investment proposals directed at investors in the United States and Europe,” says Petersen.
Crypto trading scams (globally): “AI-generated deepfake videos of celebrities endorsing fraudulent cryptocurrency schemes have circulated online. For instance, fake videos of Elon Musk promoting fake crypto investments have misled many investors into transferring funds to scammers,” says Bosman.
Social media impersonations (various countries): “Scammers have used AI to create deepfake profiles on social media, impersonating executives and celebrities to solicit investments or sensitive information from unsuspecting victims. These deepfakes can generate convincing videos and images that make the scams appear legitimate,” says Bosman.
Deepfake video conference scam: Malick points to a recent incident in which a multinational company lost close to half a billion rand in a scam that saw attackers using deepfake technology to trick an employee at its Hong Kong branch. Using a digitally recreated version of the company’s Chief Financial Officer (CFO), the scammers ordered money to be transferred during a conference call with the employee.
“As it turns out, the scammers used deepfake technology to create convincing versions of the meeting’s participants from publicly available video and other footage. So, apart from the employee, all the participants on the conference call were deepfake representations of real people, such as managers and directors at the company,” says Malick.
Security measures do, of course, exist, but they are not infallible. “The identity verification systems that currently exist in the industry can all be susceptible to synthetic identities, even on the assumption that financial services companies have implemented a defensive, well-functioning and well-governed anti-fraud process with several layers to break through,” cautions Jacob Tshabalala, Head of Data Management at Lombard. “Such readiness for digital fraud attacks is more prevalent in the banking environment; however, other players such as small to medium-sized fintechs and insurance companies may be easier to deceive.”
Keeping your guard up
Any tool in the wrong hands is dangerous and it’s evident the criminal threats posed by generative AI are real and numerous. However, there are steps that can be taken to mitigate these risks.
At the policy level, steps are being taken to put best-practice guidelines and regulations in place. “A recent report by Deloitte’s Asia Pacific Centre for Regulatory Strategy (ACRS) sheds light on the evolving regulatory landscape and offers guidance for insurance businesses,” shares Marike van Niekerk, Manager: Legal, Compliance, Marketing & Communications at MUA, adding that some key points from the report include:
- “Policymakers and regulators are reevaluating existing AI frameworks to address new technological risks and ensure they remain fit-for-purpose across insurance services.
- Regulators are balancing the need for technological innovation with the imperative to ensure consumer safety, addressing concerns such as bias, intellectual property, and data security.
- Insurance providers should develop AI governance frameworks to support risk management and future regulatory compliance. They should be accountable for the outputs generated by AI applications.
- Providers should evaluate and mitigate the risk of bias or discrimination against vulnerable policyholders due to generative AI applications.
- Identifying and ensuring compliance with data protection requirements is crucial, especially concerning the parties involved in data collection, storage, and processing.”
Meanwhile, organisations also need to take steps to protect themselves and their clients. “It is no longer a matter of if but rather when an organisation is going to be attacked,” says Malick. “Organisations need to develop cyber resilience, optimise their backups and test their environment’s vulnerability regularly.”
The general consensus is that a comprehensive, multi-faceted approach is required. Key components include:
Training and awareness
“Regularly train employees on the latest types of AI-generated scams and phishing techniques. Simulated phishing exercises can help employees recognise and respond appropriately to suspicious activities,” says Bosman. “Establish clear protocols for verifying unusual or high-value requests, such as voice verification through known contacts or secondary confirmation channels.”
Bagary recommends equipping clients with tools, as well as knowledge and education. “Offer clients access to fraud detection tools that can identify suspicious activities and alert them to potential threats,” he suggests.
Malick adds that organisations need to ensure that their employees are aware of potential threats and that they remain vigilant against any suspicious activity.
“Knowledge is power. People are prone to making mistakes, but they should be trained to question when something appears to be ‘phishy’. Organisations must ensure that their people are equipped with the requisite knowledge not to fall victim to scammers,” he says. “From an industry perspective, organisations should introduce advanced threat detection technology that includes the ability to detect shape-shifting AI.”
Robust cybersecurity policies
“Companies must also implement strict data security measures and conduct regular cyber penetration tests and staff training to raise awareness about potential threats,” says Mahlangu.
“Develop and enforce comprehensive cybersecurity policies that include guidelines for handling sensitive information, secure communication practices and protocols for reporting and responding to suspected fraud. Conduct regular security audits and penetration testing to identify and mitigate vulnerabilities in systems and processes,” says Bosman.
Secure authentication procedures
“To prevent impersonation, improve identity verification protocols, such as biometric checks and multi-factor authentication,” says Petersen.
Bosman adds that this should have a layered approach. “Enhance security protocols by requiring multiple forms of verification before authorising sensitive transactions. This could include biometric authentication, such as facial recognition or fingerprint scans, combined with traditional methods like passwords and security tokens.”
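To illustrate the layered approach Petersen and Bosman describe, the sketch below gates transaction approval on several independent factors, demanding more of them as the value rises. The factor names, thresholds and approval rule are hypothetical; a real deployment would call dedicated identity, biometric and token services rather than in-process checks.

```python
# Illustrative only: gate sensitive transactions on several independent verification factors.
# Factor names and the approval rule are hypothetical, not a prescribed standard.
from dataclasses import dataclass

@dataclass
class VerificationResult:
    password_ok: bool          # something the user knows
    token_ok: bool             # something the user has (e.g. a hardware or app token)
    biometric_ok: bool         # something the user is (e.g. fingerprint or face match)
    callback_confirmed: bool   # out-of-band confirmation via a known contact number

def authorise_transaction(amount: float, checks: VerificationResult) -> bool:
    """Require more independent factors as the transaction value rises."""
    passed = sum([checks.password_ok, checks.token_ok,
                  checks.biometric_ok, checks.callback_confirmed])
    if amount >= 1_000_000:      # high-value: all four factors, including a human callback
        return passed == 4
    if amount >= 50_000:         # mid-value: any three factors
        return passed >= 3
    return passed >= 2           # routine: standard two-factor authentication

# Example: a large transfer with no out-of-band callback is refused.
checks = VerificationResult(password_ok=True, token_ok=True,
                            biometric_ok=True, callback_confirmed=False)
print(authorise_transaction(1_500_000, checks))  # False
```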
Secure communication channels
“Encourage clients to use secure communication channels for sensitive transactions and provide tools to verify the authenticity of communications,” says Bagary.
Tshabalala adds: “If companies don’t establish an authentic way to communicate with their clients, the fraudsters will.”
Collaboration and information sharing
The fight against generative AI threats is everyone’s fight – and it will need the industry to band together. “Promote collaboration within the industry by sharing information about emerging threats and best practices through industry associations and regulatory bodies,” says Bosman.
“Participate in threat intelligence networks to stay updated on the latest scam tactics and technologies used by cybercriminals.”
Bagary agrees, making the same recommendations about industry-wide information sharing and participation in threat intelligence networks.
Regulatory compliance and governance
“Ensure compliance with evolving regulations related to AI and cybersecurity,” says Bosman. “Implement governance frameworks that include regular audits, compliance checks and transparent reporting mechanisms. Advocate for stronger regulations and standards that address the specific challenges posed by generative AI in the financial sector.”
Mahlangu adds that “the industry must also develop and communicate clear policies for how clients can report suspected fraud or identity theft and provide support to assist the clients should they fall victim to scams.”
Using generative AI for good
It’s important to remember that generative AI is not the threat in itself – it is merely a tool that can be put to different ends, depending on who is using it. That means generative AI can itself be harnessed to combat AI-driven threats.
“With cybercriminals using AI to launch attacks against organisations, businesses can no longer rely on traditional cybersecurity solutions to protect themselves. Instead, AI-driven threats must be combated with AI-powered security measures such as advanced threat prediction and detection tools,” says Mahlangu.
One area where this is being used successfully is in the assessment of potentially fraudulent claims. “The good news is that although GenAI can be used to simulate and submit fraudulent claims, this same ability can be used to train AI fraud detection systems, further enhancing their capacity to identify fraudulent claims,” says Paul Charlton, Head of Legal at Consort. He notes this has proved particularly useful in the case of smaller losses, where insurers often need to step into the role of an assessor and determine the legitimacy of claims based on the information presented by either a broker or the insured.
“We can now assess smaller losses to a much greater level of accuracy, which brings benefits for both insurer and insured,” says Charlton.
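Charlton’s point – that simulated fraudulent claims can be fed back into detection systems – can be sketched loosely as follows. Every detail here, from the feature names to the synthetic fraud examples, is invented for illustration; a real system would use far larger datasets and more careful validation.

```python
# Sketch only: augment a claims training set with simulated fraudulent examples
# and fit a simple classifier. Features and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [claim_amount, days_since_policy_start, supporting_documents_count]
legitimate = np.array([[9_000, 400, 5], [12_500, 700, 6], [7_800, 350, 4]])
simulated_fraud = np.array([[80_000, 10, 1], [65_000, 21, 0], [92_000, 7, 2]])  # GenAI-style simulations

X = np.vstack([legitimate, simulated_fraud])
y = np.array([0] * len(legitimate) + [1] * len(simulated_fraud))  # 1 = fraudulent

model = LogisticRegression().fit(X, y)

# Score a small, well-documented loss submitted long after policy inception.
new_claim = np.array([[11_000, 520, 5]])
print(f"Estimated fraud probability: {model.predict_proba(new_claim)[0, 1]:.2f}")
```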
However, there are far-reaching applications that extend to all areas of cybersecurity. “There is a need for AI tools to augment or replace the existing anomaly detection, endpoint protection, intrusion detection, data loss prevention, firewall and other security infrastructure,” says Tshabalala. “The newness of the problem makes it difficult for companies to proactively benchmark, assess and measure the risks associated with GenAI, so the safest approach is to assume and prepare for the worst.”
Bosman notes that AI-driven tools can be used to detect anomalies and patterns that indicate deepfake media and fraudulent activities. “These tools can analyse audio, video and text for inconsistencies that human detection might miss.” He also recommends the industry adopt blockchain verification. “Implement blockchain technology to create a secure and immutable record of transactions and communications, making it harder for fraudulent activities to go unnoticed.”
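Bosman’s blockchain suggestion is essentially about making records tamper-evident. The toy sketch below chains transaction records together with SHA-256 hashes so that altering any earlier entry breaks verification of everything after it; an actual deployment would use a distributed ledger rather than a single in-memory list.

```python
# Toy illustration of a tamper-evident, hash-chained record of transactions.
# A real deployment would use a distributed ledger, not one in-memory list.
import hashlib
import json

def record_hash(record: dict, previous_hash: str) -> str:
    payload = json.dumps(record, sort_keys=True) + previous_hash
    return hashlib.sha256(payload.encode()).hexdigest()

chain = []
previous = "0" * 64  # genesis value
for record in [{"txn": 1, "amount": 50_000}, {"txn": 2, "amount": 120_000}]:
    entry = {"record": record, "prev": previous, "hash": record_hash(record, previous)}
    chain.append(entry)
    previous = entry["hash"]

def verify(chain: list) -> bool:
    """Recompute every hash; any retrospective edit breaks the chain."""
    previous = "0" * 64
    for entry in chain:
        if entry["prev"] != previous or entry["hash"] != record_hash(entry["record"], previous):
            return False
        previous = entry["hash"]
    return True

print(verify(chain))                        # True
chain[0]["record"]["amount"] = 5_000_000    # attempted tampering
print(verify(chain))                        # False
```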
In coming up with these solutions, Tshabalala emphasises the importance of getting input and buy-in from different teams within the business. “Although a lot of financial services companies outsource their cybersecurity capacity, more attention needs to be dedicated to effectively implementing a multi-layered strategy of protection,” he says. “Essentially, you need an answer for any attack: machine learning algorithms, behavioural analytics and robust authentication protocols. Education on such topics for the broader company is a major contributing factor; it shouldn’t end at onboarding policies but needs to be revisited periodically.
“Fostering collaboration with internal data science teams on the creation of anti-fraud models is also a likely step for companies that have large and diverse stores of proprietary data collected over time. Furthermore, implementing the retrieval-augmented generation (RAG) method is a safe way for smaller companies to make use of GenAI models in the cloud without exposing their sensitive data environments to open-source tools. The successful implementation of the RAG method could counteract the adaptive-learning capabilities of GenAI models if it follows best practices that respect model explainability and remove data biases in trained models.”
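To give a sense of how the RAG approach Tshabalala mentions can keep sensitive data under a company’s own control, here is a minimal sketch: documents are indexed and retrieved locally, and only the few most relevant passages, rather than the whole data store, are passed to a hosted model. The documents are fictitious and call_hosted_model is a placeholder, not any provider’s real API.

```python
# Minimal RAG sketch: retrieve relevant passages locally, then send only those
# passages (not the full data store) to a hosted generative model.
# call_hosted_model() is a placeholder, not a real provider API.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

internal_documents = [
    "Policy 123: household contents cover excludes damage caused by gradual wear.",
    "Claims procedure: losses above R50 000 require an assessor's report.",
    "Fraud guideline: claims lodged within 30 days of inception need extra checks.",
]

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank internal documents against the query locally and return the best matches."""
    vectoriser = TfidfVectorizer()
    matrix = vectoriser.fit_transform(documents + [query])
    query_vec = matrix[len(documents)]
    doc_matrix = matrix[:len(documents)]
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    return [documents[i] for i in scores.argsort()[::-1][:top_k]]

def call_hosted_model(prompt: str) -> str:
    # Placeholder standing in for a call to a cloud GenAI service.
    return f"[model response to: {prompt[:60]}...]"

query = "Does a claim two weeks after the policy started need special handling?"
context = "\n".join(retrieve(query, internal_documents))
answer = call_hosted_model(f"Answer using only this context:\n{context}\n\nQuestion: {query}")
print(answer)
```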
It’s evident that generative AI is not a fad. The question is, how equipped are we for this new reality?
At a glance
As insurance fraud schemes become more complex with the growth of generative AI tools, it’s crucial to re-evaluate risk mitigation processes and strategies. Marike van Niekerk, Manager: Legal, Compliance, Marketing & Communications at MUA, suggests several insights and actions to consider:
- Develop training programmes to teach effective methods of spotting and reporting deepfake-assisted fraud and rely on identity and access management (IAM) systems for sensitive transactions.
- Improve training for employees to detect phishing, compromised emails, and business email compromise, especially for those handling sensitive transactions.
- Limit the types of questions and data stored, ensuring compliance with regulations and monitoring for sensitive customer data.
- Maintain thorough documentation of model features, intended use, training data characteristics, and regular model performance testing to meet regulatory guidance.