In 2024, online fraud is more prevalent than ever, and the risks for individuals and businesses have reached unprecedented levels.
Earlier this year, Interpol released its 2023 report on financial fraud. In this report, experts emphasize that the growing use of technology has enabled organized crime groups to more effectively target victims worldwide. The report details how the integration of Artificial Intelligence (AI), large language models, and cryptocurrencies with phishing- and ransomware-as-a-service business models has facilitated the development of more advanced and professional fraud campaigns. These campaigns can be carried out with minimal technical skills and at a relatively low cost.
As fraud becomes increasingly sophisticated, the methods used to combat it must evolve as well. This is why AI is now a crucial component of the arsenal deployed against fraudsters.
The identification step is critically important during customer onboarding. It typically occurs in two stages: the first involves filming the identity document, and the second involves recording a selfie video to verify that the person completing the process is indeed the document's holder.
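To make the flow concrete, here is a minimal sketch of that two-stage process. The helper functions are hypothetical placeholders standing in for the actual document and face checks; no specific vendor SDK is implied.

```python
# Minimal sketch of a two-stage identity-verification flow.
# verify_document() and match_faces() are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class VerifiedDocument:
    holder_name: str
    portrait: bytes  # face image extracted from the ID document


def verify_document(document_video: bytes) -> VerifiedDocument | None:
    """Stage 1: check the filmed ID document; return None if rejected."""
    ...  # placeholder for document authenticity checks


def match_faces(portrait: bytes, selfie_video: bytes) -> bool:
    """Stage 2: confirm the person in the selfie is the document holder."""
    ...  # placeholder for face matching and liveness analysis


def onboard(document_video: bytes, selfie_video: bytes) -> bool:
    document = verify_document(document_video)
    return document is not None and match_faces(document.portrait, selfie_video)
```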
Both stages are exposed to numerous types of fraud, chief among them document falsification and selfie manipulation, particularly through deepfakes.
To detect document falsification, AI models can be employed to identify inconsistencies between the submitted document and the reference template for that document type, covering aspects such as the background, fonts, and security elements. These deep learning models are trained on millions of documents and can detect even very slight alterations.
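As an illustration of this template-comparison idea, the sketch below uses a generic pretrained CNN as a feature extractor and flags a document whose embedding drifts too far from a reference template. This is a simplified assumption: production systems rely on models trained specifically on forged documents, and the threshold here is purely illustrative.

```python
# Sketch: flag a document whose CNN embedding is too far from the
# reference template for its document type. Threshold is illustrative.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained ResNet with the classification head removed, used here
# purely as a generic embedding function.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


def embed(image: Image.Image) -> torch.Tensor:
    with torch.no_grad():
        return backbone(preprocess(image).unsqueeze(0)).squeeze(0)


def looks_falsified(submitted: Image.Image, template: Image.Image,
                    threshold: float = 0.85) -> bool:
    """Flag the document if it drifts too far from the reference
    template (background, fonts, security elements)."""
    similarity = torch.nn.functional.cosine_similarity(
        embed(submitted), embed(template), dim=0)
    return similarity.item() < threshold
```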
Regarding deepfakes, the technology is advancing rapidly, making them ever easier to create and harder to detect. In April, for example, Microsoft Research unveiled VASA-1, its new real-time virtual avatar generation model. From a single photo and an audio clip, it can generate an extremely realistic video with synchronized lip movements and strikingly lifelike facial expressions.
Although AI can help detect deepfakes, for instance by spotting inconsistencies in textures, shadows, or unnatural movements, it should not be the only line of defense. Deepfake generation relies on GANs (Generative Adversarial Networks), which pair a generator with a discriminator: the discriminator learns to flag poorly generated images, and the generator learns from that feedback to produce more realistic ones. Deepfake generators are therefore inherently trained to deceive the very AI designed to detect them.
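This adversarial dynamic is easy to see in a minimal GAN training loop. In the sketch below (with illustrative shapes and hyperparameters), the discriminator D is trained to separate real images from generated ones, while the generator G is optimized precisely to make D misclassify its output.

```python
# Minimal GAN training step: D learns to flag fakes, and G is
# optimized specifically to fool D. Shapes are illustrative.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # e.g., flattened 28x28 images

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()


def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real, fake = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Train D: score real images as real, generated ones as fake.
    noise = torch.randn(batch, latent_dim)
    generated = G(noise).detach()
    loss_d = bce(D(real_images), real) + bce(D(generated), fake)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train G: produce images that D labels as real. This is the
    # step that makes generators inherently good at fooling detectors.
    noise = torch.randn(batch, latent_dim)
    loss_g = bce(D(G(noise)), real)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```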
This is why a hybrid approach is necessary, combining AI-based detection with mechanisms that prevent the injection of pre-generated videos at the moment of biometric capture.
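One such mechanism, sketched below under simplifying assumptions, is an active challenge-response liveness check: the server issues a random, signed, short-lived challenge that the user must perform on camera, so footage generated or recorded in advance cannot satisfy it. The detect_action function is a hypothetical placeholder for a gesture-recognition model.

```python
# Sketch: challenge-response liveness check against video injection.
# SERVER_KEY, the action list, and detect_action() are illustrative.
import hashlib
import hmac
import secrets
import time

SERVER_KEY = secrets.token_bytes(32)  # illustrative server-side secret
ACTIONS = ["turn_head_left", "blink_twice", "smile"]


def issue_challenge() -> dict:
    """Bind a random gesture to a signed, short-lived nonce."""
    action = secrets.choice(ACTIONS)
    nonce = secrets.token_hex(16)
    expiry = int(time.time()) + 30  # valid for 30 seconds only
    payload = f"{action}:{nonce}:{expiry}".encode()
    tag = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return {"action": action, "nonce": nonce, "expiry": expiry, "tag": tag}


def detect_action(frames: list, action: str) -> bool:
    # Hypothetical placeholder: a real system would run a
    # gesture-recognition / liveness model over the captured frames.
    raise NotImplementedError


def verify_response(challenge: dict, frames: list) -> bool:
    """Accept the capture only if it answers a fresh, authentic challenge."""
    payload = (f"{challenge['action']}:{challenge['nonce']}:"
               f"{challenge['expiry']}").encode()
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, challenge["tag"]):
        return False  # challenge was tampered with
    if time.time() > challenge["expiry"]:
        return False  # expired: likely replayed or pre-recorded footage
    return detect_action(frames, challenge["action"])
```

Because the challenge is unpredictable and expires quickly, a fraudster cannot pre-render a deepfake that performs the right gesture at the right time; the attack would have to happen live, where it is far harder to sustain.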
The fight against online fraud is an ongoing battle that requires continuous adaptation and the deployment of cutting-edge technologies. By embracing a hybrid approach that combines AI capabilities with proactive security measures, we can better protect individuals and businesses from the ever-evolving threats posed by online fraud.