This is the first article in a series on Fraud and AI. In this series we will look at how new technologies such as ChatGPT and other ‘AI’ tools may assist fraudsters, what we can do to mitigate those risks, and break down some of the jargon and myths surrounding AI.
Artificial Intelligence (AI) is the current hot topic in tech – you will no doubt have seen mention of ChatGPT in the media, along with various stories of AI art winning competitions or causing problems in legal cases. Whilst some of the capabilities may be exaggerated, there is a very real threat from fraudsters using AI to both improve and streamline their operations – especially against businesses.
What is AI?
There are two broad categories of what is currently being called ‘AI’: Large Language Models (LLMs) like ChatGPT, and image generators such as DALL·E, many of which are built on (or descended from) Generative Adversarial Networks (GANs). These technologies share a lot of underlying ideas and overlap considerably, so whilst this is a significant simplification, it is best to think of LLMs for anything to do with text, and GANs for images and video.
LLMs work by feeding a ‘neural network’ (think of it as a digital brain) a training set comprising a huge amount of text, and then using that data to predict the next word in a given phrase. This sounds very simple, but it is extremely powerful – especially when combined with other methodologies such as reinforcement learning. ChatGPT is currently the most popular LLM, but similar ‘models’ are coming out daily.
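To see how ‘predict the next word’ works in principle, here is a deliberately toy sketch in Python: it counts which word most often follows each word in a tiny snippet of training text, then ‘predicts’ accordingly. This is nothing like the scale or sophistication of ChatGPT (which uses a neural network, not simple counting), but the core idea – learn from text, then guess the most likely continuation – is the same.

```python
from collections import Counter, defaultdict

# Toy 'language model': count which word most often follows each
# word in the training text, then predict the most frequent follower.
# Real LLMs use neural networks trained on billions of words.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
)

follower_counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follower_counts[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return follower_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" - the most common follower
```

Scale that counting idea up by billions of words and replace the counter with a neural network, and you have the essence of an LLM.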
GANs work by feeding two neural networks a training set of a large number of images tagged with identifiers (this is a face, this is a cow, etc.), which are then used in a sort of competition between the two networks. This is best explained with an example: let’s say you ask the GAN to create a picture of a face. One neural network generates an image, and the other (the adversary or ‘discriminator’) judges whether the picture looks like the ‘faces’ it saw in the training set. This keeps going until the adversary accepts the generated image as a picture of a face. Midjourney is currently one of the most popular image generators (its newest versions, like DALL·E’s, use related ‘diffusion’ techniques), but there are dozens of different ones available.
These are extremely simplified explanations, but the underlying technology is more accessible than you might expect – you can run both an LLM and a GAN on a powerful desktop computer. AI tools are usually in ‘the cloud’ because computing power there is relatively cheap at scale, and hosting a service makes monetisation easier.
It is also worth noting that there are both specialised and general LLMs and GANs – for instance, you could create a GAN trained only on pictures of cows. Everything you asked it for would come out as a type of cow, but it would be very good at creating different types of cow! It’s likely we will see more and more specialised AI products, some of which will be particularly useful to fraudsters.
How is AI used in fraud?
Both types of AI tool can help fraudsters deceive victims. GANs can be used to create realistic imagery, whilst LLMs can be used to write convincing text or to produce more professional (or native-English) emails and documents.
In the next two articles we will be looking at GANs and LLMs individually. We will see how they are used in fraud, and how we can mitigate the risk that each type of AI poses to businesses and individuals.