The Bank of England recently convened a summit of major banks and technology companies in response to growing concerns about fraudsters using artificial intelligence (AI) for scams in the UK. Governor Andrew Bailey highlighted the risks posed by the technology and called for urgent action.
Representatives from Google, OpenAI, and Meta (owner of Facebook, Instagram, and WhatsApp) were questioned about the safeguards they have in place to prevent the misuse of AI. The threat from AI-enabled scams is significant, but the UK still has an opportunity to strengthen its defenses.
Fraud in the UK is already a significant problem, costing consumers over £1.2 billion last year, and criminals' adoption of AI could drive an even larger surge in scams. Anthony Browne, a Member of Parliament and anti-fraud advocate, emphasizes that the government is taking the risks associated with AI seriously and that it is essential to prepare for the threat.
In an effort to combat fraudsters, banks are trialing AI-powered chatbots through Stop Scams UK, a collaborative industry group. The project aims to gather intelligence on scammers by deploying chatbots to engage them in conversation.
In criminal hands, the same technology works the other way: chatbots that simulate human conversation can manipulate victims by impersonating trusted authorities or companies, sending genuine-seeming messages on social media to trick people into giving away money or personal information.
Criminals can use chatbots to run multiple scams simultaneously, making their operations more efficient. The banks' trial turns the technique back on the fraudsters, pitting chatbot against chatbot to waste scammers' time and disrupt their activities. The work is urgent because fraudsters are adopting AI quickly.
AI also makes scams more convincing: AI-generated messages, cloned voices, and deep-fake videos can all deceive victims into parting with their money. So while AI tools can help detect scams and track down fraudsters, consumers need to remain vigilant against highly credible AI-generated cons, and tech firms are being urged to do more to protect users and prepare for these evolving threats.
Impersonation scams using cloned voices are among the fastest-growing techniques employed by fraudsters. Using cheap online software, a fraudster can clone a voice to imitate a loved one or family friend, tricking unsuspecting victims into providing financial assistance.
In some cases, scammers have generated AI voice clips to impersonate someone in distress and demand a ransom. Although these scams have been most prominent in the US, they have already made their way to the UK. AI now enables fraudsters to imitate a voice from as little as three seconds of recorded audio or video.
The extent of voice-cloning technology is demonstrated by Play.Ht, an AI voice-generation start-up that allows users to clone a voice from a short audio clip and customize it to speak different languages and express various emotions.
Such advances demand greater caution from consumers when taking calls or watching videos. Freakishly realistic videos may use scripts written by AI, so it is worth listening carefully and assessing whether the audio genuinely resembles human speech.
Impersonation scams are common: criminals frequently pose as bank staff, the police, and other trusted authorities. Unfortunately, most victims are unaware that AI has been used, which makes such scams difficult for authorities to track.
Action Fraud has so far recorded few AI voice-cloning scams, but customers should be wary of any unusual payment request and take a moment before making a transaction. Fraudsters' adoption of AI raises new challenges, and consumers must stay vigilant to protect themselves.