There is mounting concern that cybercriminals will use generative AI to create convincing phishing emails that are almost impossible for users to identify and are also capable of bypassing email security solutions. Researchers have demonstrated that the guardrails around tools such as ChatGPT can be bypassed to create high-quality phishing emails, and tools such as WormGPT, which have none of these controls to prevent malicious use, are readily available to cybercriminals.
While there is evidence that cybercriminals are already using generative AI, recently published research from IBM Security suggests generative AI tools are not yet as good as humans at creating phishing emails. According to Stephanie Carruthers, a social engineering expert and Global Head of Innovation & Delivery at X-Force, IBM Security, generative AI can nevertheless save threat actors around 16 hours of work when generating scam email campaigns.
Carruthers said it would generally take her team around 16 hours to build a phishing email campaign, not including the time needed to set up the infrastructure. Her team was able to trick a generative AI model into creating a convincing phishing email campaign in around 5 minutes, using just 5 simple prompts. The prompts helped the team identify the top areas of concern for people working in specific industries, the social engineering and marketing techniques to use in the campaign, and the people and companies that should be impersonated. For comparison, the IBM X-Force Red team used their own skills and creativity to create phishing emails that they believed would resonate more deeply with their targets, and their emails included an air of authenticity that is difficult for generative AI tools to replicate.
The AI-generated and X-Force Red team-generated phishing emails were then A/B tested, which showed the phishing emails created by humans had a higher success rate than those generated by AI – a 14% vs 11% click rate. Human-generated phishing emails were also reported less frequently than those generated by AI – 52% vs 59%. The tests show that humans are still better than AI at generating phishing emails, but the margins are small and AI is constantly improving. For the time being, human-generated phishing emails have the edge, but AI is likely to outperform humans at some point.
It should also be noted that this study compared AI-generated emails with those created by social engineering experts at X-Force. The bigger concern is that cybercriminals with little skill in social engineering can use generative AI to vastly improve the effectiveness of their campaigns. A click rate of 11% and a reporting rate of 59% would be perfectly adequate for many cybercriminals, especially when they can save around two days of work creating the campaign.
Image credit: Who is Danny, AdobeStock