A new generative AI cybercrime tool called WormGPT has been spotted, enabling adversaries to launch sophisticated phishing and business email compromise (BEC) attacks. The tool automates the creation of highly convincing fake emails personalized to the recipient, increasing the attack's chances of success.
Diving into details
WormGPT is an AI module built upon the GPT-J language model, which was developed in 2021. It possesses several noteworthy features, including extensive character support, retention of chat memory, and the ability to format code.
- In the hands of threat actors, tools such as WormGPT can become potent weapons, particularly as OpenAI's ChatGPT and Google's Bard increasingly implement measures to combat the misuse of Large Language Models (LLMs) for creating deceptive phishing emails and generating harmful code.
- According to a recent report by Check Point, Bard's anti-abuse restrictions in the realm of cybersecurity are considerably weaker than ChatGPT's, making it easier to produce malicious content with Bard.
Generative AI for BEC attacks
- Generative AI can craft emails with flawless grammar, giving them an appearance of authenticity and minimizing the chances of triggering suspicion.
- The adoption of generative AI democratizes the execution of sophisticated BEC attacks: even individuals with limited expertise can use the technology, making it readily accessible to a wider range of cybercriminals.
Latest attacks leveraging ChatGPT
- In May, there was a surge in cyberattacks using websites associated with ChatGPT.
- Since the beginning of 2023, 1 out of every 25 new ChatGPT-related domains has been either malicious or potentially malicious, and the frequency of these attack attempts has steadily increased over the past few months.
- In April, cybercriminals were found exploiting the rising popularity of ChatGPT and Google Bard to distribute malware, with recent attacks delivering the RedLine stealer through fake posts on Facebook.
- The attackers leveraged compromised Facebook business accounts to promote these fake posts, capitalizing on the buzz surrounding AI language models to trick users into downloading malicious files.
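As an illustration of how defenders might screen for the kind of lookalike ChatGPT-themed domains described above, here is a minimal sketch using only Python's standard library. The brand list, similarity threshold, and function name are illustrative assumptions for this example, not details from the report.

```python
# Hypothetical heuristic: flag domains whose first label contains, or closely
# resembles, a well-known AI brand name (a common typosquatting pattern).
from difflib import SequenceMatcher

# Illustrative brand list; a real deployment would maintain its own watchlist.
BRANDS = ["chatgpt", "openai", "bard"]

def is_suspicious(domain: str, threshold: float = 0.75) -> bool:
    """Return True if the domain's first label embeds or closely
    resembles a watched brand name without matching it exactly."""
    label = domain.lower().split(".")[0]
    for brand in BRANDS:
        if label == brand:
            continue  # exact brand label is not flagged by this check
        # Substring match catches e.g. "chatgpt-app"; the similarity
        # ratio catches near-misses such as single-character swaps.
        if brand in label:
            return True
        if SequenceMatcher(None, label, brand).ratio() >= threshold:
            return True
    return False
```

For example, `is_suspicious("chatgpt-app.com")` and `is_suspicious("chatqpt.com")` both return `True`, while an unrelated domain like `example.com` is not flagged. A production system would combine such a heuristic with registration age, reputation feeds, and certificate data rather than rely on string similarity alone.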
The bottom line
As AI continues to advance, it introduces new attack vectors, making strong preventive measures crucial. Companies should develop updated training programs that address AI-enhanced BEC attacks and enforce stringent email verification processes to guard against AI-driven phishing and BEC attempts.
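One concrete form such email verification can take is checking sender-authentication results before a message reaches a user. The sketch below, a simplified assumption rather than a complete policy, inspects the Authentication-Results header (RFC 8601) for passing SPF and DMARC verdicts using Python's standard library.

```python
# Hypothetical inbound-mail check: accept a message only if its
# Authentication-Results header reports spf=pass and dmarc=pass.
# Real gateways evaluate structured verdicts per RFC 8601; this
# string check is a deliberate simplification for illustration.
from email import message_from_string

def passes_auth(raw_message: str) -> bool:
    """Return True only if the message carries passing SPF and DMARC results."""
    msg = message_from_string(raw_message)
    results = msg.get("Authentication-Results", "").lower()
    return "spf=pass" in results and "dmarc=pass" in results

# Illustrative message with passing authentication results.
sample = (
    "Authentication-Results: mx.example.com; spf=pass; dmarc=pass\r\n"
    "From: ceo@example.com\r\n"
    "Subject: Invoice\r\n"
    "\r\n"
    "Please wire the funds."
)
```

Here `passes_auth(sample)` returns `True`, while a message lacking the header would be rejected. Authentication checks like this blunt domain spoofing, but since BEC emails can also arrive from compromised legitimate accounts, they complement rather than replace user training and out-of-band payment verification.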