In the wake of WormGPT's success, threat actors have introduced another AI-powered cybercrime tool called FraudGPT. The bot is being promoted across numerous dark web marketplaces and Telegram channels, and is advertised as capable of designing spear-phishing emails, generating cracking tools, and facilitating carding activities.
Diving into details
The threat actor, who operates under the online alias CanadianKingpin, presents themselves as a verified vendor on several underground dark web marketplaces, including EMPIRE, WHM, TORREZ, WORLD, ALPHABAY, and VERSUS.
They have been advertising the tool since at least July 22, priced at $200 per month, $1,000 for six months, or $1,700 for a year.
According to the author's claims, FraudGPT can write malicious code, develop undetectable malware, and identify leaks and vulnerabilities.
The seller also claims more than 3,000 confirmed sales and reviews of the tool. As of now, the specific Large Language Model (LLM) underlying the system remains undisclosed.
Similar to WormGPT
WormGPT is another AI tool designed to facilitate sophisticated phishing and business email compromise (BEC) attacks.
It automates the creation of highly convincing fake emails, expertly tailored to individual recipients, significantly increasing the likelihood of a successful attack.
It also offers extensive character support, retains chat memory, and can format code.
Why this matters
The existence of WormGPT and FraudGPT is concerning because, unlike mainstream services such as OpenAI's ChatGPT and Google's Bard, these tools lack the safeguards that block requests for deceptive emails and malicious code.
By lowering the barrier to entry, generative AI puts these attacks within reach of a much wider range of cybercriminals.
These tools not only advance the Phishing-as-a-Service (PhaaS) model but also give inexperienced actors a platform for carrying out large-scale, persuasive phishing and BEC attacks.
The bottom line
The rise of AI-powered cybercrime tools demands a proactive approach from organizations to safeguard their data, systems, and customers. By implementing robust security measures and staying vigilant against evolving threats, businesses can strengthen their resilience against AI-driven attacks and limit potential financial and reputational damage.