“Blackhat AI Module ‘WormGPT’ Attracts 5,000 Subscribers in a Few Days”


Artificial Intelligence (AI) has introduced revolutionary advances, including generative AI, which shows great potential for creative use. However, the emergence of tools like WormGPT has raised concerns about how the same technology can be abused.

WormGPT, a powerful generative AI tool, allows attackers to create custom hacking tools, posing serious cybersecurity challenges.

Shortly after its launch, the tool’s Telegram channel gained 5,000 subscribers, many of them potential threat actors who might use it in real-life attacks.

What is WormGPT? 

WormGPT is a blackhat alternative to GPT models, openly designed for malicious purposes.

This AI module utilizes the GPT-J large language model (LLM) from 2021, boasting a wide range of features such as unlimited character support, chat memory retention, and code formatting capabilities.

Supposedly, WormGPT was trained on a variety of data sources, with a focus on malware-related data. However, the specific datasets used for training are kept confidential by its author.

The AI tool’s author claims it can generate malware, business email compromise (BEC) phishing emails, and other hacking tools without keeping any user activity logs. Payment is accepted only in cryptocurrency, and the tool is actively promoted on Telegram with examples of its output.

Additionally, the tool is consistently enhanced with new features; one of the most recent updates allows users to import WormGPT-generated code directly into their code editor.

WormGPT has its own website where features and pricing are advertised, but the Telegram channel, created on July 16, 2023, is more popular, gathering over 5,000 subscribers rapidly.

The rise of AI technologies, including OpenAI’s ChatGPT, has enabled hackers to conduct BEC attacks more effectively. Generative AI can craft convincing fake emails personalized for each target, increasing the chances of success.


Protecting against AI-driven Business Email Compromise (BEC) attacks:

  1. Use AI Detection Tools: Utilize advanced AI detection tools to identify patterns and characteristics of AI-generated content in emails, helping to spot suspicious messages.
  2. Implement Email Authentication Protocols: Set up protocols like DMARC, SPF, and DKIM to verify the authenticity of incoming emails, reducing the risk of spoofed messages.
  3. Provide User Training: Educate employees about the risks of BEC attacks and AI-generated content. Train them to be cautious with links and attachments from unknown sources.
  4. Set up Email Filtering: Use email filters to block suspicious emails with known AI-generated patterns, reducing the chances of malicious messages reaching users.
  5. Consider Whitelisting: Implement whitelisting to allow emails only from trusted sources, adding an extra layer of protection against unauthorized senders.
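As a minimal sketch of steps 2 and 4, the snippet below parses the `Authentication-Results` header (RFC 8601) that a receiving mail server adds after running SPF, DKIM, and DMARC checks, and makes a simple quarantine decision. The sample message and the quarantine policy are hypothetical illustrations, not a production filter; real gateways apply far richer rules.

```python
import email
from email import policy

# Hypothetical raw message. The Authentication-Results header is normally
# stamped by the receiving mail server after SPF/DKIM/DMARC evaluation.
RAW_MESSAGE = """\
From: "CEO" <ceo@example.com>
To: finance@example.com
Subject: Urgent wire transfer
Authentication-Results: mx.example.com;
 spf=fail smtp.mailfrom=example.com;
 dkim=none;
 dmarc=fail header.from=example.com
Content-Type: text/plain

Please wire the funds to the account below today.
"""

def auth_results(msg) -> dict:
    """Extract spf/dkim/dmarc verdicts from an Authentication-Results header."""
    header = msg.get("Authentication-Results", "")
    verdicts = {}
    for clause in header.split(";"):
        clause = clause.strip()
        for check in ("spf", "dkim", "dmarc"):
            if clause.startswith(check + "="):
                # "spf=fail smtp.mailfrom=..." -> verdict "fail"
                verdicts[check] = clause.split("=", 1)[1].split()[0]
    return verdicts

def should_quarantine(msg) -> bool:
    """Toy policy: quarantine on DMARC failure, or when neither SPF nor
    DKIM produced a pass (assumed policy, for illustration only)."""
    v = auth_results(msg)
    if v.get("dmarc") == "fail":
        return True
    return v.get("spf") != "pass" and v.get("dkim") != "pass"

msg = email.message_from_string(RAW_MESSAGE, policy=policy.default)
print(auth_results(msg))
print(should_quarantine(msg))
```

Checking these verdicts catches spoofed sender domains regardless of how convincing the AI-generated body text is, which is why header authentication complements content-based AI detection.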


About the Author:

FirstHackersNews - Identifies Security
