Infostealer malware has stolen 101,000 ChatGPT accounts


More than 101,000 ChatGPT user accounts have been stolen by infostealer malware over the past year, according to data from dark web marketplaces.

Infostealer malware 

Infostealer malware has wreaked havoc on the security of ChatGPT, with over 101,000 user accounts falling victim to credential theft in the past year alone.

According to data from dark web marketplaces, cyber intelligence firm Group-IB uncovered more than a hundred thousand info-stealer logs on underground websites containing compromised ChatGPT accounts.

Information stealers are a category of malware that targets account data stored in various applications. These malicious programs extract valuable information from email clients, web browsers, instant messengers, gaming services, cryptocurrency wallets, and more.

In the case of ChatGPT, these malware variants extract stored credentials from web browsers’ SQLite databases and abuse Windows’ DPAPI functions (CryptProtectData/CryptUnprotectData) to decrypt the stored secrets.

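The theft technique above can be sketched in a few lines. This is a hypothetical illustration, not any specific stealer’s code: it builds a toy, in-memory copy of a browser’s “Login Data” SQLite database (the table and column names mirror Chrome’s simplified `logins` schema) and queries it the way a stealer would. The DPAPI decryption step is described in comments only, since it applies on the victim’s Windows host.

```python
import sqlite3

# Toy stand-in for a browser's "Login Data" SQLite file; a real stealer
# would copy and open the file from the victim's browser profile instead.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE logins (origin_url TEXT, username_value TEXT, password_value BLOB)"
)
con.execute(
    "INSERT INTO logins VALUES "
    "('https://chat.openai.com', 'user@example.com', ?)",
    (b"<dpapi-encrypted-bytes>",),  # placeholder for the encrypted blob
)

# The query a stealer runs against the stolen database.
rows = con.execute(
    "SELECT origin_url, username_value, password_value FROM logins"
).fetchall()

for url, user, enc_pw in rows:
    # On Windows, the password_value blob is encrypted with DPAPI
    # (CryptProtectData); malware calls CryptUnprotectData on the infected
    # host, under the victim's own user context, to recover the plaintext.
    print(url, user, f"{len(enc_pw)} encrypted bytes")
```

Because DPAPI ties the encryption to the victim’s Windows account, the decryption only succeeds on the compromised machine itself, which is why stealers decrypt locally and exfiltrate the plaintext in their logs.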

The Significance of ChatGPT Accounts

ChatGPT accounts hold considerable value because of the data associated with them. Beyond the email accounts, credit card information, and cryptocurrency wallet details often found alongside them in stealer logs, these compromised accounts grant unauthorized access to AI-powered tools that have gained prominence among users and businesses.

Group-IB data shows that the number of stolen ChatGPT logs has steadily increased over time, with nearly 80% coming from the Raccoon stealer, followed by 13% from Vidar and 7% from RedLine.

If you enter sensitive data into ChatGPT, consider turning off the chat storage feature from the platform’s settings menu or manually deleting those conversations once you’re done using the tool.

Conclusion

Users are advised to take precautions when entering sensitive information into ChatGPT to guard against infostealers. Two recommended steps are disabling the chat-saving feature in the platform’s settings menu and manually deleting conversations after use. Note, however, that some information stealers also capture screenshots or log keystrokes, so these measures reduce but do not eliminate the risk.


About the Author:

FirstHackersNews- Identifies Security
