Malicious AI tools up 200%, ChatGPT jailbreaks +52%

Home/Compromised, malicious cyber actors, Malware, Security Advisory, Security Update, Tips/Malicious AI tools up 200%, ChatGPT jailbreaks +52%

Malicious AI tools up 200%, ChatGPT jailbreaks +52%

In 2024, AI-related threats grew as cybercriminals increasingly targeted large language models (LLMs).

KELA’s “State of Cybercrime” report found a 94% rise in discussions on exploiting LLMs like ChatGPT, Copilot, and Gemini.

Cybercriminals are actively sharing and refining jailbreaking techniques on underground forums like HackForums and XSS, with dedicated sections emerging for these discussions.

These techniques aim to bypass LLM safety restrictions, enabling the creation of phishing emails, malware, and other harmful content.

KELA identified word transformation as one of the most effective methods, successfully bypassing 27% of safety tests.

This method replaces sensitive words with synonyms or breaks them into smaller parts to avoid detection.

The report highlights a sharp rise in compromised accounts for popular LLM platforms.

ChatGPT saw a jump from 154,000 breached accounts in 2023 to 3 million in 2024, a nearly 1,850% increase.

Gemini (formerly Bard) also surged from 12,000 to 174,000 compromised accounts, up 1,350%.

These credentials, stolen via infostealer malware, pose a serious risk, allowing further misuse of LLMs and related services.

As LLM adoption grows, KELA predicts new attack surfaces in 2025, with prompt injection being a major threat and agentic AI creating new vulnerabilities.

The report urges organizations to enhance security with secure LLM integrations, deepfake detection, and AI threat awareness.

With cybercrime increasingly overlapping with state-sponsored attacks, proactive threat intelligence and adaptive defenses are essential.

By | 2025-03-28T23:19:09+05:30 March 26th, 2025|Compromised, malicious cyber actors, Malware, Security Advisory, Security Update, Tips|

About the Author:

FirstHackersNews- Identifies Security

Leave A Comment

Subscribe to our newsletter to receive security tips everday!