Beware of Malicious Fake ChatGPT Apps

The public release of ChatGPT caused a sensation back in late 2022, and it’s fair to say it has been a game-changer. But scammers gravitate toward platforms with large user bases, and fake ChatGPT services began appearing almost immediately – a trend that continues to this day. So what exactly is the “ChatGPT virus,” and how dangerous are these scams? Let’s look at some of the most prominent examples.

FAKE CHATGPT SITES

The buzz surrounding ChatGPT’s public release drew enormous attention, but not everyone could access the service right away. Users in many countries were eager to try the cutting-edge technology, and scammers seized on that pent-up demand. The result was a wave of malicious fake ChatGPT apps, which have since evolved into more sophisticated and diverse scams.

Let’s start with the typical profile of such a scam. The page usually sits on a suspicious URL containing “ChatGPT” or “OpenAI,” often registered on a cheap top-level domain (TLD) such as .online or .xyz. The site itself tends to be minimalist, with few details and only a handful of clickable buttons, all of which funnel the visitor toward one of two actions: downloading a file or paying a sum of money that will never be refunded.
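
To make that profile concrete, here is a minimal Python sketch of such a URL heuristic. It is an illustration only: the keyword list, TLD list, and allow-list are assumptions for demonstration, not real detection rules, and the first example URL is the chat-gpt-pc[.]online site discussed below.

from urllib.parse import urlparse

# Hypothetical indicators drawn from the scam profile above:
# brand keywords in the hostname combined with cheap TLDs.
SUSPICIOUS_KEYWORDS = ("chatgpt", "chat-gpt", "openai")
CHEAP_TLDS = (".online", ".xyz")
OFFICIAL_HOSTS = {"openai.com", "chat.openai.com", "chatgpt.com"}

def looks_suspicious(url: str) -> bool:
    """Flag URLs that imitate ChatGPT/OpenAI branding on a cheap TLD."""
    host = (urlparse(url).hostname or "").lower()
    if host in OFFICIAL_HOSTS or host.endswith(".openai.com"):
        return False  # known-good hosts
    mentions_brand = any(keyword in host for keyword in SUSPICIOUS_KEYWORDS)
    cheap_tld = host.endswith(CHEAP_TLDS)
    return mentions_brand and cheap_tld

print(looks_suspicious("https://chat-gpt-pc.online/download"))  # True
print(looks_suspicious("https://chat.openai.com/"))             # False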

In some instances, fraudsters distribute mobile malware disguised as a legitimate app from OpenAI. This was particularly lucrative before OpenAI shipped an official mobile app, but such scams persist today. In the best case, the fake app is just a paid shell around GPT-3.5, the same model that is freely available through the official ChatGPT website. Worse cases include apps with no functionality at all, apps that charge users without providing any service, and apps that bundle spyware or infostealer capabilities.

Chat-gpt-pc[.]online

One of the earliest malicious fake ChatGPT sites was detected around early February 2023. The site, designed fairly well, offered a desktop client for the chatbot. For anyone unaware that the genuine ChatGPT was available only through OpenAI’s website, this looked like a legitimate offer. However, those who downloaded and installed the “client” were infected with the RedLine stealer. The site was promoted mostly through Facebook ads and groups, and sometimes via SEO poisoning.

Openai-pc-pro[.]online

Another malicious website was practically a twin of the first entry on our list: it copied the design of the original OpenAI page and likewise offered a “desktop client” for the chatbot. Predictably, the downloaded file contained malware – once again the RedLine stealer. Since both sites were promoted from the same ChatGPT-themed Facebook group, they most likely belong to the same malware-spreading campaign.

Chatgpt-go[.]online

This malicious website mimicked the design of the original OpenAI page, featuring a ChatGPT dialogue box without the usual input prompt. Instead, a button labeled “TRY CHATGPT” triggered a malware download, as did various other interactive elements across the site. Payloads delivered from this site included the Lumma stealer and several clipper malware samples. Malicious Google Ads were the primary method of promotion.

Pay[.]chatgptftw[.]com

This fake ChatGPT page differs from the previous examples: instead of spreading malware, it tries to collect users’ payment information. It mimics a billing page that supposedly charges for access to the technology, allowing fraudsters to harvest banking details along with usernames and email addresses. The scam was promoted through the same channels: Facebook groups and ads.

SuperGPT (Meterpreter inside)

This example involves malware disguised as SuperGPT, an Android app posing as a legitimate AI assistant derived from the original GPT model. With poor app moderation on Google Play, it was inevitable that scammers would exploit the situation. On the surface the app looked identical to a genuine assistant, but it actually carried a Meterpreter payload – the Android build of the Metasploit backdoor, giving attackers remote access to the infected device.

HOW TO DETECT AND AVOID MALICIOUS FAKE CHATGPT APPS?

To detect and avoid malicious fake ChatGPT apps, follow these guidelines:

  1. Verify the Source: Only download ChatGPT apps from official sources, such as the OpenAI website or reputable app stores like the Google Play Store or Apple App Store. If a vendor publishes a checksum for its installer, verify it before running the file (see the sketch after this list).
  2. Check the URL: Be cautious of websites with suspicious URLs, especially those containing misspellings or unusual domain extensions. Official sources typically have well-established domains.
  3. Review App Permissions: Before downloading any app, review the permissions it requests. Malicious apps may ask for unnecessary permissions, such as access to sensitive data or device functions.
  4. Read User Reviews: Check user reviews and ratings for the app. If many users report issues or suspicious behavior, it’s best to avoid downloading it.
  5. Research the Developer: Look up information about the app developer or company behind the app. Reputable developers will have a track record of producing trustworthy apps.
  6. Avoid Third-Party Stores: Avoid downloading apps from third-party app stores or unknown sources, as they may host malicious or counterfeit apps.
  7. Use Security Software: Install reputable antivirus or security software on your device to detect and block malicious apps before they can cause harm.
  8. Stay Informed: Stay updated on the latest cybersecurity threats and scams involving fake apps. Awareness of common tactics used by scammers can help you avoid falling victim to their schemes.
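
To make the first tip concrete, here is a minimal sketch, assuming the vendor publishes an official SHA-256 checksum for its installer. The file name and hash below are placeholders, not real values.

import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hash of a downloaded file, reading it in chunks."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder file name and hash, for illustration only.
DOWNLOADED_FILE = "ChatGPT-installer.exe"
PUBLISHED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

if sha256_of(DOWNLOADED_FILE) == PUBLISHED_SHA256:
    print("Checksum matches the published value.")
else:
    print("Checksum mismatch – do not run this installer.")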
