OpenAI Confirms That Threat Actors Were Using ChatGPT to Create Malware

  • ChatGPT maker OpenAI has confirmed that threat actors were using its platform to create malware and launch attacks.
  • In a threat intelligence report, it revealed that 20 malicious campaigns have been disrupted since the beginning of 2024.
  • Threat actors are using the platform to create fake content and spread hate speech online.
  • All of the associated accounts have now been banned.

OpenAI has recently confirmed that threat actors have been using ChatGPT to develop and debug malware. The company released a new threat intelligence report, which revealed that it has disrupted 20 operations and deceptive networks since the beginning of 2024.

SweetSpecter

The first threat group mentioned by OpenAI is “SweetSpecter”, a Chinese cyber-espionage group first documented by Cisco Talos analysts in November 2023. It mostly targets governments and government institutions in Asia, and it also tried to attack OpenAI itself through phishing.

One of its members sent an email that appeared to be a harmless support request but actually carried a malicious attachment. If the victim opened the attachment, it would install the SugarGh0st RAT on their system.

However, the email raised suspicion because it was sent to the personal email address of an OpenAI employee, whereas genuine support requests normally go to the company’s official support address.

[Image: Phishing email]

Later, when the company investigated, it found multiple ChatGPT accounts linked to SweetSpecter that were being used to write code and analyze vulnerabilities.

CyberAv3ngers

The second group on the list is “CyberAv3ngers”, which is affiliated with Iran’s Islamic Revolutionary Guard Corps (IRGC). The group is known for targeting critical infrastructure, where attacks have a far-reaching impact.

During the investigation, OpenAI found that the group was using ChatGPT to create custom Python scripts, look up default credentials for Programmable Logic Controllers (PLCs), and obfuscate code that would likely be used in its attacks.

It was also using the tool to research new attack techniques, such as stealing passwords on macOS devices and exploiting specific vulnerabilities.

Another Iranian group, “Storm-0817”, was also on the list. It was using ChatGPT to debug its malware and develop an Instagram scraper. Its most ambitious project was custom Android malware that can steal contacts, browsing history, location details, and call logs.

Using ChatGPT to Influence Public Opinion 

Apart from launching technical attacks, threat actors are also using ChatGPT to create content for social media sites and websites.

  • For instance, in July, some groups from Rwanda were using ChatGPT to create election-related content and post it on X.
  • Similarly, in August, OpenAI found that an Iranian group was using its platform to create long-form website content as well as short posts for social media.

The aim of every post was the same: to sway election results in the actors’ favor and weaken people’s trust in the authorities.

The only silver lining is that none of these posts gained any meaningful engagement. OpenAI has also banned all of the accounts linked to these threat actors.

However, the real danger is far from over. These actors can still open new accounts or switch to a different AI platform with weaker safeguards. Disrupting campaigns after the damage is done is of little use; what we need is a way to identify and stop these actors as early as possible.
