ChatGPT Can Be Tricked into Writing Malware When Told to Act in Developer Mode

Japanese cybersecurity experts warn that ChatGPT can be deceived by a prompt instructing it to operate in developer mode, leading the AI chatbot to generate code for malicious software.

The revelation shows how easily the developers’ safeguards against unethical and criminal exploitation of the tool can be bypassed.

The Group of Seven summit in Hiroshima next month, along with other global forums, is being urged to initiate discussions on regulating AI chatbots, amid increasing worries that they may encourage criminal activity and societal discord.

We recently reported that ChatGPT-powered polymorphic malware can bypass endpoint detection filters and that hackers are using ChatGPT to develop powerful hacking tools.

The Exploitation of ChatGPT Is a Growing Concern

G7 digital ministers intend to advocate for accelerated research into and improved governance of generative AI systems at their forthcoming two-day gathering in Takasaki, Gunma Prefecture.

Separately, Yokosuka, Kanagawa Prefecture, has become the first local government in Japan to trial ChatGPT across all of its offices.

By default, ChatGPT is programmed to reject unethical requests, such as instructions for creating a virus or a bomb.

However, Mitsui Bussan Secure Directions analyst Takashi Yoshikawa stated:

“Such restrictions can be bypassed easily by instructing the chatbot to operate in developer mode,” The Japan Times reported.

When directed to write ransomware, malware that encrypts data and demands a ransom payment in exchange for the decryption key, ChatGPT complied within minutes, and the resulting code successfully infected a test computer.

The potential for malicious use is evident: the chatbot generated a working virus within minutes through a conversation conducted in Japanese. AI developers must therefore prioritize measures to prevent such exploitation.
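As an illustration of one possible application-side safeguard, the minimal sketch below screens user prompts with OpenAI's moderation endpoint before forwarding them to a chat model. This is an assumption-laden example using the OpenAI Python SDK, not a description of OpenAI's own internal safety mechanisms; the model name and refusal message are placeholders.

```python
# Minimal sketch of an application-side safeguard: screen user prompts with
# OpenAI's moderation endpoint before forwarding them to the chat model.
# Illustrative only; it does not reflect OpenAI's internal filters, and the
# model name below is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def prompt_passes_moderation(user_prompt: str) -> bool:
    """Return True if the moderation endpoint does not flag the prompt."""
    result = client.moderations.create(input=user_prompt)
    return not result.results[0].flagged


def answer(user_prompt: str) -> str:
    """Refuse flagged prompts; otherwise forward them to the chat model."""
    if not prompt_passes_moderation(user_prompt):
        return "This request was blocked by the application's content policy."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute as appropriate
        messages=[{"role": "user", "content": user_prompt}],
    )
    return response.choices[0].message.content
```

Such screening is only one layer of defense; prompts crafted to evade filters, as in the developer-mode trick described above, show why it cannot be the sole safeguard.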

Moreover, OpenAI has acknowledged that it is not feasible to anticipate every potential abuse of the tool, but it has committed to building safer AI by drawing on insights gained from real-world deployment.
