Japanese cybersecurity experts warn that ChatGPT can be deceived by users who enter a prompt instructing it to mimic a "developer mode," leading the AI chatbot to generate code for malicious software.
This revelation exposes how easily the safeguards developers put in place to deter unethical and criminal exploitation of the tool can be bypassed.
The Group of Seven summit in Hiroshima next month, along with other global forums, is being urged to open discussions on regulating AI chatbots amid growing concerns that they may facilitate criminal activity and societal discord.
The Exploitation of ChatGPT is a Growing Concern
G7 digital ministers intend to advocate for quick research and improved governance of generative AI systems at their forthcoming two-day gathering in Takasaki, Gunma Prefecture.
Separately, Yokosuka, Kanagawa Prefecture, has become the first local government in Japan to trial ChatGPT across all of its offices.
ChatGPT is programmed to reject unethical requests, such as instructions for creating a virus or a bomb.
However, Mitsui Bussan Secure Directions analyst Takashi Yoshikawa told The Japan Times: "Such restrictions can be bypassed easily, and could be done by instructing the chatbot to operate in developer mode."
When directed to write ransomware, malware that encrypts data and demands a ransom payment in exchange for the decryption key needed to restore access, ChatGPT complied within minutes, and the resulting code successfully infected a test computer.
The potential for malicious use is evident: the chatbot can generate a virus in minutes through a conversation conducted in Japanese. AI developers must therefore prioritize measures to prevent such exploitation.
OpenAI has acknowledged that it cannot anticipate every potential abuse of the tool, but says it is committed to building safer AI by drawing on insights gained from real-world deployment.