ChatGPT has been met with skepticism and optimism in equal measure in the cybersecurity realm. IT professionals leverage the chatbot to write firewall rules, detect threats, develop custom code, test software for vulnerabilities, and more.
This has another implication, too – it has made life much easier for novice cybercriminals with limited resources and little to no technical knowledge. Hackers can exploit its capabilities to write malicious code, test applications for exploitable vulnerabilities, and craft malicious content. They can run massive phishing campaigns or launch ransomware attacks with relative ease.
In this article, we delve deeper into ChatGPT and cybersecurity.
ChatGPT is an AI-powered chatbot based on a complex machine-learning model developed by OpenAI, a private AI research company specializing in generative AI. Released in November 2022, ChatGPT uses Natural Language Processing (NLP) to offer meaningful, human-like responses to user requests and engage in conversations with users.
It was trained using Reinforcement Learning from Human Feedback (RLHF) on top of a language model pre-trained on a large corpus of text scraped from the internet. Based on this training, the chatbot generates responses to user questions, writes summaries, and more, and it keeps learning to improve its responses over time.
ChatGPT is a potent tool that can transform businesses through speed, agility, scale, and accuracy. However, it is also a powerful tool for cybercriminals, with or without deep knowledge and resources. Here are the potential threats and negative security consequences of ChatGPT.
One of the biggest security implications of ChatGPT is that threat actors widely use it to draft legitimate-sounding phishing messages. We are already seeing several instances of cybercriminals using the tool to create social engineering and phishing hooks, and security researchers and companies have been testing its capabilities in this area.
Jonathan Todd, a security threat researcher, leveraged the tool to write code that could analyze Reddit users’ profiles and comments to develop rapid attack profiles. Based on these attack profiles, he instructed the chatbot to craft personalized phishing hooks for emails and text messages. Through this social engineering test, he found that ChatGPT could easily enable threat actors to automate and scale high-fidelity, hyper-personalized phishing campaigns.
In another instance, security researchers were able to generate highly convincing World Cup-themed phishing lures in perfect English. This capability is especially useful for threat actors who aren’t native or fluent English speakers.
ChatGPT can also be leveraged to hold more realistic conversations with targeted individuals in business email compromise (BEC) and social media phishing attacks (through Facebook Messenger, WhatsApp, and so on).
While ChatGPT has been programmed not to directly write malicious code or engage in other malicious activity, threat actors are finding and exploiting loopholes. As a result, they can use the chatbot to write malicious code for ransomware attacks, malware attacks, and more.
One security researcher instructed the chatbot to write code in Swift, the programming language used for app development on Apple devices. The code could find all MS Office files on a MacBook and send them over an encrypted connection to a web server.
He also instructed the chatbot to generate code to encrypt all those documents and then send out the private key needed for decryption. None of this triggered any warning messages or policy violations. In this way, he developed ransomware code that could target macOS devices without ever directly asking ChatGPT for ransomware.
In another instance, a security researcher instructed the chatbot to find a buffer overflow vulnerability and write code to exploit it.
Security researchers have also found that the chatbot can be leveraged to develop basic information stealers and Trojans. So, even novice cybercriminals with limited technical skills can create malicious code.
In another case, researchers found that ChatGPT can be used alongside other malicious tools to craft phishing communications that carry a malicious payload. When users click on or download the payload, their devices are infected.
While ChatGPT can augment existing cybersecurity technology in scanning and testing applications for vulnerabilities, cybercriminals can also use it to snoop around for exploitable gaps and vulnerabilities, making it a double-edged sword.
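To make the dual-use point concrete, the kind of check a defender automates is exactly what an attacker automates too. The toy sketch below is a plain pattern scan, not ChatGPT output; the function name and the list of flagged calls are illustrative assumptions, not a real scanner.

```python
import re

# Toy illustration of automated vulnerability scanning: flag a few
# classically risky C library calls in source text. The pattern list
# is an illustrative assumption, not an exhaustive or real rule set.
RISKY_CALLS = {
    r"\bgets\s*\(": "gets() has no bounds check (buffer overflow risk)",
    r"\bstrcpy\s*\(": "strcpy() can overflow the destination buffer",
    r"\bsystem\s*\(": "system() may allow command injection",
}

def scan_for_risky_calls(source: str) -> list[str]:
    """Return a human-readable finding for each risky call matched."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, reason in RISKY_CALLS.items():
            if re.search(pattern, line):
                findings.append(f"line {lineno}: {reason}")
    return findings

sample = "char buf[8];\ngets(buf);\nstrcpy(buf, input);"
findings = scan_for_risky_calls(sample)
```

A defender runs checks like this to harden code before release; an attacker runs the same checks against a target to find an entry point, which is the double-edged sword the paragraph above describes.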
ChatGPT does lower the barriers for threat actors who can use it with or without any programming and technical knowledge for various malicious purposes. It is also free and can be used anonymously by anyone globally.
But ChatGPT Can Revolutionize Cybersecurity for Good Too…
Can ChatGPT revolutionize cybersecurity for good as well as bad? Yes, it can and, in all probability, will. This AI-powered, self-learning technology can augment an organization’s threat detection capability, boost the speed and agility of incident response, and significantly improve the efficiency of cybersecurity defenses and security decision-making.
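As one hedged illustration of the defensive side, an analyst workflow might hand a suspicious email to ChatGPT for triage through the OpenAI API. The sketch below only constructs the request payload; the model name, system prompt, and `build_phish_triage_request` helper are illustrative assumptions, and actually sending the request requires the `openai` package and an API key.

```python
# Sketch: building a ChatGPT request that asks the model to triage a
# suspicious email. The model name and prompt text are assumptions
# for illustration; nothing is sent over the network here.

def build_phish_triage_request(email_text: str,
                               model: str = "gpt-4o-mini") -> dict:
    """Construct an OpenAI Chat Completions payload asking the model
    to flag likely phishing indicators in an email."""
    system_prompt = (
        "You are a security analyst. Identify phishing indicators "
        "(urgency, spoofed senders, suspicious links) in the email "
        "and rate it LOW, MEDIUM, or HIGH risk."
    )
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": email_text},
        ],
    }

request = build_phish_triage_request(
    "Your account is locked! Verify now at hxxp://example-bank.top/login"
)
# To actually send it (not done in this sketch):
#   from openai import OpenAI
#   reply = OpenAI().chat.completions.create(**request)
```

Wrapping the prompt in a helper like this lets a security team version-control and review the instructions the model receives, rather than relying on ad hoc prompting by individual analysts.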
Despite these useful security applications, ChatGPT brings several drawbacks, ethical challenges, biases, and, most importantly, a range of cybersecurity risks and AI-enabled threats. Attackers are leveraging it to improve the lethality and sophistication of their attacks, bypassing its security controls to write malicious code.
Organizations need to be aware of these security challenges and their implications for business continuity. They need to invest in fully managed security solutions like AppTrana that can detect malicious bot activity and stop known and emerging threats with greater accuracy and effectiveness.