Threat actors can abuse ChatGPT’s ecosystem for several illicit purposes, such as crafting prompts that generate malicious code, phishing lures, and disinformation content. They can also leverage ChatGPT’s capabilities to craft and launch sophisticated, stealthy cyberattacks. Beyond this, they can exploit vulnerabilities in ChatGPT extensions or plugins to gain unauthorized access to user data or external systems.
Recently, cybersecurity analysts at Salt Labs identified generative AI as a new attack vector. Threat actors could exploit vulnerabilities discovered in the ChatGPT ecosystem to access user accounts, including connected services such as GitHub, through zero-click attacks. The Salt Labs researchers chose ChatGPT as a familiar starting point, expecting their findings to have wider consequences for AI systems.
Using such plugins creates an unintentional risk of exposing sensitive data, since the plugins can gain access to users’ accounts on services such as Google Drive and GitHub.
The research exposed three vulnerabilities, with a focus on recurring flaws stemming from developers’ lack of security awareness.
Cybersecurity analysts urge OpenAI to prioritize security guidelines for plugin developers.
Researchers exposed an OAuth vulnerability, allowing attackers to manipulate victims into installing malicious ChatGPT plugins.
The attack mirrors traditional OAuth redirect manipulation, where attackers substitute their credentials during the authentication flow.
When a user approves a new ChatGPT plugin, the approval code gets returned to OpenAI via a redirect URL.
An attacker could substitute this code with their own, tricking ChatGPT into installing the plugin on the victim’s behalf and granting access to messages and data.
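The substitution works because nothing in the redirect binds the authorization code to the victim’s session. A minimal sketch of the attacker’s side, using an illustrative callback URL and parameter names (not OpenAI’s actual endpoints):

```python
# Hypothetical sketch of the redirect-substitution attack described above.
# The callback URL and parameter names are illustrative, not OpenAI's real API.
from urllib.parse import urlencode, urlparse, parse_qs

OPENAI_CALLBACK = "https://chat.openai.com/aip/example-plugin/oauth/callback"  # assumed

def attacker_link(attacker_code: str) -> str:
    """Build a link that, when opened by the victim, completes the
    plugin-install flow using the ATTACKER's authorization code."""
    return f"{OPENAI_CALLBACK}?{urlencode({'code': attacker_code})}"

link = attacker_link("AUTH-CODE-ISSUED-TO-ATTACKER")
params = parse_qs(urlparse(link).query)
# With no state parameter tying the code to the victim's session,
# the receiving side cannot tell this code was never issued to the victim.
print(params["code"][0])  # the attacker's code, accepted as the victim's
```

If the victim follows such a link, the plugin is installed with the attacker’s credentials, routing the victim’s data through an account the attacker controls.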
This recurrent OAuth vulnerability persists because many developers dismiss it as insignificant, but experts emphasize its severity within ChatGPT’s plugin ecosystem. Any service that uses OAuth and wishes to guard against this attack must enforce a state parameter.
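The state parameter defeats the attack by binding each authorization code to the session that started the flow. A minimal sketch, assuming an in-memory dictionary as a stand-in for a real server-side session store:

```python
# Minimal sketch of OAuth "state" enforcement; names are illustrative.
# A real implementation would persist state in a server-side session store.
import hmac
import secrets

_pending_states = {}  # state value -> session_id that requested it

def begin_auth(session_id: str) -> str:
    """Generate an unguessable state value and record which session owns it."""
    state = secrets.token_urlsafe(32)
    _pending_states[state] = session_id
    return state

def finish_auth(session_id: str, state: str, code: str) -> bool:
    """Accept the authorization code only if the returned state matches the
    one issued to this exact session (and consume it, so it is single-use)."""
    owner = _pending_states.pop(state, None)
    return owner is not None and hmac.compare_digest(owner, session_id)

s = begin_auth("victim-session")
assert finish_auth("victim-session", s, "code-xyz") is True        # legitimate flow
assert finish_auth("victim-session", "forged-state", "x") is False  # attacker's link rejected
```

An attacker-crafted callback carries no valid state for the victim’s session, so the substituted code is refused before any plugin is installed.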
Researchers exposed an account takeover vulnerability across numerous ChatGPT plugins built with PluginLab.AI, including AskTheCode.
When users install these plugins and grant access to services like GitHub, the plugins create authenticated accounts storing the user’s credentials.
Attackers could exploit an authentication bypass to obtain the victim’s “member ID” from PluginLab and then issue unauthorized requests using this ID to generate valid authorization codes.
With these codes, attackers could hijack plugin sessions within ChatGPT and gain full access to connected private data, such as GitHub repositories.
The root cause was PluginLab’s failure to validate requests properly during the authentication flow.
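The missing check can be illustrated with a short sketch: an authorization-code endpoint must verify that the caller actually authenticated as the member it names, rather than trusting a client-supplied member ID. All names here are hypothetical, not PluginLab’s real API:

```python
# Hedged sketch of the validation PluginLab lacked, per the description above.
# Function and field names are illustrative only.
import secrets

_sessions = {}  # session token -> member_id established at login

def login(member_id: str) -> str:
    """Authenticate a member and return a session token (login itself elided)."""
    token = secrets.token_urlsafe(32)
    _sessions[token] = member_id
    return token

def issue_auth_code(session_token: str, member_id: str):
    """Vulnerable pattern: issuing a code for ANY member_id in the request.
    Correct pattern (shown): bind the code to the authenticated session."""
    if _sessions.get(session_token) != member_id:
        return None  # caller never authenticated as this member
    return f"code-for-{member_id}-{secrets.token_hex(8)}"

t = login("alice")
assert issue_auth_code(t, "alice") is not None  # own account: allowed
assert issue_auth_code(t, "victim") is None     # someone else's member ID: refused
```

In the vulnerable flow, the equivalent of the member-ID check above was absent, so knowing a victim’s member ID alone was enough to mint authorization codes for their account.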
Moreover, cybersecurity analysts have indicated that GPTs have not yet fully fixed this issue.