Threat actors can exploit ChatGPT’s ecosystem for several illicit purposes, such as crafting prompts to generate malicious code, phishing lures, and disinformation content.
They can even harness ChatGPT’s capabilities to craft and launch sophisticated, stealthy cyberattacks.
Beyond this, they can exploit vulnerabilities in ChatGPT extensions or plugins to gain unauthorized access to user data or external systems.
Recently, cybersecurity analysts at Salt Labs identified generative AI as a new attack vector.
Threat actors could exploit vulnerabilities discovered in the ChatGPT ecosystem to take over user accounts on third-party services, including GitHub, via 0-click attacks.
Critical ChatGPT Plugins Flaw
The Salt Labs researchers started with a familiar target, ChatGPT, on the assumption that their findings would have wider implications for AI systems.
Using such plugins introduces an unintentional risk of sensitive data exposure, which could in turn grant access to users’ accounts on services such as Google Drive and GitHub.
The research exposed three vulnerabilities:
- Installation of malicious plugins on ChatGPT users’ accounts
- Critical 0-click account takeovers across many plugins
- OAuth redirection manipulation
The researchers focused on recurring vulnerabilities stemming from a lack of security awareness among developers.
Cybersecurity analysts urge OpenAI to prioritize security guidelines for plugin developers.
Researchers exposed an OAuth vulnerability, allowing attackers to manipulate victims into installing malicious ChatGPT plugins.
The attack mirrors traditional OAuth redirect manipulation, where attackers substitute their own credentials during the authentication flow.
When a user approves a new ChatGPT plugin, the approval code gets returned to OpenAI via a redirect URL.
An attacker could substitute this code with their own, tricking ChatGPT into installing the plugin on the victim’s behalf and granting access to messages and data.
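Conceptually, this works like a classic CSRF attack on the OAuth authorization-code flow. The sketch below illustrates the idea; the callback URL and parameter names are illustrative assumptions, not OpenAI’s actual endpoints:

```python
# Conceptual sketch of the code-substitution attack. The callback URL
# and parameter names are illustrative assumptions, not real endpoints.

from urllib.parse import urlencode

# 1. The attacker begins a plugin installation themselves and captures
#    the authorization code issued for their *own* account.
attacker_code = "AUTH_CODE_ISSUED_TO_ATTACKER"

# 2. They place that code into the redirect URL ChatGPT expects after
#    plugin approval, then deliver the link to the victim.
malicious_link = (
    "https://chat.openai.com/aip/example-plugin/oauth/callback?"
    + urlencode({"code": attacker_code})
)

# 3. A logged-in victim who opens the link completes the flow with the
#    attacker's code, silently installing the attacker-controlled plugin
#    on the victim's ChatGPT account.
print(malicious_link)
```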
This OAuth vulnerability recurs because many developers overlook it, believing it to be insignificant. Experts emphasize the severity of this flaw within ChatGPT’s plugin ecosystem.
Any OAuth implementation that needs to guard against this attack must enforce a state parameter, as sketched below.
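Here is a minimal sketch of state enforcement in a generic authorization-code flow; the endpoint, session storage, and function names are assumptions for illustration, not tied to any specific SDK:

```python
# Minimal sketch of OAuth "state" enforcement in a generic
# authorization-code flow; names and URLs are illustrative.

import secrets

session = {}  # stand-in for per-user server-side session storage

def build_authorize_url() -> str:
    """Attach an unguessable, per-session state value to the auth request."""
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    return (
        "https://auth.example.com/authorize"
        f"?client_id=my-plugin&response_type=code&state={state}"
    )

def handle_callback(params: dict) -> str:
    """Reject any callback whose state doesn't match what we issued."""
    expected = session.pop("oauth_state", None)
    received = params.get("state", "")
    if expected is None or not secrets.compare_digest(received, expected):
        raise PermissionError("OAuth state mismatch: possible code substitution")
    return params["code"]  # safe to exchange for a token now
```

Because the state value is unguessable and bound to the victim’s own session, a callback link crafted by an attacker fails the comparison, and the substituted authorization code is never exchanged.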
Researchers exposed an account takeover vulnerability across numerous ChatGPT plugins built with PluginLab.AI, including AskTheCode.
When users install these plugins and grant access to services like GitHub, the plugins create authenticated accounts storing the user’s credentials.
Attackers could exploit an authentication bypass to obtain the victim’s “member ID” from PluginLab and then issue unauthorized requests using this ID to generate valid authorization codes.
With these codes, attackers could hijack plugin sessions within ChatGPT and gain full access to connected private data, such as GitHub repositories.
The root cause was PluginLab’s failure to validate requests properly during the authentication flow.
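The fix is conceptually straightforward: the server must derive the member identity from the authenticated session rather than trusting an ID supplied in the request. PluginLab’s actual internals are not public, so the following is a hedged sketch with assumed names:

```python
# Sketch of the missing server-side check at a code-issuing endpoint.
# PluginLab's real implementation is not public; names are assumptions.

import secrets

def issue_auth_code(requested_member_id: str,
                    authenticated_member_id: str | None) -> str:
    # Vulnerable pattern: trusting a member ID taken from the request
    # body lets anyone mint authorization codes for arbitrary victims.
    # Correct pattern: require authentication and bind the code to the
    # caller's own identity.
    if authenticated_member_id is None:
        raise PermissionError("Unauthenticated request")
    if requested_member_id != authenticated_member_id:
        raise PermissionError("Member ID does not match authenticated caller")
    # Placeholder for real signed, short-lived code generation.
    return f"{authenticated_member_id}:{secrets.token_urlsafe(16)}"
```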
Moreover, cybersecurity analysts indicate that GPTs have not yet fully resolved this issue.