Monday, July 15, 2024
EHA

NCSC Warns of Specific Vulnerabilities in AI Models Like ChatGPT

A large language model (LLM) is a deep learning AI model or system that understands, generates, and predicts text-based content, often associated with generative AI.

In the current technological landscape, we have robust and known models like:-

  • ChatGPT
  • Google Bard 
  • Meta’s LLaMA

Cybersecurity analysts at the National Cyber Security Centre (NCSC) have recently unveiled and warned of specific vulnerabilities in AI systems or models like ChatGPT or Google Bard, Meta’s LLaMA.

Vulnerabilities

While LLMs have a role, don’t forget cybersecurity basics for ML projects. Here below, we have mentioned the specific vulnerabilities in AI models about which the researchers at NCSC warned:-

  • Prompt injection attacks: A major issue with current LLMs is ‘prompt injection,’ where users manipulate inputs to make the model misbehave, risking harm or leaks. Multiple prompt injection cases exist, from playful pranks like Bing’s existential crisis to potentially harmful exploits like accessing an API key through MathGPT. The prompt injection risks have risen since the LLMs feed data to third-party apps. 
  • Data poisoning attacks: LLMs, like all ML models, rely on their training data, which often contains offensive or inaccurate content from the vast open internet. The NCSC’s security principles highlight ‘data poisoning,’ and research by Nicholas Carlini shows poisoning large models with minimal data access is possible.

Prevention mechanisms

Detecting and countering prompt injection and data poisoning is tough. System-wide security design, like layering rules over the ML model, can mitigate risks and prevent destructive failures.

Extend cybersecurity basics to address ML-specific risks, including:-

Cyber secure principles

Beyond LLMs, recent months revealed ML system vulnerabilities due to insufficient cybersecurity principles, such as:-

  • Think before you arbitrarily execute code you’ve downloaded from the internet (models)
  • Keep up to date with published vulnerabilities and upgrade software regularly.
  • Understand software package dependencies.
  • Think before you arbitrarily execute code you’ve downloaded from the internet (packages)

However, in a rapidly evolving AI landscape, maintaining strong cybersecurity practices is essentially important, regardless of ML presence.

Keep informed about the latest Cyber cybersecurity news by following us on Google NewsLinkedinTwitter, and Facebook.

Website

Latest articles

Critical Cellopoint Secure Email Gateway Flaw Let Attackers Execute Arbitrary Code

A critical vulnerability has been discovered in the Cellopoint Secure Email Gateway, identified as...

Singapore Banks to Phase out OTPs for Bank Account Logins Within 3 Months

The Monetary Authority of Singapore (MAS) and The Association of Banks in Singapore (ABS)...

GuardZoo Android Malware Attacking military personnel via WhatsApp To Steal Sensitive Data

A Houthi-aligned group has been deploying Android surveillanceware called GuardZoo since October 2019 to...

ViperSoftX Weaponizing AutoIt & CLR For Stealthy PowerShell Execution

ViperSoftX is an advanced malware that has become more complicated since its recognition in...

Malicious NuGet Campaign Tricking Developers To Inject Malicious Code

Hackers often target NuGet as it's a popular package manager for .NET, which developers...

Akira Ransomware Attacking Airline Industry With Legitimate Tools

Airlines often become the target of hackers as they contain sensitive personal and financial...

DarkGate Malware Exploiting Excel Files And SMB File Shares

DarkGate, a Malware-as-a-Service (MaaS) platform, experienced a surge in activity since September 2023, employing...
Tushar Subhra Dutta
Tushar Subhra Dutta
Tushar is a Cyber security content editor with a passion for creating captivating and informative content. With years of experience under his belt in Cyber Security, he is covering Cyber Security News, technology and other news.

Free Webinar

Low Rate DDoS Attack

9 of 10 sites on the AppTrana network have faced a DDoS attack in the last 30 days.
Some DDoS attacks could readily be blocked by rate-limiting, IP reputation checks and other basic mitigation methods.
More than 50% of the DDoS attacks are employing botnets to send slow DDoS attacks where millions of IPs are being employed to send one or two requests per minute..
Key takeaways include:

  • The mechanics of a low-DDoS attack
  • Fundamentals of behavioural AI and rate-limiting
  • Surgical mitigation actions to minimize false positives
  • Role of managed services in DDoS monitoring

Related Articles