Tuesday, April 29, 2025
HomeCyber Security NewsNCSC Warns of Specific Vulnerabilities in AI Models Like ChatGPT

NCSC Warns of Specific Vulnerabilities in AI Models Like ChatGPT

Published on

SIEM as a Service

Follow Us on Google News

A large language model (LLM) is a deep learning AI model or system that understands, generates, and predicts text-based content, often associated with generative AI.

In the current technological landscape, we have robust and known models like:-

  • ChatGPT
  • Google Bard 
  • Meta’s LLaMA

Cybersecurity analysts at the National Cyber Security Centre (NCSC) have recently unveiled and warned of specific vulnerabilities in AI systems or models like ChatGPT or Google Bard, Meta’s LLaMA.

- Advertisement - Google News

Vulnerabilities

While LLMs have a role, don’t forget cybersecurity basics for ML projects. Here below, we have mentioned the specific vulnerabilities in AI models about which the researchers at NCSC warned:-

  • Prompt injection attacks: A major issue with current LLMs is ‘prompt injection,’ where users manipulate inputs to make the model misbehave, risking harm or leaks. Multiple prompt injection cases exist, from playful pranks like Bing’s existential crisis to potentially harmful exploits like accessing an API key through MathGPT. The prompt injection risks have risen since the LLMs feed data to third-party apps. 
  • Data poisoning attacks: LLMs, like all ML models, rely on their training data, which often contains offensive or inaccurate content from the vast open internet. The NCSC’s security principles highlight ‘data poisoning,’ and research by Nicholas Carlini shows poisoning large models with minimal data access is possible.

Prevention mechanisms

Detecting and countering prompt injection and data poisoning is tough. System-wide security design, like layering rules over the ML model, can mitigate risks and prevent destructive failures.

Extend cybersecurity basics to address ML-specific risks, including:-

Cyber secure principles

Beyond LLMs, recent months revealed ML system vulnerabilities due to insufficient cybersecurity principles, such as:-

  • Think before you arbitrarily execute code you’ve downloaded from the internet (models)
  • Keep up to date with published vulnerabilities and upgrade software regularly.
  • Understand software package dependencies.
  • Think before you arbitrarily execute code you’ve downloaded from the internet (packages)

However, in a rapidly evolving AI landscape, maintaining strong cybersecurity practices is essentially important, regardless of ML presence.

Keep informed about the latest Cyber cybersecurity news by following us on Google News, Linkedin, Twitter, and Facebook.

Tushar Subhra
Tushar Subhra
Tushar is a Cyber security content editor with a passion for creating captivating and informative content. With years of experience under his belt in Cyber Security, he is covering Cyber Security News, technology and other news.

Latest articles

RansomHub Ransomware Deploys Malware to Breach Corporate Networks

The eSentire’s Threat Response Unit (TRU) in early March 2025, a sophisticated cyberattack leveraging...

19 APT Hackers Target Asia-based Company Servers Using Exploited Vulnerabilities and Spear Phishing Email

The NSFOCUS Fuying Laboratory’s global threat hunting system identified 19 sophisticated Advanced Persistent Threat...

FBI Reports ₹1.38 Lakh Crore Loss in 2024, a 33% Surge from 2023

The FBI’s Internet Crime Complaint Center (IC3) has reported a record-breaking loss of $16.6...

Fog Ransomware Reveals Active Directory Exploitation Tools and Scripts

Cybersecurity researchers from The DFIR Report’s Threat Intel Group uncovered an open directory hosted...

Resilience at Scale

Why Application Security is Non-Negotiable

The resilience of your digital infrastructure directly impacts your ability to scale. And yet, application security remains a critical weak link for most organizations.

Application Security is no longer just a defensive play—it’s the cornerstone of cyber resilience and sustainable growth. In this webinar, Karthik Krishnamoorthy (CTO of Indusface) and Phani Deepak Akella (VP of Marketing – Indusface), will share how AI-powered application security can help organizations build resilience by

Discussion points


Protecting at internet scale using AI and behavioral-based DDoS & bot mitigation.
Autonomously discovering external assets and remediating vulnerabilities within 72 hours, enabling secure, confident scaling.
Ensuring 100% application availability through platforms architected for failure resilience.
Eliminating silos with real-time correlation between attack surface and active threats for rapid, accurate mitigation

More like this

RansomHub Ransomware Deploys Malware to Breach Corporate Networks

The eSentire’s Threat Response Unit (TRU) in early March 2025, a sophisticated cyberattack leveraging...

19 APT Hackers Target Asia-based Company Servers Using Exploited Vulnerabilities and Spear Phishing Email

The NSFOCUS Fuying Laboratory’s global threat hunting system identified 19 sophisticated Advanced Persistent Threat...

FBI Reports ₹1.38 Lakh Crore Loss in 2024, a 33% Surge from 2023

The FBI’s Internet Crime Complaint Center (IC3) has reported a record-breaking loss of $16.6...