AI-Generated Fake GitHub Repositories Steal Login Credentials
A concerning cybersecurity threat has emerged with the discovery of AI-generated fake GitHub repositories designed to distribute malware, including the notorious SmartLoader and Lumma...
AI Becomes a Powerful Weapon for Cybercriminals to Launch Attacks at High Speed
Artificial intelligence (AI) has emerged as a potent tool in the arsenal of cybercriminals, enabling them to execute attacks with unprecedented speed, precision, and...
Researchers Jailbreak 17 Popular LLM Models to Reveal Sensitive Data
In a recent study published by Palo Alto Networks' Threat Research Center, researchers successfully jailbroke 17 popular generative AI (GenAI) web products, exposing vulnerabilities...
PrintSteal Cybercrime Group Mass-Producing Fake Aadhaar & PAN Cards
A large-scale cybercrime operation dubbed "PrintSteal" has been exposed, revealing a complex network involved in the mass production and distribution of fraudulent Indian KYC...
LLMjacking – Hackers Abuse GenAI With AWS NHIs to Hijack Cloud LLMs
In a concerning development, cybercriminals are increasingly targeting cloud-based generative AI (GenAI) services in a new attack vector dubbed "LLMjacking." These attacks exploit non-human...
MITRE Releases OCCULT Framework to Address AI Security Challenges
MITRE has unveiled the Offensive Cyber Capability Unified LLM Testing (OCCULT) framework, a groundbreaking methodology designed to evaluate risks posed by large language models...
Researchers Jailbreak OpenAI o1/o3, DeepSeek-R1, and Gemini 2.0 Flash Models
Researchers from Duke University and Carnegie Mellon University have demonstrated successful jailbreaks of OpenAI’s o1/o3, DeepSeek-R1, and Google’s Gemini 2.0 Flash models through a...
New LLM Vulnerability Exposes AI Models Like ChatGPT to Exploitation
A significant vulnerability has been identified in large language models (LLMs) such as ChatGPT, raising concerns over their susceptibility to adversarial attacks. Researchers have...