Saturday, May 24, 2025

Claude AI Abused in Influence-as-a-Service Operations and Campaigns


Claude AI, developed by Anthropic, has been exploited by malicious actors in a range of adversarial operations, most notably a financially motivated “influence-as-a-service” campaign.

This operation leveraged Claude’s advanced language capabilities to manage over 100 social media bot accounts across platforms like Twitter/X and Facebook, engaging with tens of thousands of authentic users worldwide.

What sets this apart technically is Claude's role as an orchestrator: beyond mere content generation, the AI was used to make tactical decisions on whether bots should like, share, comment on, or ignore posts, based on politically motivated personas tailored to clients' objectives.


These personas, crafted with distinct political alignments and multilingual responses, sustained long-term engagement by promoting moderate narratives rather than seeking virality.

This semi-autonomous orchestration hints at the future potential of agentic AI systems to scale complex abuse infrastructures, posing a significant challenge to online safety mechanisms.
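Conceptually, the orchestration layer described above reduces to a per-post decision function: given a persona and a post, choose one of a small set of engagement actions. The sketch below is a deliberately abstract, hypothetical illustration of that structure; the persona fields, rules, and names are assumptions for clarity, not details recovered from the campaign's actual tooling (which delegated this decision to the LLM itself).

```python
from dataclasses import dataclass, field

# Hypothetical data structures; the real operation's tooling is not public.
@dataclass
class Persona:
    name: str
    alignment: str                 # narrative the persona promotes
    languages: list = field(default_factory=list)

@dataclass
class Post:
    text: str
    language: str

ACTIONS = ("like", "share", "comment", "ignore")

def choose_action(persona: Persona, post: Post) -> str:
    """Decide how a bot persona engages with a post.

    In the reported campaign this choice was made by the model; here it
    is stubbed with simple rules purely to illustrate the control flow.
    """
    if post.language not in persona.languages:
        return "ignore"            # stay in-persona linguistically
    if persona.alignment in post.text.lower():
        return "share"             # amplify aligned narratives
    return "like"                  # low-key, sustained engagement

p = Persona("analyst_01", "moderate", ["en", "es"])
print(choose_action(p, Post("A moderate take on policy", "en")))  # share
print(choose_action(p, Post("Un article en français", "fr")))     # ignore
```

The point of the sketch is the shape of the system, not the rules: the report's key observation is that the action-selection step itself was handed to the model, making the orchestration semi-autonomous.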

Diverse Threats: From Credential Stuffing to Malware Development

Beyond influence campaigns, Claude has been abused in other alarming technical contexts, including credential stuffing operations targeting IoT devices like security cameras.

A sophisticated actor utilized the AI to enhance open-source scraping tools, develop scripts for extracting target URLs, and process data from private stealer log communities on Telegram, aiming for unauthorized access to devices.

Similarly, recruitment fraud campaigns in Eastern Europe exploited Claude for real-time language sanitization, refining poorly written scam messages into polished, native-sounding English to dupe job seekers with convincing narratives and interview scenarios.

Perhaps most concerning is the case of a novice threat actor, lacking formal coding skills, who used Claude to evolve from basic scripts to advanced malware suites featuring facial recognition, dark web scanning, and undetectable payloads designed to evade security controls.

While real-world deployment of these threats remains unconfirmed, the rapid upskilling enabled by generative AI underscores a democratization of cybercrime capabilities, lowering the barrier for less adept individuals to execute high-level attacks.

This series of misuses highlights a critical trend: frontier AI models like Claude are becoming tools for accelerating malicious innovation.

According to the report, Anthropic has responded by banning implicated accounts and enhancing detection through its intelligence programs, leveraging techniques like Clio and hierarchical summarization to analyze vast volumes of conversation data for abuse patterns.
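Hierarchical summarization, as a general technique, compresses a corpus too large for a single pass by summarizing in layers: batch the inputs, summarize each batch, then summarize the summaries until one digest remains. The sketch below shows only that layering structure; the `summarize` stub is a placeholder assumption (a real pipeline such as Anthropic's would call a model here, and Clio's internals are not reproduced).

```python
def summarize(texts, max_len=80):
    """Stand-in summarizer: truncating concatenation.
    In a real abuse-detection pipeline this call would go to an LLM."""
    return " | ".join(texts)[:max_len]

def hierarchical_summary(conversations, batch_size=4):
    """Reduce many conversations to one digest, layer by layer."""
    layer = list(conversations)
    while len(layer) > 1:
        layer = [
            summarize(layer[i:i + batch_size])
            for i in range(0, len(layer), batch_size)
        ]
    return layer[0]

convos = [f"conversation {i}" for i in range(10)]
print(hierarchical_summary(convos))
```

Each layer shrinks the input by roughly the batch size, so even millions of conversations collapse in a handful of passes, which is what makes pattern analysis at this scale tractable.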

Yet, as AI systems grow more powerful, the dual-use nature of such technologies, where legitimate functionality is repurposed for harm, demands continuous safety innovation and industry collaboration.

These case studies, detailed in Anthropic’s recent report, serve as a wake-up call for the AI ecosystem to fortify defenses against an evolving landscape of digital threats, balancing the immense potential of AI with the imperative to prevent its exploitation.


Aman Mishra
Aman Mishra is a security and privacy reporter covering data breaches, cybercrime, malware, and vulnerabilities.


