Tuesday, April 29, 2025
HomeCyber AIPromptmap - Tool to Test Prompt Injection Attacks on ChatGPT Instances

Promptmap – Tool to Test Prompt Injection Attacks on ChatGPT Instances

Published on

SIEM as a Service

Follow Us on Google News

Prompt injection refers to a technique where users input specific prompts or instructions to influence the responses generated by a language model like ChatGPT.

However, threat actors mainly use this technique to mod the ChatGPT instances for several malicious purposes. It has several negative impacts like:-

  • Misinformation
  • Content bias
  • Offensive content
  • Manipulation

An independent security researcher, Utku Sen, recently developed and launched a new tool dubbed “promptmap” that will enable users to test the prompt injection attacks on ChatGPT instances.

- Advertisement - Google News

Promptmap

On ChatGPT instances, the “promptmap” automatically tests the prompt injections by understanding the context and purpose of your rules configured on ChatGPT.

It uses this understanding to create custom attack prompts for the target, running them alongside your system prompts. While this tool checks for prompt injection success by analyzing the ChatGPT instance’s response.

Work Mechanism Structure (Source – GitHub)

Attack types

Here below, we have mentioned all the current attack types along with their details:-

  • Basic Injection: These attacks are straightforward, as they are sent without prompt enhancements, aiming for unrelated answers or actions.
  • Translation Injection: These attacks work by giving English prompts to ChatGPT without language restrictions to gauge if it responds in another language.
  • Math Injection: Getting ChatGPT to solve a math equation indicates its capability for complex tasks. However, attacks like math injection prompts can be customized for specific targets.
  • Context-Switch: Context-switching involves asking unrelated questions to measure the willingness of ChatGPT to answer sensitive queries that are mainly tailored to specific targets.
  • External Browsing: External browsing prompts allow the ChatGPT to browse specific URLs, and they are evolving based on the target’s needs.
  • External Prompt Injection: The External Prompt Injection asks ChatGPT if it’s possible for it to access specific URLs for additional prompts.

Installation

Here below we have mentioned the installation procedure:-

  • Clone the repository:

git clone https://github.com/utkusen/promptmap.git

  • Go inside the folder.

cd promptmap

  • Install required libraries

pip3 install -r requirements.txt

  • Open promptmap.py file and add your OpenAI API key into the following line: openai.api_key = “YOUR KEY HERE”

You can also change model names that are defined target_model and attack_model variables.

Moreover, with the help of the “python3 promptmap.py command,” the promptmap” can be run, and it defaults to 5 attack prompts per category, which is adjustable with the ‘-n’ parameter.

Keep informed about the latest Cyber Security News by following us on Google NewsLinkedinTwitter, and Facebook.

Gurubaran
Gurubaran
Gurubaran is a co-founder of Cyber Security News and GBHackers On Security. He has 10+ years of experience as a Security Consultant, Editor, and Analyst in cybersecurity, technology, and communications.

Latest articles

Researchers Uncover SuperShell Payloads and Various Tools in Hacker’s Open Directories

Cybersecurity researchers at Hunt have uncovered a server hosting advanced malicious tools, including SuperShell...

Cyber Espionage Campaign Targets Uyghur Exiles with Trojanized Language Software

A sophisticated cyberattack targeted senior members of the World Uyghur Congress (WUC), the largest...

Konni APT Deploys Multi-Stage Malware in Targeted Organizational Attacks

A sophisticated multi-stage malware campaign, potentially orchestrated by the North Korean Konni Advanced Persistent...

Outlaw Cybergang Launches Global Attacks on Linux Environments with New Malware

The Outlaw cybergang, also known as “Dota,” has intensified its global assault on Linux...

Resilience at Scale

Why Application Security is Non-Negotiable

The resilience of your digital infrastructure directly impacts your ability to scale. And yet, application security remains a critical weak link for most organizations.

Application Security is no longer just a defensive play—it’s the cornerstone of cyber resilience and sustainable growth. In this webinar, Karthik Krishnamoorthy (CTO of Indusface) and Phani Deepak Akella (VP of Marketing – Indusface), will share how AI-powered application security can help organizations build resilience by

Discussion points


Protecting at internet scale using AI and behavioral-based DDoS & bot mitigation.
Autonomously discovering external assets and remediating vulnerabilities within 72 hours, enabling secure, confident scaling.
Ensuring 100% application availability through platforms architected for failure resilience.
Eliminating silos with real-time correlation between attack surface and active threats for rapid, accurate mitigation

More like this

Researchers Uncover SuperShell Payloads and Various Tools in Hacker’s Open Directories

Cybersecurity researchers at Hunt have uncovered a server hosting advanced malicious tools, including SuperShell...

Cyber Espionage Campaign Targets Uyghur Exiles with Trojanized Language Software

A sophisticated cyberattack targeted senior members of the World Uyghur Congress (WUC), the largest...

Konni APT Deploys Multi-Stage Malware in Targeted Organizational Attacks

A sophisticated multi-stage malware campaign, potentially orchestrated by the North Korean Konni Advanced Persistent...