Honeypots are decoy systems designed to detect and analyze malicious activity. They come in various forms and can be deployed on cloud platforms to provide insight into attacker behavior and strengthen security.
The study proposes to create an interactive honeypot system using a Large Language Model (LLM) to mimic Linux server behavior.
By fine-tuning the LLM with a dataset of attacker-generated commands, the goal is to enhance honeypot effectiveness in detecting and analyzing malicious activities.
The authors combined three datasets of Linux commands: real-world attacker data, commonly used commands, and command explanations. They processed this data by simulating command execution and preprocessing the text, producing a robust dataset for training a language model to mimic a honeypot.
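The exact sources and preprocessing pipeline are not spelled out in the article, so the sketch below is only a hypothetical illustration of the general approach: merging command lists from made-up file names, simulating execution to capture outputs, and emitting instruction/output pairs for fine-tuning.

```python
import json
import subprocess

# Hypothetical input files; the paper's actual dataset names and formats are not given here.
SOURCES = ["attacker_commands.txt", "common_commands.txt", "command_explanations.json"]

def load_commands(paths):
    """Merge and deduplicate raw Linux commands from several source files."""
    commands = []
    for path in paths:
        with open(path, encoding="utf-8") as fh:
            if path.endswith(".json"):
                commands += [entry["command"] for entry in json.load(fh)]
            else:
                commands += [line.strip() for line in fh if line.strip()]
    return sorted(set(commands))

def simulate_execution(command, timeout=5):
    """Run a command and capture its terminal output.
    NOTE: only do this inside an isolated VM or container, never on a production host."""
    try:
        result = subprocess.run(command, shell=True, capture_output=True,
                                text=True, timeout=timeout)
        return (result.stdout + result.stderr).strip()
    except subprocess.TimeoutExpired:
        return ""

def build_dataset(paths, out_path="honeypot_pairs.jsonl"):
    """Write (command, output) pairs as JSONL instruction data for fine-tuning."""
    with open(out_path, "w", encoding="utf-8") as out:
        for cmd in load_commands(paths):
            record = {"instruction": cmd, "output": simulate_execution(cmd)}
            out.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    build_dataset(SOURCES)
```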
Prompt engineering involved iteratively refining the prompts to align with the research objectives and improve how the model used the dataset, leading to a more effective honeypot system.
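The paper's actual prompts are not reproduced in this summary; the following is a minimal, hypothetical sketch of the kind of system prompt and chat wrapper such a setup might use.

```python
# Hypothetical prompt template; the paper's exact wording is not reproduced here.
SYSTEM_PROMPT = (
    "You are a Linux OS terminal. Act and respond exactly as a real terminal would. "
    "Return only the raw command output, with no explanations or extra text."
)

def build_prompt(command: str) -> list[dict]:
    """Wrap an attacker command in the chat format expected by the fine-tuned model."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": command},
    ]

print(build_prompt("uname -a"))
```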
The Llama3 8B model was selected as the honeypot LLM due to its balance of linguistic proficiency and computational efficiency.
Larger models were too slow, while code-centric models were less effective for honeypot simulation.
The researchers fine-tuned a pre-trained language model using LlamaFactory, employing LoRA, QLoRA, NEFTune noise, and Flash Attention 2 to improve training efficiency and performance, yielding a model that responds like a honeypot server.
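The authors did this through LlamaFactory; as a rough equivalent, the hedged sketch below shows how LoRA, 4-bit QLoRA, NEFTune noise, and Flash Attention 2 can be combined in the Hugging Face transformers/peft/trl stack. Apart from the learning rate and step count reported later in the article, the hyperparameters and dataset name are illustrative, and exact argument names vary between library versions.

```python
# Illustrative LoRA/QLoRA fine-tuning sketch; not the authors' LlamaFactory configuration.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from trl import SFTConfig, SFTTrainer

MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"

# QLoRA: load the base model in 4-bit and attach low-rank adapters.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, quantization_config=bnb,
    attn_implementation="flash_attention_2",   # Flash Attention 2
    torch_dtype=torch.bfloat16, device_map="auto")
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]))

tokenizer = AutoTokenizer.from_pretrained(MODEL)
dataset = load_dataset("json", data_files="honeypot_pairs.jsonl", split="train")
dataset = dataset.map(lambda ex: {"text": f"{ex['instruction']}\n{ex['output']}"})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="llm-honeypot-lora",
        learning_rate=5e-4,             # matches the reported learning rate
        max_steps=36,                   # matches the reported number of steps
        per_device_train_batch_size=2,
        neftune_noise_alpha=5,          # NEFTune noise
        dataset_text_field="text",
    ),
)
trainer.train()
```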
The paper proposes an LLM-Honeypot framework that pairs an SSH server with a fine-tuned LLM to interact with attackers in natural language, enabling realistic simulation and analysis of attacker behavior.
The custom SSH server, built using Python’s Paramiko library, employs a fine-tuned language model to generate realistic responses to user commands.
It logs SSH connections, user credentials, and command interactions, providing valuable data for cybersecurity analysis.
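A minimal sketch of such a Paramiko-based SSH front end is shown below. Here `query_llm` is a placeholder for the call to the fine-tuned model, and the credential and command logging mirrors what the article describes; the port, prompt, and logging details are assumptions.

```python
import logging
import socket
import threading
import paramiko

logging.basicConfig(filename="honeypot.log", level=logging.INFO)
HOST_KEY = paramiko.RSAKey.generate(2048)

def query_llm(command: str) -> str:
    """Placeholder for the fine-tuned model call (e.g., a local inference endpoint)."""
    return f"bash: {command.split()[0]}: command not found\r\n" if command else "\r\n"

class HoneypotServer(paramiko.ServerInterface):
    def check_auth_password(self, username, password):
        # Accept any credentials, but record them for analysis.
        logging.info("login attempt user=%s password=%s", username, password)
        return paramiko.AUTH_SUCCESSFUL

    def get_allowed_auths(self, username):
        return "password"

    def check_channel_request(self, kind, chanid):
        if kind == "session":
            return paramiko.OPEN_SUCCEEDED
        return paramiko.OPEN_FAILED_ADMINISTRATIVELY_PROHIBITED

    def check_channel_shell_request(self, channel):
        return True

    def check_channel_pty_request(self, channel, term, width, height, pw, ph, modes):
        return True

def handle_client(client):
    transport = paramiko.Transport(client)
    transport.add_server_key(HOST_KEY)
    transport.start_server(server=HoneypotServer())
    chan = transport.accept(timeout=30)
    if chan is None:
        return
    chan.send(b"$ ")
    buffer = b""
    while True:
        data = chan.recv(1024)
        if not data:
            break
        buffer += data
        if b"\r" in data or b"\n" in data:
            command = buffer.decode(errors="ignore").strip()
            buffer = b""
            logging.info("command: %s", command)
            # Reply with the model-generated terminal output, then a fresh prompt.
            chan.send(("\r\n" + query_llm(command) + "$ ").encode())
    transport.close()

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("0.0.0.0", 2222))
    sock.listen(100)
    while True:
        conn, addr = sock.accept()
        logging.info("connection from %s", addr)
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()
```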
The fine-tuned model’s training losses exhibited a steady decline, indicating effective learning from the dataset.
A learning rate of 5×10⁻⁴ was used for 36 training steps, resulting in consistent performance improvement and an enhanced ability to generate realistic, contextually appropriate responses.
The fine-tuned model outperformed the base model at generating terminal outputs, with consistently higher similarity scores and lower distance metrics across all samples, indicating that its outputs closely match the expected responses of a Cowrie honeypot server.
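The article does not name the exact metrics, so as a simple illustration, the sketch below scores a model response against a reference Cowrie output using a ratio-based similarity and an edit (Levenshtein) distance; the sample strings are made up.

```python
from difflib import SequenceMatcher

def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings (iterative dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def compare(model_output: str, cowrie_output: str) -> dict:
    """Higher similarity and lower distance mean closer alignment with Cowrie."""
    return {
        "similarity": SequenceMatcher(None, model_output, cowrie_output).ratio(),
        "distance": levenshtein(model_output, cowrie_output),
    }

print(compare("Linux srv 5.15.0-generic x86_64 GNU/Linux",
              "Linux srv 5.15.0-generic x86_64 GNU/Linux"))
```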
The paper proposes a new method for creating interactive and realistic honeypot systems using LLMs. By fine-tuning an LLM on attacker data, the system enhances response quality, improves threat detection, and provides deeper insights into attacker behavior.
They plan to expand the training datasets, explore alternative fine-tuning techniques, and incorporate behavioral analysis by deploying the system publicly to collect attack logs and building knowledge graphs that map attacker strategies.
They will also evaluate performance using metrics like accuracy and interaction quality to refine the model and enhance honeypots for better cyber-threat detection and analysis.