Security researchers have long debated whether ChatGPT can be used to create phishing sites, and whether it can also be used to detect them accurately.
The experiment described here was conducted to gauge how much cybersecurity knowledge ChatGPT has picked up from its training data and how it might assist human analysts.
Researchers from Kaspersky examined 5,265 URLs, of which 2,943 were safe and 2,322 were phishing.
The researchers posed the straightforward question “Does this link lead to a phishing website?” to ChatGPT (GPT-3.5). Based solely on the URL, the AI chatbot achieved a detection rate of 87.2% and a false positive rate of 23.2%.
According to the report, the false positive rate is unacceptable despite the high detection rate: what would happen if every fifth website you visited were blocked? No machine-learning method can guarantee a zero false positive rate, but this figure is still far too high.
When the researchers asked instead, “Is this link safe to visit?”, the outcomes were significantly worse: the detection rate rose to 93.8%, but the false positive rate jumped to 64.3%.
“It turns out that the more general prompt is more likely to prompt a verdict that the link is dangerous”, reports Kaspersky.
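The two figures quoted for each prompt follow directly from per-URL verdicts. A minimal sketch of how detection rate and false positive rate are computed, using illustrative verdicts rather than Kaspersky's data:

```python
def rates(verdicts, labels):
    """verdicts/labels: lists of booleans (True = phishing).

    Detection rate = fraction of phishing URLs flagged;
    false positive rate = fraction of safe URLs flagged.
    """
    tp = sum(v and l for v, l in zip(verdicts, labels))
    fp = sum(v and not l for v, l in zip(verdicts, labels))
    phishing = sum(labels)
    safe = len(labels) - phishing
    return tp / phishing, fp / safe

# Toy data: 4 phishing and 4 safe URLs (illustrative only).
labels   = [True, True, True, True, False, False, False, False]
verdicts = [True, True, True, False, True, False, False, False]
dr, fpr = rates(verdicts, labels)
print(f"detection rate {dr:.1%}, false positive rate {fpr:.1%}")
# detection rate 75.0%, false positive rate 25.0%
```

A more permissive prompt shifts both numbers in the same direction: more phishing URLs get flagged, but so do more safe ones, which is exactly the trade-off the two prompts above exhibit.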
The most impressive aspect of ChatGPT’s performance was its extraction of the likely phishing target, i.e., the organization a fraudulent URL impersonates.
When attackers craft their samples, they try to trick consumers into believing a URL is legitimate and belongs to a particular organization, while obfuscating it just enough to evade automated analysis.
Extracting the attack target can be useful in many scenarios.
The researchers added that ChatGPT does a great job of extracting the names of various internet and financial services, requiring only a small amount of post-processing (e.g., combining “Apple” and “iCloud”, or deleting “LLC” and “Inc”).
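That kind of light post-processing can be sketched as a simple normalization pass. The alias table and suffix list below are illustrative assumptions, not Kaspersky's actual rules:

```python
import re

# Hypothetical alias table: map product names to their parent brand.
ALIASES = {"icloud": "Apple", "fb": "Facebook"}
# Strip common corporate suffixes from the end of an extracted name.
SUFFIXES = re.compile(r"\b(?:LLC|Inc|Ltd)\.?$", re.IGNORECASE)

def normalize_target(name: str) -> str:
    cleaned = SUFFIXES.sub("", name).strip(" ,.")
    return ALIASES.get(cleaned.lower(), cleaned)

print(normalize_target("Apple Inc"))  # Apple
print(normalize_target("iCloud"))     # Apple
print(normalize_target("Steam LLC"))  # Steam
```

Merging aliases this way lets verdicts for “iCloud” and “Apple” be counted against a single target organization.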
Major websites like Facebook, TikTok, and Google were among the organizations mentioned.
There were also marketplaces like Amazon and Steam, many banks from around the world, from Australia to Russia, and cryptocurrency and delivery services.
ChatGPT recognizes these organizations because it has enough real-world knowledge from its training data; it successfully identified a target more than half the time.
Still, the results from both prompting strategies fell short. “It is possible to use this type of technology to assist flesh-and-blood analysts by highlighting suspicious parts of the URL and suggesting possible attack targets. It could also be used in weak supervision pipelines to improve classic ML pipelines”, the researchers said.
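The weak-supervision idea mentioned here can be sketched as combining a noisy LLM verdict with simple heuristic labeling functions and resolving them by majority vote to produce training labels for a classic ML model. The heuristics and the stand-in LLM verdict below are illustrative assumptions, not Kaspersky's pipeline:

```python
import re

def lf_has_ip(url):
    # Raw IP addresses in place of hostnames are a common phishing sign.
    return bool(re.search(r"//\d+\.\d+\.\d+\.\d+", url))

def lf_many_hyphens(url):
    # Heavily hyphenated paths/hosts often try to mimic brand names.
    return url.count("-") >= 3

def lf_llm_verdict(url):
    # Stand-in for an LLM's yes/no verdict (placeholder logic).
    return "login" in url and "secure" not in url

def weak_label(url):
    # Majority vote over noisy labeling functions -> noisy training label.
    votes = [lf_has_ip(url), lf_many_hyphens(url), lf_llm_verdict(url)]
    return sum(votes) >= 2

print(weak_label("http://192.168.0.1/apple-id-login-verify"))  # True
print(weak_label("https://example.com/"))                      # False
```

The point of the vote is that no single noisy signal, including the LLM verdict with its high false positive rate, decides the label on its own.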
The researchers say it performs on par with what they would expect from a phishing-analyst intern: good, but never to be left unattended!
Overall, the researchers concluded that ChatGPT and LLMs are not yet prepared to fundamentally alter the cybersecurity landscape, at least not in terms of phishing detection.