More than 101,000 compromised accounts for OpenAI’s ChatGPT have been discovered on illegal dark web marketplaces.
The stolen credentials were found in the logs of information-stealing malware traded on these markets.
Reports say that in May 2023 alone, 26,802 logs containing hacked ChatGPT accounts were available.
Info stealers are a type of malware that harvests data from the browsers installed on infected machines, including cookies, browsing history, bank card details, and saved credentials, before sending it all to the malware operator.
Along with extensive details about the victim’s device, attackers can also collect data from emails and instant messengers.
Cyber intelligence firm Group-IB says that most of the ChatGPT credentials offered for sale over the past year originated in the Asia-Pacific region.
Many employees use chatbots to streamline their work, whether in company communications or software development.
ChatGPT, by default, keeps a record of all user inquiries and AI responses.
As a result, unauthorized access to ChatGPT accounts may reveal private or sensitive information that can be used to launch attacks specifically against businesses and their employees.
“Many enterprises are integrating ChatGPT into their operational flow.
Employees enter classified correspondences or use the bot to optimize proprietary code,” said Group-IB’s Dmitry Shestakov.
“Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials.”
According to the information shared with Cyber Security News, a significant number of the logs containing ChatGPT accounts were harvested by the infamous Raccoon information stealer.
Logs offered on these markets also include additional details, such as the list of domains found in the log and the IP address of the compromised host.
Between June 2022 and May 2023, the Asia-Pacific region accounted for the largest share (40.5%) of ChatGPT accounts compromised by information stealers.
If you use ChatGPT to enter sensitive data, consider disabling the platform’s chat history feature, or delete such chats manually as soon as you’re done using the service.
Even if you do not save conversations to your ChatGPT account, a malware infection can still result in a data leak, because many information stealers take screenshots of the compromised machine or perform keylogging.
For this reason, people who handle particularly sensitive data should rely only on securely built, locally hosted solutions running on their own servers rather than entrusting it to any cloud-based service.
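As a rough illustration of that last recommendation, the sketch below shows how a team might point an OpenAI-compatible client library at a locally hosted model endpoint so that prompts never leave the company network. The base URL, API key, and model name are hypothetical placeholders, not values from this report, and the exact setup will depend on whichever local inference server is used.

```python
# Minimal sketch, assuming a locally hosted, OpenAI-compatible inference server.
# The base_url, api_key, and model name below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.internal.example:8000/v1",  # hypothetical in-house endpoint
    api_key="not-needed-locally",                    # many local servers ignore this value
)

response = client.chat.completions.create(
    model="local-llm",  # placeholder name exposed by the local server
    messages=[
        {"role": "user", "content": "Summarize this internal incident report ..."},
    ],
)

print(response.choices[0].message.content)
```

Keeping inference on in-house infrastructure removes third-party chat retention from the threat model, although it does nothing against stealers already running on the employee’s machine, which is why the screenshot and keylogging caveat above still applies.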