
Poisoned Facebook Ads Deliver Malware Using Fake ChatGPT, Bard & Other AI Services


Cybercriminals have recently begun impersonating well-known generative AI brands such as ChatGPT, Google Bard, Midjourney, and Jasper on Facebook to steal users’ personal information.

Users on Facebook are deceived into downloading content from fake brand sites and advertisements.

These downloads contain harmful malware that steals users’ internet credentials for banking, social networking, gaming, and other services, their cryptocurrency wallets, and any data saved in their browsers.


According to the Check Point Research Team (CPR), the majority of Facebook campaigns that use fake sites and dangerous advertisements eventually spread malware that steals information.

Unsuspecting users like and comment on the fake posts, spreading them across their own social networks.

How Do Criminals Use Facebook Ads to Steal Private Information?

This new scam makes use of people’s curiosity about popular generative AI apps to trick them out of their passwords and sensitive data.

The attackers begin by creating fake Facebook pages or groups for well-known brands and filling them with engaging content. Unsuspecting users comment on or like that content, ensuring it appears in their friends’ news feeds.

The fake page then advertises a new service through a link. When a user clicks the link, malware designed to steal their internet passwords, cryptocurrency wallets, and other information saved in their browser is downloaded without their knowledge.
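One practical defense against this flow is to verify a download link’s domain before trusting it. The sketch below is a defensive illustration only, not something from the researchers’ report; the allowlisted domains are illustrative assumptions, not an authoritative list.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official brand domains -- illustrative
# assumptions only, not an exhaustive or authoritative list.
OFFICIAL_DOMAINS = {
    "openai.com",
    "bard.google.com",
    "midjourney.com",
    "jasper.ai",
}

def looks_official(url: str) -> bool:
    """Return True if the URL's host is an allowlisted domain
    or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)
```

A lookalike host such as `midjourney-free-download.example.com` fails this check even though it contains the brand name, which is exactly the trick the fake pages rely on.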

“Many of the fake pages offer tips, news, and enhanced versions of AI services Google Bard or ChatGPT”, researchers said.

Fake posts displayed to the users

Additionally, cybercriminals frequently push users toward other AI services and tools. Jasper AI, another well-known AI brand with over 2 million followers, is also being impersonated.

Jasper AI impersonated by cyber criminals

Meanwhile, real users are earnestly debating the role of AI in the comments and liking and sharing the posts, which only increases their reach.

“Most of those Facebook pages lead to similar type landing pages which encourage users to download password-protected archive files that are allegedly related to generative AI engines”, say the researchers.
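Password-protecting an archive is a common trick to keep security tools from inspecting its contents in transit. As a hedged illustration (not part of the researchers’ report), the standard library alone can flag such archives: bit 0 of a ZIP entry’s general-purpose flag field indicates encryption.

```python
import zipfile

def is_encrypted_zip(path: str) -> bool:
    """Return True if any entry in the ZIP archive is password-protected.

    Bit 0 of a ZIP entry's general-purpose flag field is set when the
    entry is encrypted.
    """
    with zipfile.ZipFile(path) as zf:
        return any(info.flag_bits & 0x1 for info in zf.infolist())
```

A mail gateway or endpoint tool could use a check like this to quarantine password-protected archives arriving from untrusted sources for closer inspection.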

Notably, when an unsuspecting user searches for ‘Midjourney AI’ on Facebook and comes across a page with 1.2 million followers, they are likely to assume it is genuine.

Researchers note that the main goal of this fake Midjourney Facebook page is to mislead visitors into downloading malware. Links to malicious websites are mixed with links to authentic Midjourney reviews and social networks to lend credibility.

“The malware makes efforts to gather various types of information from all the major browsers, including cookies, bookmarks, browsing history, and passwords,” researchers said.

“It targets cryptocurrency wallets including Zcash, Bitcoin, Ethereum, and others.”

Final Thoughts

The cybercriminals’ primary objective appears to be harvesting Facebook account credentials and stealing Facebook pages themselves. Pages with large audiences and established advertising budgets are especially attractive targets, since hijacking them lets attackers propagate fraud at scale.

Individuals and organizations must therefore educate themselves, be aware of the risks, and remain vigilant against the tactics cybercriminals employ. Advanced security solutions remain crucial for defending against these evolving threats.


Gurubaran
Gurubaran is a co-founder of Cyber Security News and GBHackers On Security. He has 10+ years of experience as a Security Consultant, Editor, and Analyst in cybersecurity, technology, and communications.


