Sunday, May 19, 2024

Hackers Employ Deepfake Technology To Impersonate LastPass CEO

A LastPass employee recently became the target of an attempted fraud involving sophisticated audio deepfake technology.

This incident underscores the urgent need for heightened cybersecurity awareness and the implementation of robust verification processes within organizations.

The Rise of Deepfake Technology

Deepfake technology, which employs generative artificial intelligence to create hyper-realistic audio or visual content, has been a growing concern among cybersecurity experts for several years.

Initially associated with political misinformation campaigns, the technology’s potential for harm has expanded into the private sector, with fraudsters leveraging it for elaborate impersonation schemes.

The technology’s accessibility has dramatically increased, with numerous websites and applications enabling virtually anyone to craft convincing deepfakes.

Historically, deepfakes have been used in high-profile fraud cases, such as a 2019 incident in which a UK company's employee was tricked into transferring funds to a fraudster impersonating the CEO through voice-generating AI.

More recently, a finance worker at a Hong Kong-based multinational was deceived into sending $25 million to perpetrators using video deepfake technology to impersonate key company officials during a video call.

The LastPass Incident: A Close Call

The recent attempt on a LastPass employee represents a significant escalation in the use of deepfake technology for corporate fraud.

The employee received multiple calls, texts, and at least one voicemail via WhatsApp, all featuring an audio deepfake of the company’s CEO.

The fraudulent communication was immediately suspicious to the employee due to its occurrence outside normal business channels and the presence of social engineering red flags, such as undue urgency.

Screen capture displaying the WhatsApp attempted contact using deepfake audio as part of a CEO impersonation.

Fortunately, the LastPass employee did not engage with the fraudulent messages and promptly reported the incident to the company’s internal security team.

This swift action allowed LastPass to mitigate any potential threat and use the incident as a case study to enhance awareness of deepfake technology’s dangers within the company and the broader cybersecurity community.

The incident serves as a critical reminder of the importance of verifying the identity of individuals claiming affiliation with a company, especially when contacted through unconventional channels.
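The verification practice described above can be sketched in code. The following is a minimal, hypothetical illustration (the channel allow-list, keyword set, and function name are assumptions, not LastPass's actual process): requests arriving outside approved business channels, or carrying urgency cues, are flagged for out-of-band verification before anyone acts on them.

```python
# Hypothetical sketch of an out-of-band verification check for executive
# requests. The approved channels and urgency keywords below are
# illustrative placeholders, not any organization's real policy.

APPROVED_CHANNELS = {"corporate_email", "corporate_phone", "slack"}
URGENCY_KEYWORDS = {"urgent", "immediately", "wire", "confidential", "asap"}

def needs_verification(channel: str, message: str) -> bool:
    """Return True if a request should be verified through a known,
    independent channel before any action is taken."""
    off_channel = channel not in APPROVED_CHANNELS
    urgent = any(word in message.lower() for word in URGENCY_KEYWORDS)
    return off_channel or urgent

# A WhatsApp message invoking urgency, like the one in this incident,
# would be flagged for verification:
print(needs_verification("whatsapp", "I need this wire sent immediately"))  # → True
```

The key design point is that the check never trusts the voice or name on the message itself; any flagged request must be confirmed by contacting the purported sender through a separately known channel.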

LastPass’s proactive approach in sharing details of the attempted fraud aims to encourage other organizations to remain vigilant and educate their employees about cybercriminals’ evolving tactics.

In response to the growing threat posed by deepfake technology, LastPass is collaborating with intelligence-sharing partners and other cybersecurity entities to share knowledge about such tactics.

This collective effort is crucial for staying ahead of fraudsters and safeguarding the integrity of corporate communications and transactions.

The attempted deepfake call targeting a LastPass employee is a stark illustration of the sophisticated methods employed by cybercriminals in the digital age.

It highlights the imperative for continuous education, vigilance, and developing secure verification protocols to protect against the ever-evolving threats posed by malicious actors in the cyber realm.


Divya
Divya is a Senior Journalist at GBhackers covering Cyber Attacks, Threats, Breaches, Vulnerabilities and other happenings in the cyber world.
