Sunday, December 10, 2023

Hackers Are Using AI For Cyberattacks. How Can We Stop Them?

The use of AI has greatly increased over the past few months, with programs like ChatGPT and Bing AI making the technology freely available to all.

It has been used to create beautiful works of art and poetry, but also for more sinister purposes. Cybersecurity experts must be aware of these growing threats and how to counter them.

How Can AI Be Used For Cyberattacks?

The most common method is using AI to generate malicious programming code that can disrupt systems and potentially steal data.

In the past, amateur hackers had to rely on the dark web for prewritten scripts.

Now, with some skillful manipulation, any public AI bot can do it in seconds, although it isn’t guaranteed that the code will work.

However, it can pose a severe threat to systems, companies, and individuals if it does.

AI can also be used to generate voice clones. A grandparent might receive a phone call and hear what sounds like their grandchild's voice saying they've been kidnapped and are being held for ransom.

It has also been used to imitate the voices of CEOs, demanding that their accounting departments transfer funds to offshore bank accounts.

AI can also be used to generate “deepfake” videos that purportedly contain “new training” or “special instructions” to the viewer, requesting that they send money or give privileged information to an employee’s manager or even the CEO.

The videos are entirely digital and created by hackers, but they can be convincing enough that even someone who has worked with the person for years might not be able to tell.

Similarly, an AI-generated email or phone call could convince an unsuspecting employee to give a hacker access to company servers, leading to ransomware attacks or the theft of trade secrets and financial data belonging to the company and its customers.

Countering AI Crime

Although there are many ways that AI can be used to commit cybercrimes, there are also a fair number of solutions that cybersecurity experts can use to make sure these crimes do not affect them or their companies.

A simple, non-technical solution is to institute a codeword or phrase that a CEO or another member of management must use over the phone when giving instructions involving financial or sensitive data, ideally something that couldn't easily be guessed.

It's also a good idea to change the code regularly in case it gets leaked. Another approach is to use AI itself to counter potential threats.
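As a rough illustration of how such a rotating codeword might be generated (the wordlist and function name here are hypothetical, not taken from any particular product), a short script could draw a fresh phrase at each rotation:

```python
import secrets

# Hypothetical wordlist; a real deployment would use a much larger one.
WORDLIST = ["harbor", "copper", "meadow", "falcon", "ember", "quartz"]

def new_codeword(num_words: int = 2) -> str:
    """Return a random multi-word verification phrase, e.g. 'falcon-ember'.

    secrets.choice() uses a cryptographically strong random source,
    unlike random.choice(), so the phrase can't easily be predicted.
    """
    return "-".join(secrets.choice(WORDLIST) for _ in range(num_words))
```

Distributing the new phrase over a separate, already-trusted channel (not the same email or phone line an attacker might control) is what makes the rotation meaningful.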

Companies like Aura have begun to develop AI-based defenses that can detect subtle changes a human engineer is likely to miss. These systems can also identify employee behavior that might suggest an insider threat.

In his Forbes Technology Council post, Hari Ravichandran, CEO of Aura, says, “While it is crucial to be vigilant against the weaponized use of AI, it is equally – if not more – important to recognize the potential of AI to improve cybersecurity and benefit society as a whole.”

Training Employees To Recognize AI

Another important part of handling AI-based threats is ensuring that everyone in the company, from the CEO to part-time college interns, receives proper training on the use of AI and how to identify it.

For example, AI-generated articles can be found all over the internet, and they can be difficult for the untrained eye to identify. Still, because the technology is not yet perfected, small tells remain: repetition, sentences that are technically correct but phrased in ways a native English speaker wouldn't use, and overuse of common words such as “the.” These details can help indicate whether a human or a machine wrote a text.
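The repetition tell mentioned above can be turned into a crude, illustrative heuristic. This sketch (a toy metric invented for this article, not a reliable detector) measures what share of a text its single most frequent word accounts for:

```python
from collections import Counter

def repetition_score(text: str) -> float:
    """Fraction of the text taken up by its most frequent word.

    A high score suggests repetitive wording, which the article notes
    can be one (weak) sign of machine-generated text. Illustration only.
    """
    words = text.lower().split()
    if not words:
        return 0.0
    top_count = Counter(words).most_common(1)[0][1]
    return top_count / len(words)
```

In practice no single statistic is decisive; real detection tools combine many such signals, and even they are frequently wrong.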

This training should also contain information on phishing scams, which can now also be delivered with AI.

Over 94% of malware is delivered by email, and employees without proper training are vulnerable to opening attachments or handing information to unauthorized parties because they believed an email came from someone they knew.
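One simple check that training can teach, and that mail filters can automate, is comparing an email's friendly display name against the actual sending domain. The sketch below (the function name and trusted-domain list are placeholders invented for illustration) flags the classic phishing pattern of a trusted-looking name paired with an outside address:

```python
import re

def display_name_mismatch(from_header: str, trusted_domains: set) -> bool:
    """Return True when the From: header's address falls outside the
    trusted domains -- e.g. '"CEO Jane Doe" <jane@evil.example>'.

    This only inspects the header text; real filters also verify
    SPF/DKIM/DMARC, since headers themselves can be forged.
    """
    match = re.search(r"<([^>]+)>", from_header)
    if not match:
        return False  # no angle-bracket address to check
    address = match.group(1)
    domain = address.rsplit("@", 1)[-1].lower()
    return domain not in trusted_domains
```

A check like this catches lookalike domains ("examp1e.com") that a hurried employee reading only the display name would miss.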

Perhaps the best way is to have them spend 15 or 20 minutes exploring ChatGPT or Bing AI for themselves.

Once they’ve seen what artificial intelligence can do and how it works, they’ll be better able to spot it when a potential situation arises.

Conclusion

Although the future is generally uncertain, one thing we do know is that AI is here to stay. As the technology improves, new threats will continue to emerge for hackers and other bad actors to exploit.

Companies worldwide already lose an estimated $600 billion a year to cybercrimes. With AI in the mix, that number is only expected to increase in the coming years.

However, every problem has a solution. Individuals and companies can protect themselves from weaponized artificial intelligence through defensive AI, proper employee and management training, and non-technical safeguards.
