
Hackers Are Using AI For Cyberattacks. How Can We Stop Them?


The use of AI has greatly increased over the past few months, with programs like ChatGPT and Bing AI making the technology freely available to all.

The technology has been used to create beautiful works of art and poetry, but also for more sinister purposes. Cybersecurity experts must be aware of these growing threats and how to counter them.

How Can AI Be Used For Cyberattacks?

The most common method is using AI to generate malicious programming code that can disrupt systems and potentially steal data.


In the past, amateur hackers had to rely on the dark web for prewritten attack scripts.

Now, with some skillful prompt manipulation, any public AI chatbot can produce such code in seconds, although there is no guarantee the code will work.

However, it can pose a severe threat to systems, companies, and individuals if it does.

AI can also be used to generate convincing voice clones. In one common scam, a grandparent receives a phone call and hears what sounds like the voice of their grandchild saying they’ve been kidnapped and are being held for ransom.

It has also been used to imitate the voices of CEOs, demanding that their accounting departments transfer funds to offshore bank accounts.

AI can also be used to generate “deepfake” videos that purport to contain “new training” or “special instructions,” requesting that the viewer send money or disclose privileged information to what appears to be their manager or even the CEO.

The videos are entirely digital and created by hackers, but they can be convincing enough that even someone who has worked with the person for years might not be able to tell.

Similarly, an AI-generated email or phone call could convince an unsuspecting employee to give a hacker access to company servers, opening the door to ransomware attacks or the theft of trade secrets and financial data belonging to the company and its customers.

Countering AI Crime

Although there are many ways that AI can be used to commit cybercrimes, there are also a fair number of solutions that cybersecurity experts can use to make sure these crimes do not affect them or their companies.

A simple, non-technical solution is to institute a codeword or phrase that a CEO or another member of management must use over the phone to give commands involving financial or sensitive data, ideally something that couldn’t easily be guessed.

It’s also a good idea to rotate the codeword regularly in case it is ever leaked.
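One way to handle that rotation is to derive the current codeword from a shared secret and the calendar, so both parties always agree on the word without ever transmitting it. The Python sketch below is a minimal illustration of the idea; the secret, the word list, and the weekly rotation window are all assumptions for demonstration, not a prescribed scheme.

```python
import hashlib
import hmac
import time

# Hypothetical shared secret, distributed out of band (e.g., in person).
SHARED_SECRET = b"replace-with-a-long-random-secret"

# Assumed rotation window: one week, in seconds.
ROTATION_PERIOD = 7 * 24 * 60 * 60

# Small illustrative word list; a real deployment would use a larger one.
WORDLIST = ["harbor", "juniper", "cobalt", "meadow", "quartz", "lantern"]

def current_codeword(secret: bytes = SHARED_SECRET) -> str:
    """Derive this week's codeword from the secret and the clock."""
    window = int(time.time()) // ROTATION_PERIOD
    digest = hmac.new(secret, str(window).encode(), hashlib.sha256).digest()
    # Map the digest to a pronounceable word plus a two-digit check number.
    return f"{WORDLIST[digest[0] % len(WORDLIST)]}-{digest[1] % 100:02d}"

def verify_codeword(spoken: str) -> bool:
    """Compare the spoken word to the expected one in constant time."""
    return hmac.compare_digest(spoken.strip().lower(), current_codeword())
```

Because the word changes automatically each window, a leaked codeword goes stale on its own, and nothing sensitive travels over the phone except the word itself.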

Another idea is to use AI itself to counter potential threats. Companies like Aura have begun developing AI-based defenses that can detect subtle changes a human engineer is likely to miss. These systems can also identify employee behavior that might suggest a potential insider threat.
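Aura’s technology is proprietary, but the general idea behind behavior-based detection can be sketched with a standard anomaly-detection model. The example below uses scikit-learn’s Isolation Forest on invented login features; it is a conceptual illustration only, not a description of any vendor’s product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented training data. Each row describes one session:
# [login hour (0-23), megabytes transferred, failed login attempts]
normal_activity = np.array([
    [9, 120, 0], [10, 95, 0], [8, 150, 1], [11, 110, 0],
    [9, 130, 0], [10, 100, 0], [14, 140, 0], [9, 125, 1],
])

# Fit the model on what "normal" looks like for this employee.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_activity)

# A 3 a.m. session moving 9 GB after repeated failed logins should stand out.
suspicious = np.array([[3, 9000, 5]])
print(model.predict(suspicious))  # -1 = anomalous, 1 = normal
```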

In his Forbes Technology Council post, Hari Ravichandran, CEO of Aura, writes: “While it is crucial to be vigilant against the weaponized use of AI, it is equally – if not more – important to recognize the potential of AI to improve cybersecurity and benefit society as a whole.”

Training Employees To Recognize AI

Another important part of handling AI-based threats is ensuring that everyone in the company, from the CEO to part-time college interns, receives proper training on the use of AI and how to identify it.

For example, AI-generated articles can be found all over the internet, and they can be difficult for the untrained eye to identify. Because the technology is not yet perfect, however, telltale details remain: repetition, sentences that are technically correct but phrased in ways a native English speaker would not use, and overuse of the word “the.” These cues can help determine whether a human or a machine wrote a text.
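Some of these cues can even be quantified crudely. The sketch below is a toy heuristic that counts repeated three-word phrases and the frequency of “the”; no thresholds here are validated, and it should not be mistaken for a real detector.

```python
import re
from collections import Counter

def text_signals(text: str) -> dict:
    """Compute crude stylistic signals sometimes associated with AI text."""
    words = re.findall(r"[a-z']+", text.lower())
    trigrams = list(zip(words, words[1:], words[2:]))
    counts = Counter(trigrams)
    # Total occurrences of any three-word phrase that appears more than once.
    repeated = sum(c for c in counts.values() if c > 1)
    return {
        "the_frequency": words.count("the") / max(len(words), 1),
        "repeated_trigram_ratio": repeated / max(len(trigrams), 1),
    }

sample = ("The system processes the data and the system stores the data. "
          "The system processes the data efficiently.")
print(text_signals(sample))
```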

This training should also cover phishing scams, which can now be generated with AI as well.

Over 94% of malware is delivered by email, and employees without proper training can be tricked into opening attachments or handing information to unauthorized parties because an email appeared to come from someone they knew.
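Basic triage of inbound mail can catch some of these messages before a person ever reads them. The Python sketch below, which assumes access to raw messages, checks for two common red flags: a Reply-To domain that differs from the From domain, and risky attachment types. It is a minimal illustration, not a complete phishing filter.

```python
from email import message_from_string
from email.utils import parseaddr

# A few attachment extensions commonly abused to deliver malware.
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".docm", ".xlsm"}

def flag_email(raw: str) -> list[str]:
    """Return a list of red flags found in one raw email message."""
    msg = message_from_string(raw)
    flags = []
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2]
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2]
    if reply_domain and reply_domain != from_domain:
        flags.append(f"Reply-To domain ({reply_domain}) differs from From ({from_domain})")
    for part in msg.walk():
        name = part.get_filename() or ""
        if any(name.lower().endswith(ext) for ext in RISKY_EXTENSIONS):
            flags.append(f"Risky attachment: {name}")
    return flags

# Hypothetical message with a mismatched Reply-To domain.
raw = ("From: ceo@example.com\n"
       "Reply-To: ceo@examp1e-mail.biz\n"
       "Subject: Urgent wire transfer\n\n"
       "Please process immediately.")
print(flag_email(raw))
```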

Perhaps the best way is to have them spend 15 or 20 minutes exploring ChatGPT or Bing AI for themselves.

Once they’ve seen what artificial intelligence can do and how it works, they’ll be better able to spot it when a potential situation arises.

Conclusion

Although the future is generally uncertain, one thing we do know is that AI is here to stay. As the technology improves, new threats will continue to appear for hackers and other bad actors to exploit.

Companies worldwide already lose an estimated $600 billion a year to cybercrimes. With AI in the mix, that number is only expected to increase in the coming years.

However, every problem has a solution. Individuals and companies can protect themselves from weaponized artificial intelligence through AI, proper employee and management training, and non-technical safeguards.
