Four Chinese cybercriminals were taken into custody after using ChatGPT to create ransomware. The case is the first of its kind in China, where OpenAI's popular chatbot is not legally available and Beijing has been cracking down on foreign AI.

On Thursday, the state-run Xinhua News Agency said the attack was first reported by an unnamed company in Hangzhou, the capital of the eastern province of Zhejiang, whose systems had been locked by ransomware.

To restore access, the hackers demanded 20,000 Tether, a cryptocurrency stablecoin pegged one-to-one to the US dollar.

When police apprehended them in late November in Beijing and two other locations in Inner Mongolia, the suspects confessed to “writing versions of ransomware, optimizing the program with the help of ChatGPT, conducting vulnerability scans, gaining access through infiltration, implanting ransomware, and carrying out extortion.”

The report did not specify whether the use of ChatGPT was included in the charges. The chatbot operates in a legal gray area in China owing to Beijing's efforts to restrict access to foreign generative artificial intelligence technologies.

Generative AI Stirred Up Multiple Challenges

Chinese users became interested in ChatGPT and related products after OpenAI launched its chatbot at the end of 2022. However, OpenAI has blocked internet protocol addresses in China and Hong Kong, as well as in sanctioned markets such as North Korea and Iran.

Some users circumvent the restrictions by using virtual private networks (VPNs) and a phone number from a supported region.

According to a report by the law firm King & Wood Mallesons, there are “compliance risks” for domestic businesses that build or rent VPNs to access OpenAI's services, such as ChatGPT and the text-to-image generator Dall-E.

Given the popularity of generative AI, the number of legal issues concerning the technology has skyrocketed. In February, Beijing police warned about ChatGPT's potential to “commit crimes and spread rumors.”

This year, the US Federal Trade Commission warned that scammers are using artificial intelligence (AI) to clone the voices of real people and deceive consumers. This can be done with just a brief audio sample of a person's voice.

This week, the New York Times filed a lawsuit against OpenAI and Microsoft, the company's primary investor, claiming that the firms' powerful AI models were improperly trained on millions of its articles. The case is expected to be closely watched for its potential legal ramifications.

Recently, concerns about cybersecurity and intellectual property have increased due to generative AI, prompting policymakers to consider how to respond.
