ChatGPT Creator Sued for $3 Billion Over Theft of Private Data

A class action complaint filed on Wednesday claims that OpenAI and Microsoft stole “vast amounts of private information” from internet users, without their permission, to train ChatGPT. The suit seeks $3 billion in damages.

A class action lawsuit has been filed against OpenAI in a California federal court. It claims that OpenAI collected 300 billion words from the internet without registering as a data broker or obtaining permission. The suit was brought on behalf of sixteen unnamed plaintiffs.

In simple terms, the complaint alleges that OpenAI used “stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge.”

Microsoft is OpenAI’s major customer and business partner, paying the corporation billions of dollars to license AI technologies.

The businesses allegedly continue to “unlawfully collect and feed additional personal data from millions of unsuspecting consumers worldwide to continue developing and training the products,” according to the complaint.

The complaint names popular AI products created by OpenAI and used by Microsoft, including the language models GPT-3.5 and GPT-4, the image model DALL-E, and the text-to-speech model VALL-E.

Data Allegedly Stolen By OpenAI

  • Names
  • Contact information (including phone numbers and email addresses)
  • Payment information
  • Social media information
  • Chat log data
  • Usage data and analytics
  • Cookies

“Defendants have been unjustly enriched by their theft of personal information as its billion-dollar AI business, including ChatGPT and beyond, was built on harvesting and monetizing Internet users’ personal data,” the lawsuit states.

“Thus, Plaintiffs and the Classes have a right to disgorgement and/or restitution damages representing the value of the stolen data and/or their share of the profits Defendants earned thereon.”

Complaint Demands OpenAI and Microsoft Adopt Additional Measures

The complaint demands that OpenAI and Microsoft adopt extra measures and stop violating people’s privacy.

The first is to disclose what information is being collected and how it will be used. The second, according to the plaintiffs, is to adhere to a set of ethical standards and to compensate users for the data that was taken.

Finally, the complaint demands that internet users be given the option to opt out of data collection and that all unlawful data gathering stop.

The complaint also refers to the “existential threat” that AI may pose in the absence of “immediate legal intervention.”

It cites recent appeals for action by well-known figures who have urged a halt to, or regulation of, the spread of AI systems.

“The proliferation of AI—including Defendants’ products—pose an existential threat if not constrained by the reasonable guardrails of our laws and societal mores,” the complaint says.

“Defendants’ business and scraping practices raise fundamentally important legal and ethical questions that must also be addressed. Enforcing the law will not amount to stifling AI innovation, but rather a safe and just AI future for all.”

As of yet, neither Microsoft nor OpenAI has responded to the complaint brought against them. The case has been filed, but it remains unclear whether the court will allow the proceedings to move forward.


Gurubaran

Gurubaran is a co-founder of Cyber Security News and GBHackers On Security. He has 10+ years of experience as a Security Consultant, Editor, and Analyst in cybersecurity, technology, and communications.
