In response to forthcoming artificial intelligence (AI) restrictions from the European Union, OpenAI CEO Sam Altman said the maker of ChatGPT may consider leaving Europe.
The EU is developing the first major set of international regulations for AI. The law would require businesses deploying generative AI tools, including ChatGPT, to disclose any copyrighted material used to build such systems.
Altman said his company wanted to “work with the government” to prevent the technology from causing harm, and compared the rapid development of AI to the invention of the printing press.
Altman also acknowledged that the introduction of AI will affect jobs and that governments will need to devise ways to “mitigate that.”
Speaking at an event at University College London, Altman said: “The right answer is probably something between the traditional European-UK approach and the traditional US approach. I hope we can all get it right together this time.”
Altman said OpenAI would attempt to comply with the European regulation once it is finalized before considering a withdrawal.
“Either we’ll be able to solve those requirements or not. If we can comply, we will, and if we can’t, we’ll cease operating,” Altman said.
This month, EU legislators agreed on a draft of the act. Representatives of the Council, the Commission, and the Parliament will now negotiate the bill’s final details.
“The current draft of the EU AI Act would be over-regulating, but we have heard it’s going to get pulled back. They are still talking about it,” Altman told Reuters.
In a report analyzing the EU AI Act, the Future of Life Institute described general-purpose AI as AI systems with a broad range of potential uses, both intended and unanticipated by the developers.
“There’s so much they could do, like changing the definition of general-purpose AI systems. There’s a lot of things that could be done,” Altman said.
Legislators have proposed the term “general-purpose AI system” to describe AI systems with multiple uses, such as generative AI models like ChatGPT, built by Microsoft-backed OpenAI.
While speaking at University College London, the OpenAI CEO acknowledged disinformation concerns surrounding AI, highlighting in particular the technology’s capacity to produce false information that is “interactive, personalized [and] persuasive,” and saying that more work needed to be done on that front.
Nicole Gill, executive director and co-founder of Accountable Tech, wrote an op-ed for Fast Company last week comparing Altman to Meta’s founder and CEO Mark Zuckerberg, writing, “Lawmakers appear poised to trust Altman to self-regulate under the guise of ‘innovation,’ even as the speed of AI is ringing alarm bells for technologists, academics, civil society, and yes, even lawmakers.”