ChatGPT Chief Testifies on AI risks To US Congress

To mitigate the threats posed by increasingly potent AI systems, government action will be essential, according to the CEO of the artificial intelligence company that produces ChatGPT.

The success of OpenAI’s chatbot, ChatGPT, has stoked worries among legislators and fueled an AI arms race, concerns that came to a head during a Senate hearing.

“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” OpenAI CEO Sam Altman said at a Senate hearing.


For the most potent AI systems, Altman suggested the establishment of a U.S. or global agency with the capacity to “take that license away and ensure compliance with safety standards.”

Raised Concerns About The Next Generation

What began as educators’ panic over students using ChatGPT to cheat on homework has grown into broader concern that the next generation of “generative AI” tools could deceive people, spread false information, violate copyright laws, and displace some jobs.

The societal concerns that brought Altman and other tech CEOs to the White House earlier this month have prompted U.S. agencies to promise to crack down on harmful AI products that violate current civil rights and consumer protection laws.

Even so, there is no immediate indication that Congress will draft comprehensive new AI rules, as European lawmakers are doing.

Sen. Richard Blumenthal, a Democrat from Connecticut who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology, and the law, opened the hearing with a recorded speech that sounded like him but was in fact a voice clone, trained on Blumenthal’s floor speeches and reading opening remarks written by ChatGPT.

The result was impressive, he noted, before asking: “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin’s leadership?”

Altman largely avoided specifics, beyond warning that the sector could “significantly harm the world” and that “if this technology goes wrong, it can go quite wrong.”

Also testifying were Gary Marcus, a former NYU professor and critic of AI hype, and Christina Montgomery, vice president and director of privacy at IBM.

Montgomery underlined the importance of balancing innovation with ethical behavior and cautioned against rushing AI development. Both Altman and Montgomery acknowledged that AI could create jobs as well as destroy them.

Altman recently demonstrated ChatGPT’s capabilities to members of Congress, and attendees broadly acknowledged the need for AI regulation. Altman has stated his commitment to the responsible development of AI while acknowledging its risks.

Elon Musk and others, however, call for a temporary halt to developing potent AI systems because of the grave societal concerns involved.

Government Involvement Is Crucial To Regulate AI

The fact that a separate committee hearing on the government’s use of AI took place at the same time as the Senate session shows how important AI is becoming to legislators.

The government’s emphasis on ethical AI development is evident in Altman’s meetings with senior officials, including Vice President Kamala Harris and President Joe Biden. Altman favors caution and stronger safety precautions, but he doubts that the open letter calling for a suspension of training is the best course of action.

Altman’s testimony underscored the urgent need for government engagement to regulate AI, recognizing its transformative potential while stressing the importance of responsible development. The discussions highlighted the many difficulties AI raises and the ongoing effort to balance innovation with risk reduction.

“We think that AI should be regulated at the point of risk, essentially,” Montgomery said, by creating rules that govern particular uses of AI rather than the technology itself.

Gurubaran
Gurubaran is a co-founder of Cyber Security News and GBHackers On Security. He has 10+ years of experience as a Security Consultant, Editor, and Analyst in cybersecurity, technology, and communications.
