
OpenAI Developing Its Own Chip to Reduce Reliance on Nvidia


OpenAI, the organization behind ChatGPT and other advanced AI tools, is making significant strides toward reducing its dependence on Nvidia by developing its first in-house artificial intelligence chip.

According to reports, OpenAI is finalizing the design of its first-generation AI processor, which is expected to be sent for fabrication in the coming months at Taiwan Semiconductor Manufacturing Company (TSMC).

The process, known as “taping out,” marks a critical milestone in chip development. If all goes as planned, OpenAI aims to begin mass production in 2026.


However, there is no certainty that the chip will work flawlessly on the first attempt, as any errors could necessitate costly redesigns and additional tape-out stages.

The move to develop custom chips is seen as strategic for OpenAI, giving the company greater negotiating leverage with existing chip suppliers like Nvidia, which currently dominates the AI chip market with an 80% share.

Similar efforts by tech giants such as Microsoft and Meta have faced challenges, highlighting the complexity of custom chip design.

OpenAI’s in-house team, led by Richard Ho, has grown rapidly, doubling to 40 engineers in recent months. Ho, who previously worked on Google’s custom AI chips, is spearheading the initiative in collaboration with Broadcom.

Reports suggest that designing and deploying a high-performance chip of this magnitude could cost the company upwards of $500 million, with additional investments required for accompanying software and infrastructure.

Chip Features and Deployment

The new chip will leverage TSMC’s cutting-edge 3-nanometer fabrication process, incorporating advanced high-bandwidth memory (HBM) and a systolic array architecture—features commonly found in Nvidia’s chips.
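A systolic array is the dataflow design behind many AI accelerators, including Google's TPUs that Richard Ho previously worked on: a grid of simple processing elements through which operands flow in lockstep, each element multiplying and accumulating as data passes by. As a rough illustration of the concept only, not OpenAI's actual design, here is a minimal Python simulation of an output-stationary systolic array computing a matrix product (the function name and skew model are illustrative):

```python
import numpy as np

def systolic_matmul(a, b):
    """Simulate an output-stationary systolic array computing a @ b.

    PE (i, j) accumulates the dot product a[i, :] . b[:, j]. Operands are
    skewed so that rows of `a` flow rightward and columns of `b` flow
    downward, advancing one processing element per cycle.
    """
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    acc = np.zeros((n, m))
    # In cycle t, PE (i, j) sees operand index s = t - i - j; the offset
    # models the diagonal wavefront of data moving through the grid.
    for t in range(n + m + k - 2):
        for i in range(n):
            for j in range(m):
                s = t - i - j
                if 0 <= s < k:
                    acc[i, j] += a[i, s] * b[s, j]
    return acc
```

The cycle loop shows why such arrays are efficient in hardware: every processing element does one multiply-accumulate per cycle with only nearest-neighbor data movement, so a full n x m x k multiplication finishes in roughly n + m + k cycles instead of n * m * k sequential steps.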

Despite its potential, the chip’s initial deployment will likely be limited to running AI models (inference) rather than training them.

While the custom chip development is an ambitious step, it may take years for OpenAI to match the scale and sophistication of chip programs run by Google and Amazon.

Expanding such efforts would require the AI leader to significantly increase its engineering workforce.

The demand for AI chips continues to soar as generative AI models become increasingly complex.

Organizations, including OpenAI, Google, and Meta, require massive computing power to operate these models, leading to an “insatiable” need for chips. In response, companies are investing heavily in AI infrastructure.

Meta has allocated $60 billion for AI development in 2025, while Microsoft is set to spend $80 billion the same year.

OpenAI’s move to develop its own silicon reflects an industry-wide trend of reducing reliance on dominant suppliers like Nvidia.

Although still in its early stages, the company’s in-house chip initiative could reshape its operational landscape, offering cost savings, competitive flexibility, and improved efficiency as it continues to push the boundaries of AI innovation.


Divya
Divya is a Senior Journalist at GBhackers covering Cyber Attacks, Threats, Breaches, Vulnerabilities and other happenings in the cyber world.


