OpenAI, the organization behind ChatGPT and other advanced AI tools, is making significant strides in its efforts to reduce its dependency on Nvidia by developing its first in-house artificial intelligence chip.
According to reports, OpenAI is finalizing the design of its first-generation AI processor, which is expected to be sent for fabrication in the coming months at Taiwan Semiconductor Manufacturing Company (TSMC).
The process, known as “taping out,” marks a critical milestone in chip development. If all goes as planned, OpenAI aims to begin mass production in 2026.
However, there is no guarantee the chip will work flawlessly on the first attempt; any errors could necessitate costly redesigns and additional tape-out stages.
The move to develop custom chips is seen as strategic for OpenAI, giving the company greater negotiating leverage with existing chip suppliers like Nvidia, which currently dominates the AI chip market with an 80% share.
Similar efforts by tech giants such as Microsoft and Meta have faced challenges, highlighting the complexity of custom chip design.
OpenAI’s in-house team, led by Richard Ho, has grown rapidly, doubling to 40 engineers in recent months. Ho, who previously worked on Google’s custom AI chips, is spearheading the initiative in collaboration with Broadcom.
Reports suggest that designing and deploying a high-performance chip of this magnitude could cost the company upwards of $500 million, with additional investments required for accompanying software and infrastructure.
The new chip will leverage TSMC’s cutting-edge 3-nanometer fabrication process, incorporating advanced high-bandwidth memory (HBM) and a systolic array architecture—features commonly found in Nvidia’s chips.
Despite its potential, the chip's initial deployment will likely be limited to running AI models (inference) rather than training them.
While the custom chip development is an ambitious step, it may take years for OpenAI to match the scale and sophistication of chip programs run by Google and Amazon.
Expanding such efforts would require the AI leader to significantly increase its engineering workforce.
The demand for AI chips continues to soar as generative AI models become increasingly complex.
Organizations, including OpenAI, Google, and Meta, require massive computing power to operate these models, leading to an “insatiable” need for chips. In response, companies are investing heavily in AI infrastructure.
Meta has allocated $60 billion for AI development in 2025, while Microsoft is set to spend $80 billion the same year.
OpenAI’s move to develop its own silicon reflects an industry-wide trend of reducing reliance on dominant suppliers like Nvidia.
Although still in its early stages, the company’s in-house chip initiative could reshape its operational landscape, offering cost savings, competitive flexibility, and improved efficiency as it continues to push the boundaries of AI innovation.