NVIDIA to Invest $26 Billion Over Next Five Years to Build World's Leading Open-Source AI Large Language Models


IT Home, March 12 — Wired published a blog post today (March 12) reporting that NVIDIA has announced it will invest $26 billion (IT Home note: approximately 178.79 billion RMB at current exchange rates) over the next five years to develop open-source AI models. The report interprets this as the beginning of a strategic shift for NVIDIA, from a pure chip manufacturer to a leading AI research lab.

Wired believes that NVIDIA is evolving from a chip maker with a strong software ecosystem into a top-tier AI frontier lab capable of competing with OpenAI and DeepSeek. Since these self-developed models will deeply optimize their own hardware, this move will further solidify NVIDIA’s dominance in the AI chip field.

In terms of technology strategy, NVIDIA has chosen a middle path: "open weights," meaning the model parameters are published but a fully open-source license is not necessarily adopted.

IT Home note: Open-weight AI models make the key parameters (weights) that determine a model's behavior freely available to the public. Unlike fully closed-source models (such as GPT-4), developers can download these models and run or fine-tune them on their own hardware.

This strategy differs both from OpenAI's closed approach and from Meta's fully open-source Llama series. Enterprises increasingly want models that are transparent and customizable; if NVIDIA can launch open-weight models optimized for its own hardware, it could build a strong technological moat.

The $26 billion investment will cover model development, computing infrastructure, research talent, and ecosystem building. By comparison, training GPT-4 reportedly cost about $3 billion.

This means that, with its own core computing resources and a research team quietly assembled over the past two years, NVIDIA has ample budget to develop multiple cutting-edge large models.

Documents indicate that the related funding will be gradually implemented over the next 18 to 24 months, with the first models expected to be released by late 2026 or early 2027.

Financial analysts predict that if NVIDIA maintains its hardware dominance while capturing 10% of the foundational model market, it could generate an additional $50 billion annually within three years.

NVIDIA's open-source strategy addresses a real industry pain point. The core models of leading American companies such as OpenAI, Anthropic, and Google are closed-source and accessible only through cloud APIs, and Meta has hinted it may tighten its open-source policies in the future. In contrast, Chinese companies such as DeepSeek and Alibaba have attracted many global developers with free, open-source releases.

Bryan Catanzaro, Vice President of Deep Learning Research at NVIDIA, stated that promoting the development of an open-source ecosystem fully aligns with NVIDIA’s core interests.

Looking ahead, NVIDIA’s AI model landscape continues to expand rapidly. The company recently completed pre-training of a massive model with 550 billion parameters.

Kari Briski, Vice President of Enterprise Generative AI Software, emphasized that NVIDIA develops these cutting-edge models not only to test computing power but also to push the limits of storage, networking, and supercomputing data centers, thereby guiding the development roadmap for the company’s next-generation hardware architecture.
