In a Rare Long Post, Jensen Huang Predicts Traditional Software and Apps May Disappear in the Coming Years, with AI Agents Likely to Become Mainstream
On Tuesday, local time, NVIDIA CEO Jensen Huang published a rare long blog post about artificial intelligence, his seventh public long article since 2016.
Huang systematically defined the “five-layer AI architecture,” comparing it to a “five-layer cake,” consisting of energy, chips, infrastructure, models, and applications from bottom to top.
According to Jiemian News, the bottom layer of the “five-layer architecture,” energy, is what Huang calls the first principle of AI infrastructure and the absolute constraint on how much intelligence a system can generate. He emphasized that intelligence generated in real time requires electricity delivered in real time: every token produced is the result of electron movement, heat management, and the conversion of energy into computing power, with no layer of abstraction separating intelligence from physics. Energy supply has already become a critical bottleneck for scaling AI.
Above the energy layer is the chip layer, the physical foundation of computing power, which is also NVIDIA’s core territory. Huang pointed out that AI workloads require enormous parallel computing capabilities, high-bandwidth memory, and fast interconnects. Progress in the chip layer directly determines the speed of AI expansion and the reduction of intelligence costs. The current pace of chip technology iteration still struggles to fully match the explosive growth in AI computing demands.
Above the chip layer is the infrastructure layer, which Huang defines as “AI factories.” This layer includes land, power transmission, cooling systems, construction, networks, and systems that coordinate thousands of processors into a single machine. Its design is not for storing information but for creating intelligence. He emphasized that the world is currently building three types of facilities at large scale: chip manufacturing plants, supercomputing factories, and AI factories, making it the largest infrastructure construction in human history.
Above the infrastructure layer is the model layer. Huang noted that AI models can understand various types of information, including language, biology, chemistry, physics, finance, medicine, and the physical world itself. Large language models like ChatGPT are just one category, and industry applications are still limited to surface-level uses, with their deep potential yet to be fully explored. He highlighted the key role of open-source models, citing DeepSeek-R1 as an example. When powerful reasoning models become widely available, they not only change software itself but also activate demand across the entire architecture stack, accelerating application layer adoption and increasing demand for underlying training, infrastructure, chips, and energy.
Huang stated that the topmost application layer of AI is the core area where AI creates economic value, including drug discovery platforms, industrial robots, legal assistants, and autonomous vehicles. The same underlying architecture can support different applications, and there is still vast room for innovation at the application layer.
He predicts that in the coming years, traditional software and app forms may disappear, and a new software paradigm—AI Agents—could become mainstream. Every successful application will drive the layers below it, from models, infrastructure, and chips down to the power plants at the bottom, creating a powerful industry-driven effect.
Regarding concerns about AI’s impact on employment, Huang believes AI will not reduce jobs but will create many new opportunities, especially in infrastructure and skilled trades. The workforce needed for AI infrastructure is enormous—electricians, plumbers, steelworkers, network technicians, installers, and operators—all high-skill, well-paying jobs currently in short supply. Rather than causing unemployment, he argued, AI is filling large labor gaps worldwide in roles such as truck driving, nursing, and accounting.
First Financial News reports that Huang emphasized that the largest-scale AI infrastructure build in history has just begun.
“Billions of dollars have already been invested, but trillions of dollars of infrastructure still need to be built,” Huang said. Every successful application will drive the layers below it, down to the power equipment that sustains its operation. Globally, chip factories, computer assembly plants, and AI factories are being built on an unprecedented scale.
He noted that the manpower required for this construction is enormous—electricians, plumbers, pipefitters, steelworkers, network technicians, installers, and operators. Meanwhile, AI is also boosting productivity across the entire knowledge economy. Increased productivity will generate more capacity, which in turn will drive growth.
Reflecting on the past year, Huang summarized that AI has crossed an important threshold. Model performance has significantly improved, enabling large-scale deployment. Reasoning capabilities have strengthened, hallucinations have decreased, and practical application capabilities have greatly advanced.
On February 25, local time, Huang told foreign media that the market has misjudged AI’s threat to software companies.
He reiterated his view that AI assistants will not replace these companies’ software tools but will instead use them. Though it may sound counterintuitive, many software companies will use AI assistants to develop software and improve efficiency.
Huang said, “All the tools we use today—Cadence, Synopsys, ServiceNow, SAP—exist for a fundamental and legitimate reason. AI assistants will become intelligent software, capable of representing us in using these tools and helping us greatly improve productivity… Ultimately, we need tools to do specific tasks and to feed information back to us in a way we can understand.”
On February 25, NVIDIA released its Q4 FY2026 financial report for the quarter ended January 25, 2026. Quarterly revenue was $68.1 billion, up 73% year-over-year and a record high; GAAP net income was $42.96 billion, up 94%; gross margin was 75%, up 2 percentage points.
For FY2026, NVIDIA’s revenue reached $215.9 billion, a 65% increase year-over-year; GAAP net income was $120.07 billion, up 65%.
Editor | Zhang Jinhe, Du Bo
Proofreader | Jin Mingyu
Daily Economic News compiled from Jiemian News, First Financial, and public information
(Edited by: Wen Jing)