88 cores and 1.2 TB/s of bandwidth: Vera, NVIDIA's first-generation AI-agent processor
IT House, March 17 — At the 2026 GTC conference in San Jose, California today, NVIDIA announced detailed specifications for its new 88-core Vera data center CPU, which it claims is the world's first processor designed specifically for AI agents and reinforcement learning.
For large-scale data processing, AI training, and inference, NVIDIA says the chip delivers twice the efficiency of traditional rack-level CPUs along with a 50% increase in speed. Founder and CEO Jensen Huang emphasized that CPUs are no longer merely auxiliary components for AI models but have become a core driving force, and that Vera will help AI systems "think" faster and scale further.
In terms of core architecture, the Vera data center CPU features 88 NVIDIA-customized Olympus cores. To meet the demands of multi-tenant AI factories running concurrent tasks, each core supports spatial multithreading, allowing it to run two tasks concurrently and reliably.
Additionally, Vera uses a second-generation low-power memory subsystem based on LPDDR5X, delivering up to 1.2 TB/s of bandwidth: double that of general-purpose CPUs at half the power consumption.
To support extreme data center scalability, NVIDIA also introduced a Vera CPU rack based on its MGX modular architecture. The rack integrates 256 liquid-cooled Vera CPUs and can sustain over 22,500 independent full-speed compute environments, with more than 45,000 independent threads and 400 TB of memory. This yields a sixfold increase in CPU throughput and doubles performance on AI-agent workloads.
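The rack-level figures follow directly from the per-CPU numbers quoted earlier. A back-of-envelope check, assuming 88 Olympus cores per Vera CPU and two threads per core via spatial multithreading, as stated above:

```python
# Sanity check of the quoted rack-scale figures for the MGX-based Vera rack.
CPUS_PER_RACK = 256      # liquid-cooled Vera CPUs per rack (from the article)
CORES_PER_CPU = 88       # custom Olympus cores per Vera CPU
THREADS_PER_CORE = 2     # spatial multithreading: two tasks per core

cores_per_rack = CPUS_PER_RACK * CORES_PER_CPU        # 22,528
threads_per_rack = cores_per_rack * THREADS_PER_CORE  # 45,056

print(f"cores per rack:   {cores_per_rack:,}")    # → 22,528 ("over 22,500")
print(f"threads per rack: {threads_per_rack:,}")  # → 45,056 ("more than 45,000")
```

The computed totals of 22,528 cores and 45,056 threads line up with the article's "over 22,500" environments and "more than 45,000" threads.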
For data transfer, Vera pairs with GPUs over NVLink-C2C interconnect technology, providing coherent bandwidth of up to 1.8 TB/s, seven times that of PCIe 6.0.
The Vera CPU is now in full production, with mass deliveries to key customers such as Meta and Oracle expected to begin in the second half of this year.