Investing.com — Nvidia this week projected that revenue opportunities from its artificial intelligence chips could reach at least $1 trillion through 2027, doubling the forecast of $500 billion through 2026 that it made just last month, tied to its Blackwell and Rubin chip lines.
At its annual GTC developer conference in San Jose, California, CEO Jensen Huang introduced a new central processor and an AI system built on technology that Nvidia licensed from chip startup Groq in a $17 billion deal in December.
Huang also previewed the company’s Feynman chip architecture, slated for 2028 and set to follow its Rubin Ultra processors, though details remained limited to a broad list of planned chips including AI processors and networking components.
The announcements come amid Nvidia’s push to strengthen its position in inference computing — the process by which AI systems respond to queries in real time. While Nvidia’s graphics processors have dominated AI model training, inference is an arena where the company faces growing competition from central processing units and custom chips developed by companies such as Google.
“The inference inflection has arrived,” Huang said at the GTC event. “And demand just keeps on going up.”
The chipmaker is also targeting the market for autonomous AI agents through NemoClaw, a product that integrates with the OpenClaw platform and is designed to add privacy and safety controls to tools capable of executing a wide range of tasks with minimal human oversight.
But after an unprecedented multi-year rally that made Nvidia the first company to reach a $5 trillion market valuation last October, questions have emerged about the durability of its growth and whether reinvesting profits back into the AI ecosystem will generate sufficient returns.
The AI bellwether once again posted better-than-expected results for the January quarter and issued a current-quarter revenue forecast above analyst expectations.
Despite this growth momentum, however, the company’s shares have been largely flat since September, with its price-to-earnings (P/E) multiple compressing to around 17 times fiscal 2028 consensus estimates, according to Vital Knowledge.
“What’s causing the stall? The big factor is the same one that’s hanging over the whole AI industry: people are worried that the AI infrastructure boom isn’t sustainable and could give way to a sharp cliff at some point in the next few years as companies shift their focus back to prioritizing free cash flow,” analysts at Vital Knowledge said.
They also flagged Nvidia’s aggressive habit of investing in AI startups, which has led to “accusations of the firm ‘buying’ revenue.”
As the industry pivots from model training to inference, Vital Knowledge warned that GPUs “might be unnecessarily powerful and expensive” for the task, and that Nvidia could be “forced to compete on price, and sacrifice margin, to fend off competitors.”
Nvidia is “facing more competition than ever,” the analysts stressed, pointing to custom in-house chips as well as third-party processors from other players.
Vital Knowledge also raised concerns about the Groq deal, arguing that it could “simply cannibalize more expensive Vera Rubin chips rather than meaningfully expand the firm’s TAM.”
