Nvidia: When Blackwell is well

Significant improvements in manufacturing yields first positive sign for Nvidia

Simple Investing
Jun 27, 2025

Nvidia (NVDA) has had a difficult past year, but the company’s long-term prospects remain intact.

The company faced difficulties in ramping the very complex Blackwell platform, and new export controls further complicated things.

Nvidia pointed to growing demand for inference and AI factories, which is driving significant revenue growth.

The Blackwell ramp is now going well, and the transition from Hopper is almost complete.

Furthermore, management reiterated that it is working towards gross margins in the mid-70% range late this year.

Mixed results but better than feared

Nvidia reported revenues of $44.1 billion for 1Q, 2% ahead of consensus expectations.

The company incurred a $4.5 billion charge in 1Q for H20 excess inventory and purchase obligations because of the new restrictions on H20 sales to China.

This was less than the $5.5 billion that the company initially anticipated.

H20 sales in 1Q were $4.6 billion prior to the restrictions, and the company was unable to ship an additional $2.5 billion.

Data center revenue grew 10% sequentially to $39.1 billion, 1 percentage point below expectations.

Gaming revenue grew 48% sequentially to $3.8 billion, 35 percentage points above expectations.

Pro visualization revenue was flat sequentially at $509 million, 3 percentage points above expectations.

Automotive revenue was down 1% sequentially to $567 million, 1 percentage point below expectations.

The data center segment is clearly the most important one for Nvidia, and just to show how fast it has grown, take a look at the figures below.

Nvidia's data center revenue grew from $4.3 billion in the same quarter two years ago to $39.1 billion. Data center revenues are up roughly 9x in just two years.
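
As a quick back-of-envelope check on that figure (a rough sketch using only the two quarterly numbers above), the growth multiple and the implied annualized growth rate work out as follows:

```python
# Back-of-envelope: data center revenue growth over two years
# (quarterly figures cited above, in $ billions)
rev_then = 4.3    # data center revenue in the same quarter two years ago
rev_now = 39.1    # data center revenue in 1Q

multiple = rev_now / rev_then        # total growth multiple over two years
annualized = multiple ** 0.5 - 1     # implied compound annual growth rate

print(f"Growth multiple: {multiple:.1f}x")          # ~9.1x
print(f"Implied annual growth: {annualized:.0%}")   # ~200% per year
```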

Gross margin came in at 71.3%, compared with expectations of 71%, implying in-line margins.

EPS excluding the H20 charge and related tax impact came in at $0.96; including the impact, EPS was $0.81. The ex-charge figure was 9% ahead of expectations.
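
As a rough sanity check on those two EPS figures (a sketch, not from the company: the diluted share count of roughly 24.5 billion is my assumption), the $0.15 per-share gap corresponds approximately to the H20 charge net of its tax benefit:

```python
# Rough reconciliation of the two EPS figures; the ~24.5B diluted
# share count is my approximation, not a figure from the article
eps_ex_charge = 0.96      # EPS excluding the H20 charge and tax impact
eps_incl_charge = 0.81    # EPS including the charge
shares_b = 24.5           # assumed diluted shares outstanding (billions)

per_share_hit = eps_ex_charge - eps_incl_charge   # ~$0.15 per share
implied_net_charge = per_share_hit * shares_b     # ~$3.7B after tax

print(f"Per-share impact: ${per_share_hit:.2f}")
print(f"Implied after-tax charge: ~${implied_net_charge:.1f}B vs. the $4.5B pre-tax charge")
```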

Guidance for 2Q revenue came in at $45.0 billion at the midpoint, 2% below expectations.

The outlook reflects the loss of approximately $8 billion in H20 revenue due to the recent export controls.
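
Adding that to the $2.5 billion Nvidia could not ship in 1Q gives a rough sense of the H20 revenue being forgone across the first two quarters (my own tally of the disclosed figures, not a company estimate):

```python
# Rough tally of forgone H20 revenue across 1Q and 2Q
# (my own aggregation of the disclosed figures, not a company estimate)
unshipped_1q = 2.5    # $B of H20 orders Nvidia could not ship in 1Q
excluded_2q = 8.0     # $B of H20 revenue excluded from 2Q guidance

forgone_h1 = unshipped_1q + excluded_2q
print(f"~${forgone_h1:.1f}B of H20 revenue forgone across the two quarters")  # ~$10.5B
```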

Management is guiding for gross margin to improve to 72% at the midpoint in 2Q, which is in line with expectations.

The company also reiterated that it is working toward gross margins in the mid-70% range late this year.

H20 controls: Charges

Nvidia recognized $4.6 billion in H20 revenue in 1Q, all of which occurred before April 9.

However, new export controls on the H20 were imposed by the US government on April 9.

Because the H20 GPUs were designed specifically for the China market and sold to China with the approval of the prior administration, they have no market outside of China.

Since there was no grace period under the new export controls to sell the remaining H20 inventory, Nvidia recognized a $4.5 billion charge for inventory write-downs and purchase obligations tied to orders received before April 9.

Interestingly, the $4.5 billion charge recognized was less than the $5.5 billion Nvidia initially expected, as the company was able to reuse certain materials.

Nvidia continues to emphasize that losing the China AI accelerator market, which it expects to grow to almost $50 billion, would benefit its foreign competitors both in China and globally, and would have an adverse impact on its own business.

Blackwell ramp: Significant improvement

The Blackwell ramp has not been smooth sailing for Nvidia.

The GB200 NVL72 represented a fundamental architecture change and is complex to build, which resulted in some hiccups along the way.

That said, Blackwell made up almost 70% of data center compute revenue this quarter, with the transition from Hopper almost complete.

Nvidia shared that in 1Q it saw a “significant improvement” in manufacturing yields for Blackwell, and that racks are now shipping to end customers at “strong rates”.

Major hyperscalers are now each deploying almost 1,000 NVL72 racks or 72,000 Blackwell GPUs per week on average.
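
For context, that rack-to-GPU conversion is simple arithmetic, since each NVL72 rack houses 72 Blackwell GPUs (the annualized run rate below is my own illustrative extrapolation, not company guidance):

```python
# NVL72 rack-to-GPU arithmetic; the annualized run rate is an
# illustrative extrapolation, not company guidance
gpus_per_rack = 72        # a GB200 NVL72 rack contains 72 Blackwell GPUs
racks_per_week = 1_000    # per major hyperscaler, on average

gpus_per_week = gpus_per_rack * racks_per_week
gpus_per_year = gpus_per_week * 52

print(f"{gpus_per_week:,} Blackwell GPUs per week")             # 72,000
print(f"~{gpus_per_year:,} GPUs per year at that run rate")     # ~3,744,000
```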

Furthermore, these hyperscalers are set to ramp output further in 2Q.

Nvidia shared that Microsoft has already deployed “tens of thousands of Blackwell GPUs” and is expected to ramp to hundreds of thousands of GB200s, with OpenAI as one of its key customers.

Nvidia will be using the key learnings from the GB200 ramp to ensure a smooth ramp for Blackwell Ultra.

Nvidia started sampling GB300 systems with major cloud service providers this month, with production shipments expected in 2Q.

The GB300 will be a less complex transition in that it uses the same architecture, footprint, and electrical and mechanical specifications as the GB200, which should allow for a smoother ramp and help maintain the higher yields.

With regards to its annual product cadence, Nvidia continues to reiterate the same roadmap and timeline.

AI workloads shifting to inference

I think this is worth elaborating on further, given that management mentioned it multiple times on the earnings call.

Nvidia shared that it is seeing AI workloads transition “strongly” towards inference.

Where is Nvidia seeing the strong demand for inference?

Management shared that it saw a “step function leap” in token generation at OpenAI, Microsoft and Google.

I think this gives very strong hints as to where inference demand is shifting.

Nvidia shared that Microsoft processed more than 100 trillion tokens in 1Q, a 5x increase from the prior year.
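
To put that number in perspective (a rough illustration; the ~90-day quarter length is my own assumption), 100 trillion tokens in a quarter implies a very large sustained token rate, and the 5x increase implies roughly 20 trillion tokens in the same quarter a year earlier:

```python
# Scale check on Microsoft's 1Q token volume
# (the ~90-day quarter length is my assumption)
tokens_per_quarter = 100e12               # >100 trillion tokens in 1Q
seconds_per_quarter = 90 * 24 * 3600      # ~90-day quarter

tokens_per_second = tokens_per_quarter / seconds_per_quarter
prior_year_quarter = tokens_per_quarter / 5    # implied by the 5x increase

print(f"~{tokens_per_second/1e6:.0f} million tokens per second on average")          # ~13M
print(f"Implied prior-year quarter: ~{prior_year_quarter/1e12:.0f} trillion tokens")  # ~20T
```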

A 5x year-over-year increase is clearly very significant and points to exponential growth.

I think this suggests we are seeing not just strong demand for Azure OpenAI but also for AI services across Microsoft’s platform.

To meet this demand, startups serving inference workloads are now serving models using the B200.

The B200 is tripling their token generation rates, which in turn means higher revenues from serving reasoning models like DeepSeek-R1.

In addition, the GB200 NVL72, which the hyperscalers are currently ramping, delivers further performance benefits over the H200.

The GB200 NVL72 delivered up to 30x higher inference throughput compared to the Nvidia 8-GPU H200 on the Llama 3.1 benchmark.

How was the 30x performance improvement achieved?
