
Nvidia’s Blackwell GPU may reach Indian shores as early as October


Data centre services firm Yotta has an existing deal with Nvidia to source over 16,000 GPUs, or graphics processing units, spread over at least two fiscal years.

“We are in early talks with Nvidia to source Blackwell GPUs as part of our order, and are in the process of finalizing all details,” said Sunil Gupta, cofounder and chief executive of Yotta, in an interview with Mint. 

“We’re likely looking at procuring around 1,000 Blackwell B200 GPUs by this October, which would be equivalent to around 4,000 ‘H100’ GPUs. While a timeline isn’t yet clear, we’re expecting to take delivery of Blackwell GPUs before the end of this year, and complete our full existing order by the next fiscal,” Gupta said.

The entire order between Yotta and Nvidia, Gupta said, is worth close to $1 billion.

The Blackwell GPU, announced by Nvidia chief executive Jensen Huang at the company’s GTC 2024 conference, is in line with what analysts expected of Silicon Valley’s fastest-growing tech company.

Huang claimed it offers four times faster training for AI models with more than 1 trillion parameters, and 30 times faster “inferencing”, or a model’s ability to ingest a query, process it, and deliver an end result.

In the past four quarters, Nvidia overtook energy behemoth Saudi Aramco to become the world’s third most valuable company. At the time of publishing, Nvidia (valued at around $2.1 trillion) trailed only Microsoft ($3.1 trillion) and Apple ($2.8 trillion), as per a Reuters report.

Three Big Tech chief executives confirmed at launch that Nvidia’s latest Blackwell GPU will feature in their services. Sundar Pichai, chief executive of Google and Alphabet, said Blackwell-based data centre and cloud infrastructure will come to Google Cloud customers and DeepMind researchers.

Amazon’s Andy Jassy and Microsoft’s Satya Nadella said the new GPU will feature across Amazon Web Services (AWS) and Microsoft Azure cloud servers, while Meta’s Mark Zuckerberg affirmed that the company’s Llama AI models will be trained on it.

The new GPU can deliver “real-time” inferencing in foundational AI models with up to 10 trillion parameters, Huang said. For reference, OpenAI’s latest GPT model is estimated to have 1.8 trillion parameters. The GPU also uses a second-generation ‘Transformer Engine’, which can either double inferencing speed or handle AI models twice as large.

Cost, however, could be a sticking point. Senior executives at two cloud platform providers said the new GPU could be anywhere between 40% and 60% more expensive than the H100, Nvidia’s current workhorse GPU.

“One H100 GPU today sells for around $25,000; Blackwell could be around $35,000-40,000. That makes the new chips more expensive per unit, but a data centre facility may also need fewer of them, depending on demand,” one of the executives said. Both requested anonymity, since their companies have existing relationships with Nvidia and they were not authorized to speak on Blackwell’s pricing yet.
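The quoted figures are internally consistent with that 40-60% estimate. A minimal back-of-envelope check in Python (a sketch using only the dollar figures quoted above; the variable names are illustrative):

# Sanity check: does a $35,000-40,000 Blackwell price against a $25,000 H100
# match the cited 40-60% premium? All figures are as quoted in this article.
h100_price = 25_000
for b200_price in (35_000, 40_000):
    premium = (b200_price / h100_price - 1) * 100
    print(f"${b200_price:,} is a {premium:.0f}% premium over the H100")
# Prints: $35,000 is a 40% premium; $40,000 is a 60% premium.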

Yotta’s Gupta said that the new GPU “may let data centres improve real estate efficiency, and rope in more customers that are waiting for GPU supply to materialize.”

“This can help us increase revenue by providing a wider range of managed cloud services to paying customers. Significantly lower power consumption will further improve efficiency,” Gupta added.

Analysts, too, viewed Nvidia’s launch positively, underlining that the company’s soaring valuations may not be short-lived. “You do require GPUs for training generative AI workloads. The primary demand right now is for ‘training’ workloads, not inferencing,” said Anushree Verma, director analyst at Gartner. “For training, you do require accelerated computing and would typically scale by running multiple instances of the workloads on different servers.”

The new Blackwell processors will ship in the GB200, a ‘superchip’ that pairs two Blackwell GPUs with a Grace CPU. For reference, Huang said that training a 1.8-trillion-parameter AI model on Hopper, Nvidia’s previous-generation GPU architecture, would have taken 8,000 GPUs drawing 15 megawatts (MW) of power over 90 days. Blackwell, he said, will need one-fourth the number of chips and roughly one-fourth the power to train the same model.
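That claim is easy to translate into numbers. A minimal sketch of the arithmetic, assuming the literal one-fourth reductions applied to the 8,000-GPU, 15MW Hopper baseline cited above:

# Back-of-envelope arithmetic for Huang's Hopper-vs-Blackwell training claim,
# applying the stated one-fourth reductions to the figures cited above
# (1.8-trillion-parameter model, 90-day training run).
hopper_gpus, hopper_power_mw = 8_000, 15.0
blackwell_gpus = hopper_gpus // 4         # "one-fourth the number of chips"
blackwell_power_mw = hopper_power_mw / 4  # "roughly one-fourth the power"

run_hours = 90 * 24  # 90-day training run
energy_saved_mwh = (hopper_power_mw - blackwell_power_mw) * run_hours
print(f"Blackwell GPUs needed: {blackwell_gpus:,}")        # 2,000
print(f"Blackwell power draw: {blackwell_power_mw} MW")    # 3.75 MW
print(f"Energy saved over the run: {energy_saved_mwh:,.0f} MWh")  # 24,300 MWh

That works out to 2,000 GPUs drawing just under 4MW for the same 90-day run.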

Nvidia also announced a number of strategic initiatives, including ‘BioNeMo’, a toolkit of foundational AI models for biological and pharmaceutical research. Its ‘Omniverse’ platform of high-precision virtual simulations will offer on-cloud digital twins as a service, showcased through automotive and robotics partnerships with the likes of Mercedes-Benz and Cadence Design Systems.

Nvidia is also working on a domain-specific large language model to accelerate chip design, Huang added.

The author is in San Jose to attend GTC 2024, on the invitation of Nvidia.


Published: 19 Mar 2024, 04:29 PM IST
