Nvidia's flagship chip that powers AI systems like ChatGPT is 'nearly impossible' to buy
Nvidia chips are in high demand due to their AI capabilities. The H100 processor is especially sought after, but supply can't keep up with demand.

Businesses are finding it difficult to purchase Nvidia Corp.'s flagship chips, which power the artificial intelligence applications that have captivated Silicon Valley.
In recent months, demand for Nvidia chips based on its Hopper design has skyrocketed. According to people familiar with AI hardware availability, the growing AI frenzy has left some companies unable to obtain H100 chips, which are among the most advanced processors for developing AI applications.
The shortage has some businesses looking at alternatives to Nvidia processors. Sources said some are considering rival chips, which may be less efficient, or developing AI software that reduces the amount of computing power needed.
Nvidia officials in Santa Clara declined to comment.
AI applications like OpenAI LLC's ChatGPT demand a huge amount of computing power, which H100-powered systems can provide. ChatGPT and similar software will require even more computing power in the future.
Fiaz Mohammed, Crusoe's vice president, said that the models were getting bigger and bigger, driving up demand for hardware.
This massive processing demand will likely push startups and other companies to find innovative ways to avoid adding raw computing power to AI applications. Mohammed said the enormous computing requirements of AI have already forced companies to be creative and will continue to do so.
He said that "it becomes a force for other types" of innovation.
There are alternatives to Nvidia chips. Advanced Micro Devices Inc., for example, makes similar products, but these lack the extensive ecosystem built around Nvidia's AI chips and systems. Nvidia's investment in AI tools powered by its chips has made building software on its hardware easier than on alternatives. That software ecosystem, as much as the processors themselves, is what makes Nvidia the leader in AI hardware.
Supplies are likely to remain tight
The H100 chip will be hard to come by for several months. The latest chips, including the H100, take 90 days or more to produce. On top of that comes the time needed to ship the chips from Taiwan Semiconductor Manufacturing Co. factories to the facilities where they are installed on boards and into systems in data centers and server rooms.
That means there is no way Nvidia can catch up with demand quickly.
It takes three months from the time someone places a brand-new order until Nvidia wafers are available, said David Eagan, an analyst at Columbia Threadneedle Investments.
Nvidia, under Jensen Huang, slowed production of its AI chips last year amid a general decline in the chip market. Then OpenAI launched ChatGPT late last year, forcing Nvidia to ramp production back up quickly in the first quarter of this year. It is expected to deliver significantly more chips during the current quarter.
The shortage of GPUs for AI applications is a major problem for cloud computing giants such as Google Cloud Platform and Amazon Web Services.
Some AI software can be built and run on older GPUs from Nvidia or other manufacturers. Nvidia's A100 chip, or the V100, an older design from two generations earlier, can handle some applications in the medical industry or tasks such as detecting spam emails.
Only H100-based systems can handle the largest and most complex applications, including ChatGPT and its rivals.