Google Is Exploring Ways to Use Its Financial Might to Take On Nvidia
As more AI companies consider Google's chips, the company wants to use deals with external partners to expand the potential market.

Google is exploring new ways to expand the market for its artificial-intelligence chips, seeking to use its financial might to build a broader AI ecosystem that can better compete with market leader Nvidia.
The company's chips are gaining wider adoption for AI workloads, including with startups such as Anthropic, but Google faces myriad challenges as it seeks to grow. The issues include bottlenecks at manufacturing partners and limited interest from cloud-computing rivals that are among the largest buyers of Nvidia processors, according to people familiar with the matter.
To expand its potential market, Google is increasing its financial support to a network of data-center partners that can provide computing power to a broader swath of customers, people familiar with its plans said.
The company is in talks to invest around $100 million in cloud-computing startup Fluidstack, part of a deal that values it at around $7.5 billion, people familiar with the discussions said. Fluidstack is one of a growing number of so-called neocloud companies that offer computing services to AI companies and others. CoreWeave, one of the biggest such neocloud operators, provides access to graphics processing units, or GPUs, mostly from Nvidia.

Google wants to amplify Fluidstack's growth potential and encourage more computing providers to use its AI chips, people familiar with its plans said. Google's AI chips are called tensor processing units, or TPUs.
Google has also held discussions about expanding its financial commitments to other data-center partners that could lead to additional TPU demand, people familiar with the talks said. Google has backstopped financing for projects involving Hut 8, Cipher Mining and TeraWulf, former crypto-mining companies that are now developing data centers. Cipher Mining declined to comment. Hut 8 and TeraWulf didn't respond to requests for comment.
Some managers at Google's cloud-computing division recently revived a longstanding internal debate about restructuring the TPU team into a stand-alone unit, people familiar with those discussions said. Such a plan could allow Google to expand its opportunities to invest, including with outside capital.
One challenge for any potential stand-alone unit is that Google's cloud business relies heavily on Nvidia chips, some of the people said.
A representative of Google said there are no plans to restructure the TPU unit. Keeping the chip team tightly integrated with other parts of the company has advantages, such as allowing the developers of Google's Gemini AI model to more easily make changes to the chip design.

In 2018, Google started selling access to TPUs through its cloud services. The company has traditionally signed up TPU users through its cloud-computing unit, but it is also selling the TPU chips directly to external customers, according to industry research group SemiAnalysis.
The measures represent an effort to expand the potential market for Google's chips, which AI customers have praised as effective for training some models and for inference, the process in which a trained model draws on its training to produce output such as a chatbot's answers.
In a sign of the TPU team's growing importance, Amin Vahdat, who has led the development of Google's chips and networks, was recently promoted to chief technologist for AI infrastructure, reporting directly to Chief Executive Sundar Pichai.
Last April, Google introduced its seventh-generation TPU, called Ironwood, which it said was designed for AI inference. Engineers say that, compared with GPUs, which were originally designed for gaming, TPUs are sometimes better suited for large volumes of AI computation that don't require high precision.
Alphabet has so far partnered with Broadcom for the design and production of its TPUs, and it uses Taiwan Semiconductor Manufacturing as a contract manufacturer.
Google could face hurdles in ramping up TPU shipments. TSMC may prioritize Nvidia, its largest customer, over Google as the foundrys advanced capacity is stretched thin by a surge in AI-related demand, according to people familiar with semiconductor supply chains. Google is also vulnerable to the global shortage of memory chips, an essential component for AI chips.
Over the past year, more companies that develop and operate AI have shown interest in Google's TPUs, seeking to tap more cost-effective computing power and avoid excessive dependence on Nvidia.
The Wall Street Journal reported in November that Meta Platforms was in talks about using the chip. This week, Meta deepened its ties with Nvidia with new plans to purchase tens of billions of dollars worth of chips and other hardware. In October, Anthropic said it would expand its use of Google's cloud-computing technologies, including up to one million TPU chips.

However, interest from major cloud-service providers appears to be tepid, partly because they consider Google a competitor, according to industry participants. Amazon Web Services, Amazon.com's cloud unit, has also developed its own chips for AI.