Google has finally started letting other companies rent its artificial-intelligence chips in the cloud.
The company announced in a blog post on Monday that tensor processing units, or TPUs, are available in “limited quantities” to a select set of customers looking to run machine learning models on the Google Cloud Platform.
The Cloud TPUs, which Google first announced last year, provide customers with specialized circuits designed solely to accelerate AI computation. Google tested 64 of them training ResNet-50 (a neural network for identifying images that also serves as a benchmark for AI training speed) in just 30 minutes, VentureBeat reported.
This new hardware could help attract customers to Google’s cloud platform with the promise of faster machine learning computation and execution. Accelerating the training of new AI systems can be a significant help since data scientists can then use the results of those experiments to make improvements for future model iterations.
Google’s move highlights several sweeping changes in the way modern technology is built and operated. Google is in the vanguard of a movement to design chips specifically for artificial intelligence, a worldwide push that includes dozens of start-ups as well as familiar names like Intel, Qualcomm and Nvidia.
And these days, companies like Google, Amazon and Microsoft are not just big internet companies. They are big hardware makers.
In addition to its TPU chips, which sit inside its data centers, the company has designed an AI chip for its smartphones, The New York Times reported.
Right now, Google’s new service is focused on teaching computers to recognize objects, a field called computer vision. But over time, the new chips will also help businesses build a wider range of services, said Zak Stone, who works alongside the small team of Google engineers that designs these chips.