At its I/O 2017 developer conference, Google unveiled the second-generation Tensor Processing Unit, which delivers a substantial increase in computing power over its predecessor.
Second-Generation Tensor Processing Units Will Accelerate Machine Learning Across Services Ranging from Google Translate to Google Photos and More
Though machine learning is normally carried out on GPUs made by NVIDIA, Google has decided to build some of its own hardware and optimize it to work well with its software.
“Research and engineering teams at Google and elsewhere have made great progress scaling machine learning training using readily-available hardware. However, this wasn’t enough to meet our machine learning needs, so we designed an entirely new machine learning system to eliminate bottlenecks and maximize overall performance. At the heart of this system is the second-generation TPU we’re announcing today, which can both train and run machine learning models.”
The company claims that the second version of its TPU system is fully operational and is being deployed across Google Compute Engine. Google has also shared some additional facts about its Tensor Processing Units, which are detailed below.
These newer processing units are capable of both inference and training, letting researchers deploy more versatile AI experiments at a faster rate than before, as long as the software is built using TensorFlow.
Google has not released power consumption figures for its Tensor Processing Units, but we expect them to be more power-efficient than NVIDIA’s graphics processors. What impression do you have regarding these chips? Tell us your thoughts down in the comments.