Google’s chip layout: next-generation TPU artificial intelligence chips are on the way

With the exit of manufacturers such as NVIDIA, Intel, and Texas Instruments, the mobile phone chip market has settled into a fairly fixed pattern: Qualcomm, MediaTek, and Samsung selling to the broader market, alongside Apple's in-house A series and Huawei's Kirin.

Even as the technical barriers to chip design keep rising, some ambitious manufacturers still want to scale these heights.

Recruiting talent: poaching industry heavyweights to lay the foundation for self-developed chips

In 2017, Google poached a number of heavyweights in the chip industry, including former Apple SoC architect Manu Gulati; Apple chip experts John Bruno, Wonjae Choi, and Mainak Biswas; and Vinod Chamarty and Shamik Ganguly from Qualcomm.

Google is working on building its own chips for its Pixel smartphones.

As new chip architectures emerge, Google also hopes to strengthen its self-developed chips in this field.

Google has recruited at least 16 technology veterans in Bengaluru, along with four recruiters who specialize in poaching talent from traditional chip companies such as Intel, Qualcomm, Broadcom, and Nvidia.

In March, Google also announced that the company had hired longtime Intel executive Uri Frank as vice president to run its custom chip division.

Acquisitions: accelerating the self-developed chip timeline

In 2018, Google completed its $1.1 billion acquisition of the HTC team that worked on the Pixel smartphones. Part of HTC’s mobile device division joined Google’s hardware division, and Google also acquired a non-exclusive license to some of HTC’s intellectual property.

After absorbing HTC’s Pixel team, Google further strengthened its ability to develop its own chips.

This year, Google acquired Provino Technologies, a start-up developing network-on-chip (NoC) systems for machine learning. The acquisition can feed into the development of its TPUs and thereby boost its cloud AI chip efforts.

Compared with other interconnect designs, a NoC improves the scalability and power efficiency of complex SoCs.

Judging from the products Google has released so far, however, its first breakthrough in self-developed mobile silicon was not a main application processor but a co-processor.

To the cloud: extending the TPU roadmap into cloud services

The TPU is a dedicated neural-network chip that Google introduced in 2015, built to accelerate its own TensorFlow machine learning framework. Unlike a GPU, the Google TPU is an ASIC, a chip custom-designed for a specific workload.
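As a rough sketch of how developers tap these chips through TensorFlow, the snippet below shows the usual tf.distribute.TPUStrategy setup for a Cloud TPU; the tiny model and the empty TPU address are illustrative placeholders, not anything specific to Google's own workloads.

```python
# Minimal sketch of running a TensorFlow model on a Cloud TPU via TPUStrategy.
import tensorflow as tf

# tpu="" works in a Colab TPU runtime; on other setups the TPU name or
# gRPC address would be passed instead (placeholder for illustration).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Build and compile the model inside the strategy scope so its variables
# are created and replicated across the TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# model.fit(...) would then shard each training batch across the TPU cores.
```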

Since 2015, Google has gradually built out a cloud-to-edge layout around the TPU.

In addition to the TPUs and TPU Pods behind its cloud services, Google has also launched the Edge TPU, which brings AI computing power to edge devices and enables scenarios such as predictive maintenance, fault detection, machine vision, robotics, and voice recognition.

Today, Google’s TPU has reached its fourth generation, whose average performance is 2.7 times that of the third generation.

With the strong performance of its TPU chips, Google has become a representative player in dedicated AI chips, and the new architecture it introduced has brought fresh inspiration to the latest wave of artificial intelligence.

Google also plans to apply its TPUs in the EDA field, using cloud resources for chip verification, which can greatly shorten chip development time.

Google has also gradually brought its TPU chips to the edge, launching the Google Edge TPU in 2018. Complementing Cloud TPU and Google Cloud services, it provides an end-to-end, cloud-to-edge, “hardware + software” infrastructure to assist customers in deploying AI-based solutions.
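To illustrate what that cloud-to-edge story typically looks like on a device, here is a minimal sketch of Edge TPU inference using the TensorFlow Lite runtime with the Edge TPU delegate; the model filename is a placeholder, and the snippet assumes the Edge TPU runtime library (libedgetpu) is installed on a Linux device.

```python
# Sketch of on-device inference with a TFLite model compiled for the Edge TPU.
import numpy as np
import tflite_runtime.interpreter as tflite

# "model_edgetpu.tflite" is a placeholder for a model compiled with the
# Edge TPU compiler; the delegate library name applies to Linux.
interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy tensor matching the model's expected input shape and dtype.
dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("Top class:", int(np.argmax(scores)))
```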

In the history of AI chip development, the Google TPU is a rare technological innovation in terms of both on-chip memory and programmability; it broke the GPU’s monopoly and opened up a new competitive landscape for cloud AI chips.

Open Source: First Open Source PDK Lowers Barriers to Entry

Last year, Google announced the first open-source PDK, the SkyWater PDK. Selected companies do not need to bear expensive manufacturing costs; Google provides them with a completely free chip fabrication run.

This is the first open-source process design kit of its kind, and with it, chips can be fabricated at the SkyWater fab on a 130nm node.

If the open source PDK model is successful, it will lower the threshold for companies to enter the semiconductor industry.

Next-generation TPU AI chips on the way

In terms of AI hardware, Google recently announced TPUv4 Pods, systems built on its next-generation custom tensor processing unit (TPU) AI chip.

A TPUv4 Pod computes twice as fast as the previous generation, making it the fastest system Google has deployed to date; Google also said its quantum computing effort will push toward machines on the scale of one million qubits.

The TPUv4 release optimizes the interconnect speed and architecture within the system. Reportedly, the interconnect bandwidth of a TPUv4 cluster is 10 times that of most other network technologies, and a pod can deliver exaflop-level computing power.
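As a back-of-envelope check on the exaflop claim, assuming roughly 4,096 TPUv4 chips per pod at about 275 TFLOPS each (figures reported elsewhere, not taken from this article), the arithmetic works out as follows:

```python
# Back-of-envelope estimate of TPUv4 Pod peak compute.
# Chip count and per-chip TFLOPS are assumptions for illustration only.
chips_per_pod = 4096       # assumed TPUv4 chips per pod
tflops_per_chip = 275      # assumed peak TFLOPS per chip

pod_tflops = chips_per_pod * tflops_per_chip
pod_exaflops = pod_tflops / 1e6   # 1 exaflop = 1e6 teraflops

print(f"~{pod_exaflops:.2f} exaflops peak")  # roughly 1.1 exaflops
```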

In the second half of this year, Google plans to make the chip available to developers as part of a cloud platform.

The AI chip that cannot be bypassed

Admittedly, the TPU is not an AI chip for mobile phones, and for deep learning tasks it is less flexible than a CPU, GPU, or FPGA. Even so, Google’s ambitions in the AI field have become clear.

AI applications on personal mobile devices (speech recognition, image processing, and so on) have broad prospects and market potential, so Google will naturally not turn a blind eye; upgrading Android has long been a key part of its platform strategy.

On the Pixel phones already on the market, Google has shipped Visual Core, a dedicated AI chip for image processing that computes HDR+ images 5 times faster than the application processor while consuming only 1/10 of the power.
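Taken at face value, those two figures imply a dramatic drop in energy per HDR+ image, as a quick illustrative calculation shows:

```python
# Energy-per-image comparison implied by the Visual Core figures
# (5x faster, 1/10 the power), taking both numbers at face value.
speedup = 5.0        # Visual Core computes HDR+ 5x faster than the AP
power_ratio = 0.1    # ...while drawing 1/10 of the power

# Energy = power * time, so the relative energy per image is:
relative_energy = power_ratio * (1.0 / speedup)
print(f"Energy per HDR+ image: {relative_energy:.0%} of the AP")  # 2%, i.e. ~50x less
```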

Visual Core also handles complex camera-related imaging and machine learning tasks, including automatic scene-based image adjustment, among other uses.

Now, a new in-house chip is in development and will debut in the Pixel 6 smartphone and another device coming later this year.

The 5-nanometer process chip, code-named Whitechapel, will power the next-generation Pixel phones. It’s known internally as the GS101 – the Google Silicon chip.

Reports suggest this advanced chip will pair a tri-cluster CPU setup with a TPU, bringing enhanced machine learning capabilities to smartphones so that modern AI applications deliver a better experience.

The Whitechapel chip will feature a custom neural processing unit and image signal processor. Its AI and machine learning hardware may be used not only to improve the camera but to raise the overall performance bar of the Pixel 6 and Pixel 6 Pro.

From the cloud to edge devices and mobile smart terminals, Google’s AI chip layout keeps widening, and its plans in the chip field look increasingly ambitious.

Some references: Semiconductor Industry Observation, “Google’s Chip Layout”; Sina Technology, “Google’s Chip Layout: Not Only Mobile Phones, from Edge to Cloud”; Sanyi Life, “One Chip Used for Three Generations: Where Does Google’s Chip Layout Get Its Confidence?”.
