HBM2E and GDDR6 help a new wave of artificial intelligence applications

With the rapid rise of artificial intelligence and machine learning (AI/ML), intelligent technologies are being widely deployed across fields such as manufacturing, transportation, healthcare, education, and finance. Artificial intelligence is poised to set off the next industrial revolution.

As one of the world’s fastest-growing countries in artificial intelligence, China is attracting attention. According to the latest forecast from Deloitte, the global artificial intelligence market will reach 680 billion yuan in 2020, growing at a compound annual growth rate (CAGR) of 26%. China’s AI market is particularly strong: it is expected to reach 71 billion yuan by 2020, with a CAGR of 44.5% over the five years from 2015 to 2020.

In recent years, China has been actively promoting the integration of artificial intelligence with the real economy to drive industrial optimization and upgrading. In July 2017, the State Council issued the “New Generation Artificial Intelligence Development Plan”. Together with “Made in China 2025”, released in May 2015, this plan forms the core of China’s artificial intelligence strategy. The landmark plan lays out a strategic roadmap for AI development and aims to build China into a leading global AI innovation center by 2030. In addition, 2020 marks the first year of China’s “new infrastructure” initiative, and artificial intelligence, as a major sector, is set to become a core pillar of that initiative.

Against this backdrop, China’s artificial intelligence industry reached 51 billion yuan by the end of 2019, with more than 2,600 AI companies. As China accelerates the application of AI to drive economic growth, this trend will spur rapid development of computer hardware and software across the board.

In its latest white paper, Rambus discusses the important role of memory bandwidth in AI/ML, focusing on the advantages and design considerations of HBM2E and GDDR6 memory. The white paper also explains where each type of memory fits in the overall AI/ML architecture, and how Rambus HBM2E and GDDR6 interface solutions can be used to implement a complete memory subsystem. Some key points from the white paper follow:

AI/ML has entered a period of rapid development

Training and inference are the key workloads of AI/ML, and the growth of these capabilities reflects, to some extent, the rapid development of artificial intelligence as a whole. From 2012 to 2019, AI training sets grew 300,000-fold. At the same time, AI inference is being adopted at the network edge and in a wide range of IoT devices, including automobiles and ADAS.

Supporting this pace of development requires far more than what Moore’s Law can deliver, and Moore’s Law is slowing in any case. This demands continuous, rapid improvement across every aspect of AI computing hardware and software.

Memory bandwidth is a key factor affecting the development of AI

Memory bandwidth will be one of the key focus areas for the continued growth of artificial intelligence. Take advanced driver-assistance systems (ADAS) as an example: the complex data processing of Level 3 and higher systems requires more than 200 GB/s of memory bandwidth. Such high bandwidth is a baseline requirement for the complex AI/ML algorithms that must perform vast numbers of calculations quickly and execute real-time decisions safely on the road. At Level 5, full autonomy, the vehicle must respond independently to dynamic traffic signs and signals and accurately predict the movement of cars, trucks, bicycles, and pedestrians, which will require more than 500 GB/s of memory bandwidth.
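To put the ADAS figures in perspective, the sketch below estimates how many discrete memory devices it takes to reach those bandwidth targets. It assumes a typical GDDR6 device running at 16 Gb/s per pin over a 32-bit interface (64 GB/s peak per device); these device parameters are common published figures, not taken from this article, and real systems vary.

```python
import math

# Assumed per-device GDDR6 peak bandwidth:
# 16 Gb/s per pin * 32-bit bus / 8 bits-per-byte = 64 GB/s
DEVICE_GBS = 16 * 32 / 8

def devices_needed(target_gbs: float) -> int:
    """Minimum number of GDDR6 devices to meet a bandwidth target in GB/s."""
    return math.ceil(target_gbs / DEVICE_GBS)

print(devices_needed(200))   # Level 3+ ADAS target (>200 GB/s)
print(devices_needed(500))   # Level 5 full-autonomy target (>500 GB/s)
```

Under these assumptions, a Level 3 system needs at least four such devices, and a Level 5 system at least eight, before accounting for real-world efficiency losses.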

With the rapid development of a new generation of AI/ML accelerators and dedicated chips, new memory solutions such as high-bandwidth memory (HBM, HBM2, HBM2E) and GDDR6 SDRAM (GDDR6) are gradually being adopted to provide the required bandwidth.
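The bandwidth gap between these two memory types comes down to interface width and per-pin data rate. The sketch below compares peak per-device bandwidth using typical published figures (3.6 Gb/s per pin over a 1024-bit interface for an HBM2E stack, 16 Gb/s per pin over a 32-bit interface for a GDDR6 device); these parameters are assumptions for illustration, not specifications from this article.

```python
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = per-pin data rate (Gb/s) * bus width (bits) / 8."""
    return data_rate_gbps * bus_width_bits / 8

# One HBM2E stack: very wide (1024-bit) but moderate per-pin speed
hbm2e = peak_bandwidth_gbs(3.6, 1024)

# One GDDR6 device: narrow (32-bit) but fast per-pin signaling
gddr6 = peak_bandwidth_gbs(16.0, 32)

print(f"HBM2E stack:  {hbm2e:.1f} GB/s")   # 460.8 GB/s
print(f"GDDR6 device: {gddr6:.1f} GB/s")   # 64.0 GB/s
```

The wide, slow-per-pin HBM2E interface favors density and energy efficiency in the data center, while the narrow, fast GDDR6 interface uses standard packaging and PCB routing, which is part of its cost advantage.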

HBM2E and GDDR6 help a new wave of artificial intelligence applications

Because AI/ML workloads divide into two distinct tasks, the choice of memory depends on the application: training or inference. Both HBM2E and GDDR6 high-bandwidth memories can play a vital role here.

For training, bandwidth and capacity are the critical requirements, especially given that training-set sizes are doubling every 3.43 months. In addition, training applications running in the data center are increasingly constrained by power and space, so a solution that offers better energy efficiency and a smaller footprint is a significant advantage. Considering all of these requirements, HBM2E is an ideal memory solution for AI training hardware.

For inference, bandwidth and latency are critical to real-time operation. In the increasingly demanding field of AI inference, GDDR6 is an ideal solution: built on mature manufacturing processes, its excellent price/performance makes it suitable for widespread adoption.

Rambus provides comprehensive, ready-made HBM2E and GDDR6 memory interface solutions that can be integrated into AI/ML training and inference SoCs. Recently, the company’s HBM2E memory interface solution achieved a record 4 Gbps performance. The solution consists of a fully integrated PHY and controller, paired with the industry’s fastest HBM2E DRAM from SK hynix running at 3.6 Gbps, and can deliver 460 GB/s of bandwidth from a single HBM2E device. This performance meets terabyte-per-second-scale bandwidth requirements and is designed for the most demanding, advanced AI/ML training and high-performance computing (HPC) applications.
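The 460 GB/s figure above follows from the HBM2E interface geometry, and scaling it shows how a terabyte-per-second memory system is assembled from multiple stacks. The sketch below is illustrative arithmetic only: it assumes the 3.6 Gb/s per-pin rate and 1024-bit stack interface mentioned above, and ignores real-world derating.

```python
import math

# One HBM2E stack at 3.6 Gb/s per pin over a 1024-bit interface:
# 3.6 * 1024 / 8 = 460.8 GB/s peak (the ~460 GB/s cited above)
STACK_GBS = 3.6 * 1024 / 8

def stacks_for(target_gbs: float) -> int:
    """Minimum number of HBM2E stacks to reach a bandwidth target in GB/s."""
    return math.ceil(target_gbs / STACK_GBS)

print(f"Per stack: {STACK_GBS:.1f} GB/s")
print(f"Stacks for 1 TB/s: {stacks_for(1000)}")
```

By this estimate, three stacks already exceed 1 TB/s of peak bandwidth, which is why accelerators targeting TB/s-scale memory systems typically integrate several HBM2E stacks alongside the SoC.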

In general, training and inference have distinct application requirements that can be served by tailored memory solutions: HBM2E is the ideal choice for AI training, and GDDR6 is the ideal choice for AI inference. By working with Rambus, designers can overcome the inherent design challenges of these architectures and realize the benefits of these high-performance memories.

