
Nvidia unveils Grace, its first ARM processor, built for artificial intelligence

It could pass for a declaration of war: Nvidia, king of the GPU and in the running to buy ARM, has just announced a processor. Enough to threaten Intel and AMD? No… well, not right away.

“It is a processor designed for a very specific use,” explained Paresh Kharya, senior director at Nvidia, “not a competitor to x86 processors.” The company intends to keep working with AMD and Intel, placing its chips alongside their x86 processors.

A chip to address a growing problem

So what is the ambition of this processor, named Grace in homage to Grace Hopper, the pioneering computer scientist who wrote the first compiler in the early 1950s and whose work laid the groundwork for COBOL?

Grace is Nvidia’s first data-center processor, designed to speed up artificial-intelligence workloads. The Santa Clara company claims it offers “ten times the performance of today’s fastest servers in this field.”

The chip therefore aims to improve the execution of natural-language algorithms, recommendation systems and AI supercomputers: machines that must handle enormous data sets, demanding both a great deal of computing power and a very large amount of memory.


At the heart of Grace, Nvidia has placed an ARM architecture, a choice that was obviously made well before its bid to acquire the British company. ARM CPU cores are paired with a low-power memory subsystem to strike the best balance between power consumption and performance, Nvidia says.

This approach responds to a real challenge facing AI today: running ever larger models. “Where a language model like ELMo had around 94 million parameters in 2018, GPT-3, the current star of the field, totals more than 175… billion,” Paresh Kharya pointed out.

The Nvidia representative explained to us, during an early presentation of the processor, that the number of parameters is doubling every two and a half months. “AI models with trillions of parameters are therefore very close,” he concluded.
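To make that growth rate concrete, here is a minimal back-of-the-envelope sketch in Python. The 2.5-month doubling period and the GPT-3 parameter count are the figures quoted above; the projection horizons are purely illustrative assumptions.

```python
# Back-of-the-envelope projection of model size, assuming the 2.5-month
# doubling period quoted by Nvidia holds (an illustration, not a forecast).

GPT3_PARAMS = 175e9        # GPT-3 parameter count cited in the article
DOUBLING_MONTHS = 2.5      # doubling period quoted by Paresh Kharya

def projected_params(months_from_now: float) -> float:
    """Parameter count after a given number of months at the quoted growth rate."""
    return GPT3_PARAMS * 2 ** (months_from_now / DOUBLING_MONTHS)

if __name__ == "__main__":
    for months in (0, 5, 7.5, 12):
        print(f"{months:>5} months: {projected_params(months) / 1e12:.2f} trillion parameters")
```

At that pace, the trillion-parameter mark is crossed in well under a year, which is the point Nvidia is making.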

Rethinking everything to get around a bottleneck

Nvidia is, unsurprisingly, the biggest champion of the GPU’s central role in artificial intelligence, a world in which, until now, powerful Intel or AMD processors have coexisted with Nvidia GPUs.
With current computing architectures, however, Nvidia’s researchers found a bottleneck (at 64 GB/s, no less) when the graphics chips that handle the parallel computations access the processor’s memory.

Nvidia has therefore decided to reshuffle the deck using several of its own technologies. First, its engineers adapted NVLink to deliver 900 GB/s of bandwidth between CPU and GPU, a fourteen-fold increase over what has been available until now.
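As a rough illustration of why that fourteen-fold jump matters, the sketch below estimates how long it would take just to stream a GPT-3-sized set of weights between CPU memory and GPU at the two bandwidths. The 175-billion-parameter figure comes from the article; the 2-bytes-per-parameter (FP16) storage format is our own assumption.

```python
# Rough estimate of the time needed to move a GPT-3-sized model between
# CPU memory and GPU, at the old 64 GB/s bottleneck versus NVLink's 900 GB/s.
# FP16 weights (2 bytes per parameter) are assumed for illustration only.

PARAMS = 175e9                 # GPT-3 parameter count cited in the article
BYTES_PER_PARAM = 2            # assumption: FP16 weights
model_bytes = PARAMS * BYTES_PER_PARAM

for label, bandwidth_gbs in (("Current 64 GB/s bottleneck", 64), ("NVLink on Grace", 900)):
    seconds = model_bytes / (bandwidth_gbs * 1e9)
    print(f"{label}: {seconds:.1f} s per full transfer")
```

Under these assumptions, a full transfer drops from several seconds to a fraction of a second, which matters when the model has to be shuttled around constantly during training.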

Next, Nvidia raised memory bandwidth (LPDDR5 with ECC) to 500 GB/s. “Twice the bandwidth, for ten times less power consumption than DDR4,” promises Paresh Kharya.

The result is a new architecture that provides unified cache coherence across a single memory address space, combining system memory with the HBM memory of the graphics chips.

Finally, where an Nvidia GPU used to be paired with an x86 chip, we will now find, in place of an Intel or AMD processor, a Neoverse chip, ARM’s new generation of CPU designs developed for servers. Nvidia is therefore not at war with Intel or AMD, but it will do without them when it can…

This new Neoverse generation should not arrive before 2023, which is also when Nvidia’s first AI processor is slated to become available.

Grace in Switzerland and New Mexico

Grace has already found two takers. It will be at the heart of a new artificial-intelligence supercomputer, called ALPS, ordered by CSCS, the Swiss National Supercomputing Centre. The machine will deliver 20 exaflops of AI performance, allowing it to train GPT-3-class models in two days where it currently takes several months.
To achieve such performance, ALPS will rely not only on Grace but also on “an Nvidia GPU that has not yet been announced,” Paresh Kharya told us.

The other early adopter of Grace is Los Alamos National Laboratory in New Mexico, United States. This multidisciplinary research center is best known for having been the cradle of the Manhattan Project during the Second World War.

Grace is indeed cut out for AI and HPC (high-performance computing) workloads. It will therefore be put to work across many fields, such as economics and biology, but also… quantum computing.

Thanks to its power and its innovative memory management, Grace should help accelerate an essential part of this future of computing that Nvidia is tackling right now: “the simulation of quantum circuits,” according to Paresh Kharya.

A new tool developed by Nvidia, called cuQuantum, should make it easier to use Nvidia GPUs to simulate the behavior of quantum components, and thus help produce more efficient quantum machines.
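The article does not detail cuQuantum’s API, but a tiny state-vector sketch in plain NumPy (purely illustrative, not cuQuantum code) shows why simulating quantum circuits is so memory- and bandwidth-hungry: the state doubles with every added qubit, exactly the kind of workload Grace’s memory subsystem targets.

```python
# Minimal state-vector simulation of a quantum circuit in plain NumPy,
# to illustrate the memory growth that GPU tools like cuQuantum tackle.
# This is NOT cuQuantum's API, just a hand-rolled sketch.

import numpy as np

# Hadamard gate, stored in the same precision as the state vector
H = (np.array([[1, 1], [1, -1]]) / np.sqrt(2)).astype(np.complex64)

def apply_single_qubit_gate(state: np.ndarray, gate: np.ndarray, qubit: int, n: int) -> np.ndarray:
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    state = state.reshape([2] * n)
    state = np.moveaxis(state, qubit, 0)
    state = np.tensordot(gate, state, axes=([1], [0]))
    state = np.moveaxis(state, 0, qubit)
    return state.reshape(-1)

n = 20                                  # 20 qubits -> 2**20 amplitudes
state = np.zeros(2 ** n, dtype=np.complex64)
state[0] = 1.0                          # start in |00...0>
for q in range(n):                      # put every qubit in superposition
    state = apply_single_qubit_gate(state, H, q, n)

# Each extra qubit doubles the memory footprint of the state vector.
print(f"{n} qubits: {state.nbytes / 1e6:.1f} MB of amplitudes")
```

Twenty qubits already need millions of complex amplitudes; every additional qubit doubles that, which is why large, fast, coherently shared memory is the limiting factor for this kind of simulation.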

Far from our graphics cards (hit hard by the shortage) and our gaming PCs, Nvidia continues to design the future with its chips, drawing on ARM’s IP portfolio as it goes. That process began before the acquisition bid, even if each new announcement from the Santa Clara company seems to validate this strategic merger.
