AI lacks processing power; for IBM, the answer lies in the chips

Close-up of an IBM artificial intelligence chip. Photo: IBM

The hype suggests that artificial intelligence (AI) is already everywhere, but the technology behind it is in fact still in development. Many AI applications do not run on AI-specific chips; instead, they rely on general-purpose processors and GPUs built for gaming. This gap has led to a flood of investment from tech giants like IBM, Intel and Google, as well as startups and venture capitalists, into new chips designed specifically for AI workloads.

As the technology improves, business investment is bound to follow. According to Gartner, AI chip revenue was over $34 billion in 2021 and is expected to reach $86 billion by 2026.

IBM Research, for its part, just unveiled the Artificial Intelligence Unit (AIU), a prototype chip dedicated to AI.

“We’re running out of computing power. AI models are growing exponentially, but the hardware needed to train these giants and run them on servers in the cloud or on edge devices like smartphones, laptops and sensors isn’t evolving as fast,” IBM said.

Running deep learning models

The AIU is the first system-on-chip (SoC) from the IBM Research AI Hardware Center designed specifically to run enterprise deep learning models.

IBM points out that the CPU, the “workhorse of traditional computing,” was developed before the advent of deep learning. While CPUs are good for general-purpose applications, they are less suited to training and running deep learning models, which require massively parallel AI operations.

“We have no doubt that AI will be a fundamental driver of computing solutions for a very long time to come,” Jeff Burns, director of AI Compute for IBM Research, told ZDNET. “It will be embedded in the IT landscape, in these complex enterprise IT infrastructures and solutions in a very broad and comprehensive way.”

According to Jeff Burns, it makes more sense for IBM to create end-to-end solutions that are effectively generic, “so that we can integrate these capabilities across different computing platforms and support a very, very wide range of AI business requirements.”

Saving resources

The AIU is an application-specific integrated circuit (ASIC), but it can be programmed to perform any type of deep learning task. The chip has 32 processing cores built with 5 nm technology and contains 23 billion transistors. Its layout is simpler than a CPU’s, designed to send data directly from one compute engine to the next, which makes it more power efficient. It is meant to be as easy to use as a graphics card and can be plugged into any computer or server with a PCIe slot.

To save energy and resources, the AIU uses approximate computing, a technique developed by IBM to trade calculation precision for efficiency. Traditionally, calculations have relied on 64-bit and 32-bit floating point arithmetic, which provides a level of precision useful in finance, scientific computing, and other applications where fine granularity matters. For the vast majority of AI applications, however, that level of precision is simply not needed.
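The precision-versus-storage tradeoff can be made concrete with a minimal NumPy sketch. The formats below (float64, float32, float16) are standard IEEE types chosen for illustration; they are not IBM's specific AIU formats.

```python
import numpy as np

# The same value stored at three floating-point precisions.
x = np.float64(1.0) / np.float64(3.0)

for dtype in (np.float64, np.float32, np.float16):
    v = dtype(x)
    # Bytes per element, and rounding error relative to the float64 value.
    err = abs(float(v) - float(x))
    print(f"{dtype.__name__}: {np.dtype(dtype).itemsize} bytes, error ~ {err:.2e}")
```

Each halving of the bit width halves the storage (and, on suitable hardware, roughly doubles throughput) at the cost of a larger rounding error.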

“If you’re thinking about a self-driving car keeping to its lane, there’s no single exact position in the lane where the car has to be,” says Jeff Burns. “There are many acceptable positions in the lane.”

Neural networks are fundamentally approximate: they produce a result together with a probability. For example, a computer vision model might tell you with 98% certainty that you are looking at a photo of a cat. Even so, neural networks were initially trained using high-precision arithmetic, which consumed a great deal of energy and time.
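The cat-photo example reflects how classifiers typically report confidence: raw scores are passed through a softmax to yield probabilities. The scores below are invented for illustration, not the output of any real model.

```python
import numpy as np

def softmax(scores):
    # Subtract the max for numerical stability, then normalize to probabilities.
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

# Hypothetical raw scores (logits) from a vision model for three classes.
logits = np.array([5.1, 1.2, 0.3])  # cat, dog, bird
probs = softmax(logits)
print(dict(zip(["cat", "dog", "bird"], probs.round(3))))
```

The probabilities always sum to 1; the network's "answer" is simply the class with the largest share.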


The AIU’s approximate computing approach lets it drop from 32-bit floating point arithmetic down to bit formats containing a quarter as much information.
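One common way to get from 32-bit floats to a format with a quarter as many bits is 8-bit integer quantization. The symmetric scheme sketched below is a generic textbook approach, not IBM's actual AIU format.

```python
import numpy as np

def quantize_int8(w):
    # Map float32 values onto signed 8-bit integers with a single shared scale.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float32 values from the 8-bit representation.
    return q.astype(np.float32) * scale

w = np.random.randn(1000).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(w.nbytes, q.nbytes, np.abs(w - w_hat).max())  # 4x less memory
```

Storage drops by 4x, while the reconstruction error stays bounded by the quantization step, which is tolerable for most neural-network inference.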

To make the chip truly versatile, IBM went beyond hardware innovation. IBM Research has focused on foundation models, with a team of 400–500 people working on them. Unlike AI models designed for a single task, foundation models are trained on a large corpus of raw data, creating a resource that resembles a giant database. When a model is then needed for a specific task, the foundation model can be retrained with a relatively small amount of labeled data.
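The retraining step can be sketched as: freeze a pretrained feature extractor, then fit a small task-specific head on a handful of labeled examples. Everything below (the random projection standing in for a foundation model, the tiny logistic head) is a toy stand-in for illustration, not IBM's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen foundation model: maps raw inputs to learned features.
W_frozen = rng.standard_normal((16, 8))
def features(x):
    return np.tanh(x @ W_frozen)

# A relatively small labeled set is enough to fit the task-specific head.
X = rng.standard_normal((40, 16))
y = (X[:, 0] > 0).astype(float)

w, b = np.zeros(8), 0.0
F = features(X)                       # features are computed once and frozen
for _ in range(500):                  # plain logistic-regression updates
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 0.5 * F.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

acc = ((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Only the small head (`w`, `b`) is trained; the expensive pretraining is amortized across every downstream task.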

Through this approach, IBM intends to address different verticals and different AI use cases. The company is building foundation models for several areas, including chemistry and time series data. Time series data, which simply refers to data collected at regular intervals, is essential for industrial companies that need to monitor how their equipment is running. By building foundation models for a few key areas, IBM can develop more specific vertical offerings. The team also ensured that the AIU software is fully compatible with the software stack of IBM subsidiary Red Hat.

