IBM Research has developed a new chip that could revolutionize the way artificial intelligence (AI) is processed.
The chip, which is still in the research phase, uses analog in-memory computing (AIMC) to perform matrix-vector multiplications directly within the chip's memory, eliminating the need to shuttle data back and forth to a separate processing unit. Because data movement dominates the energy cost of conventional AI accelerators, this approach can significantly reduce power consumption, making such chips better suited to mobile and embedded devices. In tests, IBM's chip achieved near software-equivalent inference accuracy on the CIFAR-10 image dataset while consuming only 1.51 microjoules of energy per input.
The chip is made using a 14 nm process and has a maximum matrix-vector multiplication clock frequency of 1 GHz. It contains 64 cores, each of which can perform the computations associated with one layer of a deep neural network (DNN) model. The chip also includes eight global digital processing units (GDPUs) that provide additional digital post-processing capabilities.
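The in-memory matrix-vector multiply described above can be sketched in software. The snippet below is an illustrative simulation only, not IBM's device model: it stores a layer's weights as if they were analog conductances, perturbs each read with a small amount of multiplicative noise (a stand-in for device non-idealities), and compares the result against the exact digital product. The matrix size and noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_mvm(W, x, noise_std=0.02):
    """Simulate an analog in-memory matrix-vector multiplication.

    Weights are treated as stored device conductances; each read is
    perturbed by multiplicative noise to mimic analog non-idealities.
    Illustrative sketch only -- not IBM's actual device model.
    """
    noisy_W = W * (1.0 + rng.normal(0.0, noise_std, W.shape))
    # A crossbar array computes this product in one analog step,
    # without moving the weights to a separate processing unit.
    return noisy_W @ x

# Hypothetical layer: 256x256 weights, one 256-element input vector.
W = rng.standard_normal((256, 256)) / 16.0
x = rng.standard_normal(256)

exact = W @ x            # reference digital result
approx = analog_mvm(W, x)
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error from simulated analog noise: {rel_err:.3%}")
```

The small relative error is why "near software-equivalent" accuracy is achievable: DNN inference tolerates modest per-multiply noise, which is what makes the analog trade-off attractive.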
This is an improvement over traditional AI accelerators, which can draw hundreds of milliwatts or even watts of power during inference. The lower energy consumption of IBM's chip could make it possible to deploy AI on a wider range of devices, from smartphones to self-driving cars.
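To put the reported 1.51 µJ-per-input figure in perspective, a back-of-the-envelope calculation shows how many inferences a small battery could sustain. The battery capacity below is a hypothetical example, not a number from the article.

```python
# Energy budget using the article's reported 1.51 microjoules per input.
ENERGY_PER_INPUT_J = 1.51e-6   # reported energy per CIFAR-10 input

BATTERY_WH = 10.0              # hypothetical small-device battery capacity
battery_j = BATTERY_WH * 3600.0  # watt-hours -> joules

inferences = battery_j / ENERGY_PER_INPUT_J
print(f"~{inferences:.2e} inferences on a {BATTERY_WH:.0f} Wh charge")
```

Even granting that a real system spends energy on much more than the matrix math, the headroom suggested by this arithmetic is what motivates the mobile and embedded use cases mentioned above.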
IBM is not the only company working on AIMC chips. Other companies, such as Intel and Qualcomm, are also developing similar technologies.
The sources for this piece include an article in The Register.