Intel recently unveiled some of its latest developments and breakthroughs in deep learning technologies. The chipmaker took the stage at Hot Chips 31 and revealed new details of its Spring Crest deep learning accelerator, the Nervana Neural Network Processor for Training (NNP-T).

The NNP-T packs a lot of firepower, and Intel is capitalizing on it to capture as much of the growing deep learning market as possible. The chip carries 24 processing cores. Intel said it would not use the more popular and ubiquitous DDR memory type; instead, it opted for the more efficient high-bandwidth memory, HBM2, pairing the NNP-T with 32GB of it.
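To see why an accelerator designer would reach for HBM2 over DDR, a back-of-envelope bandwidth comparison helps. The sketch below uses generic JEDEC-class figures (a 64-bit DDR4-3200 channel versus a single 1024-bit HBM2 stack at 2.0 GT/s), not Intel-published NNP-T numbers:

```python
# Rough peak-bandwidth comparison: one DDR4-3200 channel vs. one HBM2 stack.
# Figures are generic JEDEC-class values, not NNP-T specifications.

def peak_bandwidth_gb_s(bus_width_bits: int, transfer_rate_gt_s: float) -> float:
    """Peak bandwidth = bus width in bytes * transfers per second (GT/s)."""
    return (bus_width_bits / 8) * transfer_rate_gt_s

ddr4_channel = peak_bandwidth_gb_s(bus_width_bits=64, transfer_rate_gt_s=3.2)
hbm2_stack = peak_bandwidth_gb_s(bus_width_bits=1024, transfer_rate_gt_s=2.0)

print(f"DDR4-3200 channel: {ddr4_channel:.1f} GB/s")  # ~25.6 GB/s
print(f"HBM2 stack:        {hbm2_stack:.1f} GB/s")    # ~256 GB/s
```

The wide 1024-bit interface is what lets HBM2 deliver roughly an order of magnitude more bandwidth per stack than a DDR4 channel, which is exactly what memory-bound training workloads need.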

Intel claims that it was able to cram 27 billion transistors into a die that measures just 688mm². The company also confirmed that it turned to rival chipmaker TSMC's manufacturing technology to put together this powerful piece of hardware.

Over the past few years, machine learning has grown exponentially and is now a staple feature of most data centers. Since graphics processing units are usually the first choice when building these massive compute powerhouses, the rise of machine learning has greatly driven up demand for GPUs.

Almost a decade ago, supercomputers still relied on traditional central processing units as their main computational hardware. However, as GPUs grow more powerful by the day, almost every supercomputer now harnesses their power to provide the compute speed that machine learning algorithms require.

In response to this, Intel said it has developed a new approach to handling such massive compute requirements. Training workloads for machine learning use complicated neural networks to run applications such as speech translation, object recognition, and voice synthesis. While the Xeon family of processors remains the go-to option for some data centers, Intel said it is developing new compute solutions, such as dedicated accelerators like the NNP-T, to deliver that kind of massive parallel compute power.
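What a "training workload" actually does can be sketched in a few lines: a model's weights are repeatedly nudged to reduce error on example data. The toy logistic-regression loop below is illustrative only; real speech or vision models run billions of such updates across many accelerators, and the data and hyperparameters here are made up for the example:

```python
# Toy training loop: logistic regression by gradient descent on
# synthetic, linearly separable data. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # toy input features
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy labels

w = np.zeros(2)   # model weights
b = 0.0           # bias
lr = 0.1          # learning rate

for step in range(500):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))           # sigmoid prediction
    grad_w = X.T @ (p - y) / len(y)        # gradient of the log loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                       # gradient-descent update
    b -= lr * grad_b

accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Every step is dominated by matrix arithmetic, which is why wide, memory-hungry parallel hardware, whether GPUs or purpose-built chips like the NNP-T, outpaces general-purpose CPUs on these workloads.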

This makes graphics cards some of the most sought-after pieces of hardware in the machine learning industry. Notable graphics card manufacturer Nvidia claims that GPUs are poised to be the first option for companies planning to build their own data centers. Despite this, Intel believes that hardware requirements will ultimately depend on the kind of machine learning platform and its application.