JP Data LLC
101 Jefferson Dr, 1st Floor
Menlo Park CA 94025

Phone
(408) 623 9165

Email
info@jpdata.co
sales@jpdata.co

(More and More) Deep Learning Chipset Companies Are Coming Out of Stealth Mode

Artificial intelligence (AI) applications are hot and everyone is trying to capitalize on the buzz. It is not surprising that everyone wants better, cheaper hardware that gives them the best performance for their AI application. Ultimately, this boils down to the chipset that runs the underlying algorithms, which has sparked an intense race to win the hardware battle.

Note: This was published via Tractica in 2017

Deep learning is, by far, the most popular neural network technique used in AI applications, so chip companies have naturally chosen to focus on it. Deep learning chipsets came into the limelight with Intel’s acquisition of Nervana Systems for $400 million in August 2016. Since then, many startups have come out of stealth mode, each claiming to have solved the problem of deep learning acceleration. They are all designing new kinds of architectures to accelerate deep learning algorithms, and all claim to deliver significantly higher performance than today’s graphics processing units (GPUs) at much lower power consumption. These companies include:

  • Wave Computing: The Campbell, California-based company announced details of its approach in October 2016.
  • Thinci: This company has not released many details about its underlying architecture, but it is developing a low-power chipset targeting deep learning video applications.
  • Graphcore: The company announced $30 million in funding in October 2016 and is developing an intelligence processing unit (IPU) to accelerate deep learning algorithms.
  • Isocline: The Austin, Texas-based company announced funding in December 2016. The company is developing an AI chip that will work on edge devices.
  • Cerebras: The company announced $25 million in funding in December 2016 but has revealed very little about its approach; the only public description of the company is on LinkedIn.
  • DeepScale: The Mountain View, California-based company has a strong management team and is focusing on automotive AI chipsets.
  • Tenstorrent: The Ontario, Canada-based company is developing a deep learning processor and has not released any details.

Many Needs Opening the Door for Many Solutions

Given that top companies like Intel, NVIDIA, and AMD have already announced their plans, one might think the market is becoming overcrowded. However, Tractica believes the needs of the deep learning chipset market are so diverse that many solutions will be required to accelerate AI applications. For example, training and inference have different needs: training demands far more computational performance than inference, so chipsets must be optimized for one or the other. This means multiple companies, each catering to training or inference, can coexist.
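To see why training and inference place such different demands on hardware, a back-of-envelope calculation helps. The sketch below (illustrative numbers only; the layer sizes, epoch counts, and the rough "backward pass costs about 2x the forward pass" rule are my assumptions, not figures from the article) compares the compute for one inference pass of a dense layer against a full training run:

```python
# Back-of-envelope comparison of training vs. inference compute for a
# single dense (fully connected) layer. All figures are illustrative.

def dense_layer_flops(batch, n_in, n_out):
    """Forward-pass FLOPs for a dense layer: one multiply and one add
    per weight per example."""
    return 2 * batch * n_in * n_out

batch, n_in, n_out = 32, 4096, 4096
forward = dense_layer_flops(batch, n_in, n_out)

# Inference: a single forward pass per batch.
inference_flops = forward

# Training: forward pass plus a backward pass costing roughly 2x the
# forward pass (gradients w.r.t. activations and weights), repeated
# over many batches and epochs.
epochs, batches_per_epoch = 50, 10_000
training_flops = 3 * forward * epochs * batches_per_epoch

print(f"inference per batch: {inference_flops:.2e} FLOPs")
print(f"full training run:   {training_flops:.2e} FLOPs")
print(f"ratio: {training_flops // inference_flops:,}x")
```

Even with these rough assumptions, a complete training run costs orders of magnitude more compute than serving a single batch, which is why a chipset tuned for training throughput looks very different from one tuned for low-latency, low-power inference.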

Chipsets can also cater to a particular application segment, such as enterprise or embedded. Embedded applications need low power, whereas the enterprise market needs high performance. There could also be two further segments: ultra-low power for Internet of Things (IoT) applications and mid-range power for automotive applications. Different chip companies can choose to pursue different markets. For instance, Graphcore and Wave Computing are going after the enterprise training market, whereas Thinci and Isocline are going after the embedded inference market.

In addition, chip architectures optimized for deep learning algorithms can be built on central processing units (CPUs), digital signal processors (DSPs), GPUs, or field-programmable gate arrays (FPGAs). So, depending on the approach each company takes, one or more could emerge as prominent while others play niche roles.

Innovative Approaches Are Encouraged

Most of these architectures are programmable and will stay that way in the near term, given the evolving nature of deep learning algorithms. The ultimate evolution would be toward dedicated chipsets that tackle a specific problem, such as face recognition: such a chipset could implement a highly optimized neural network that delivers the highest accuracy at the highest performance, or at the lowest power, compared with general-purpose chipsets. No such companies are on the horizon just yet, but I am sure we will get there in time.

The hardware revolution for AI is just beginning. As the technology matures and application requirements stabilize, we will see new chipset architectures emerge to accelerate the innovation.