At the 2022 AI Hardware and Edge Summit, the question I was asked most often was, ‘What is the future of all these AI chip companies?’ It has been almost a year since then, and the 200+ AI chip start-ups dreaming of an IPO or an acquisition are still waiting. With so much VC investment pouring in, one would have expected more companies to be acquired by now. Clearly, that is not the case. So why isn’t there more M&A happening in the AI chip world?
When the AI chip industry took off in 2016, everybody was very optimistic. There was a wave of acquisitions by Intel: Movidius in 2016, followed by Mobileye in 2017 and Habana in 2019. Given the demand for AI chips, one would then expect other large semiconductor companies to have acquired more start-ups. Instead, we have seen AI chip start-ups shutting down, changing direction, or laying off people. Wave Computing, for example, went bankrupt, Mythic is trying to come back as a different company, and Graphcore has laid off several engineers.
There are many factors contributing to the lack of M&A. Despite the infusion of capital, start-ups have floundered in getting their products to market. Nervana’s chips, for instance, were supposed to be available in 2017 and Graphcore’s in 2018. The argument for AI chips at the time was that they would offer far more compute than Nvidia’s P100. As it turns out, Nervana never released its chip, and Graphcore is still trying to get its chip to production quality.
The software stack needed to run AI applications also turned out to be a far more difficult problem than AI chip start-ups had estimated. Mapping a given neural network onto the available processing elements in an efficient, optimized manner was not everybody’s cup of tea, and the rapidly evolving architecture of NNs did not help either. As a result, start-ups did not fare well in performance benchmarks. Some chose not to publish numbers because they looked worse than Nvidia’s, while others struggled even to run the benchmark NNs. This led customers and potential buyers to question the credibility of these chips as NN accelerators.
Hyperscalers, such as Google and Facebook, decided to build their own chips in the meantime. Knowing exactly what their workloads look like, they brought their software and hardware teams together to build the best possible solution. In doing so, they achieved better performance and hit their desired PPA (power, performance, area) targets.
Then there was the 1,000-pound gorilla, Nvidia, which went all in on AI. Over the same period, Nvidia went from 20 TOPS (FP16) on the P100 to 4 PetaOPS (FP16) on the H100. Nvidia poured a cool $3 billion into R&D for the V100, which began its successful line of AI GPUs. Nvidia also built a stack to support AI applications at every level, from low-level libraries all the way up to the application. This made the GPU the only production-quality architecture for both training and inference, and drove rapid business growth.
Many AI chip companies have their products ready today, but their acceleration capabilities are limited, which limits their customer reach. Their business does not scale to every AI vertical, given the diverse NN requirements and the software investment each vertical demands. Most of these companies have not been able to produce impressive benchmarks either. Put all this together, and AI chip start-ups do not make very attractive acquisition targets.
On the buyer side, there are two big groups of potential buyers: hyperscalers and large chip companies. When hyperscalers built their own chips, they essentially shut the door on any large acquisition. It made more sense for them to invest hundreds of millions of dollars in building their own chips than to pay billions of dollars for acquisitions.
When large semiconductor companies realized that the AI acceleration problem could largely be addressed in software, they enhanced their hardware to support AI workloads and invested in software. Intel, for example, came up with OpenVINO and added matrix instructions to its CPU SIMD engine. This removed some of the large companies from the list of potential buyers. Even mid-size companies such as Ambarella moved rapidly, leveraging their existing customer relationships and adding an AI engine. In contrast, AI chip start-ups lacked that market reach and focused on only part of the solution.
There are still some chip companies who could potentially buy, but they may take a ‘wait and see’ approach: waiting for the solutions to mature, for the economic downturn to pass, perhaps even for some start-ups to fail. The most likely scenario is that an AI chip start-up gets acquired by its largest customer or OEM. Every start-up has by now signed one or more customers for whom its solution works very well, and such a customer is likely to acquire the chip company because it gives them a differentiator and a competitive advantage. Of the remaining start-ups, some will disappear, some will be ‘absorbed,’ and some will struggle along.
That said, the demand for AI chips continues to be strong. Every prominent business worldwide, whether enterprise- or edge-focused, plans to use AI for one use case or another. Today, large semiconductor companies have monetized the AI boom successfully, but unfortunately the list does not include start-ups. The market is still wide open for innovative AI chips, and new AI chip companies will continue to get funded. The out-of-the-box thinkers will eventually succeed in this crowded market.