JP Data LLC
101 Jefferson Dr, 1st Floor
Menlo Park, CA 94025
Phone
(408) 623 9165
Email
info at jpdata dot co
sales at jpdata dot co
Nvidia is moving faster than the competition in AI chipset industry
Since 2015, more than seventy companies have entered the AI chipset market, with more than 100 chip starts announced. All of them are trying to tackle the AI algorithm acceleration problem using different techniques. These companies range from cloud companies to top semiconductor companies to start-ups. Intel, for instance, has poured billions of dollars into AI via its acquisitions of Altera and Mobileye. Many start-ups have raised capital exceeding $100 million. For instance, Wave Computing, a Campbell-based company, recently announced the closure of an $86 million Series E round, and Habana, another AI chipset company, recently closed a $75 million Series B round. The need for AI chipsets has injected new life into a semiconductor industry that had been dormant for the last decade, and many AI chipset companies have raised capital in excess of $50 million. Cambricon, an AI chip company from China, became the first unicorn chip company, with a valuation as high as $2.5 billion after its latest round of funding.
However, there should be no question that, as of 2018, Nvidia is the de facto leader in the AI chipset industry. Nvidia has almost single-handedly created a new market for AI servers and workstations and has already reached a $2 billion run rate. It is certainly being challenged by many start-ups and other semiconductor companies, but Nvidia is dealing with the potential competition extremely well, both in terms of innovation and execution. In fact, over the past few years, Nvidia has actually moved much faster than the competition.
How fast? Here are some comparison points. Keynotes from Nvidia’s CEO have always been among the prominent takeaways at the GPU Technology Conference (GTC) that Nvidia organizes. If you look at the GTC archives, there was no mention of computer vision or AI in the 2013 keynote. A few minutes of the 2014 keynote were dedicated to computer vision, but by 2015 the conference was all about it. By 2016, Nvidia had introduced the DGX-1 with Pascal, which delivered 170 TFLOPS. By 2017, Nvidia had introduced the new DGX with the V100, offering 1 petaFLOPS of compute (Tensor FLOPS), and 2018 saw the introduction of the 2 petaFLOPS DGX-2.
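Taken at face value, the cited peak-throughput figures imply a steep generational curve. A quick back-of-the-envelope calculation, using only the numbers above (and treating them as directly comparable, which glosses over the Pascal-vs-Tensor FLOPS distinction), is sketched below:

```python
# Peak throughput figures cited above, in TFLOPS.
# Note: the 2016 figure is conventional FLOPS, the 2017/2018
# figures are Tensor FLOPS, so the first ratio is generous.
systems = [
    ("DGX-1 (Pascal, 2016)", 170),
    ("DGX-1 (V100, 2017)", 1000),   # 1 petaFLOPS
    ("DGX-2 (2018)", 2000),         # 2 petaFLOPS
]

# Year-over-year throughput multiple between consecutive systems.
for (prev_name, prev), (name, cur) in zip(systems, systems[1:]):
    print(f"{prev_name} -> {name}: {cur / prev:.1f}x")
```

On the cited numbers, that is roughly a 5.9x jump from 2016 to 2017 and a further 2x from 2017 to 2018, which is the pace the start-ups have to match.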
To sum it up, in the past two years Nvidia has successfully introduced two very complicated chips, and servers built on them are already in production. Volta, a GPU targeted at data center AI acceleration, for instance, measures 815 mm², a large die by any standard, and is already in production. Nvidia’s automotive Xavier SoC is one of the most complex SoCs ever designed: it combines ARM CPU cores, a GPU, a vision accelerator, and an image processing pipeline, and offers 30 TOPS of performance.
In comparison, pretty much all start-ups building comparable chipsets are still in the sampling stage, even those that started back in 2016. Graphcore, for instance, just announced availability of its pods; Wave Computing is sampling; and there is no official word from Cerebras or Groq on their production schedules. Even other top semiconductor companies are struggling: Intel’s Nervana has been delayed and will not be out until 2019. AI chipset start-ups with smaller dies targeting the embedded market are shipping, but no one with a large chipset that directly competes with Nvidia has been able to get to market yet.
Certainly, having lots of dollars in the bank helps, and Nvidia has not been shy about investing. Nvidia has reportedly invested $3 billion in the Volta platform and $2 billion in Xavier, and that has certainly helped. And Nvidia has not stopped there: it has created one of the industry’s most comprehensive software stacks for deep learning, extending from cloud containers to AI frameworks to AI libraries to low-level acceleration frameworks. All of this, together with the adaptation of a GPU architecture that was already in production, has helped Nvidia get to market fast and generate billions of dollars in revenue.
As for the competition, each company has its own take on AI algorithm acceleration and has designed an architecture from the ground up to tackle the problem. They are all going to market with their own bag of tricks, promising significant gains in performance while reducing power. The amount of money being poured into AI chipsets is good news for the industry overall. When these companies start shipping in 2019, the true competition will begin.