Intel is going ‘all-in’ to dominate AI chipset world
The AI revolution started in 2012, when the AlexNet neural network surpassed the accuracy of all previous classic computer vision techniques, and it has not looked back since. AI algorithms are compute-intensive by nature, and the need to accelerate them in hardware has long been recognized, with over 100 companies jumping in with their own chipsets.
Intel is currently locked in a battle with Nvidia to become the world's dominant AI chipset company. While Nvidia's focus is on discrete GPUs for AI acceleration, Intel is focusing on CPUs and heterogeneous computing. Nvidia is unifying its software development environment via CUDA, while Intel has embarked on software unification across different architectures via OpenVINO and oneAPI. All of Intel's new chipsets are expected to be supported by the OpenVINO toolkit at launch, making it easier for developers to switch environments.
Intel has had a flurry of announcements recently, suggesting that it is taking the battle to dominate the AI chipset world to a new level. At its recent AI Summit, Intel announced many products that have been 'several years in the making' and followed up with additional announcements soon afterwards at Supercomputing 2019. There are also rumors regarding Intel's potential acquisition of AI start-up Habana Labs. The announcements are as follows.
Focusing on the inference market, Intel announced that the next-generation Xeon (codenamed Cooper Lake) will support bfloat16. Bfloat16 is a popular format in the AI world that offers a larger dynamic range than the half-precision (FP16) floating-point format used previously, while keeping the same 16-bit width. Intel also announced Deep Learning Boost, a set of microarchitectural enhancements that accelerate convolutions by combining three instructions into one for the multiply-accumulate work at the heart of convolution, thus delivering higher performance.
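For readers unfamiliar with the format, bfloat16 keeps float32's 8-bit exponent (hence the larger dynamic range) and shortens the mantissa to 7 bits. The minimal NumPy sketch below approximates the conversion by simply dropping the low 16 bits of a float32 encoding; real hardware typically rounds rather than truncates, so this is an illustration of the bit layout, not Intel's implementation.

```python
import numpy as np

def float32_to_bfloat16_bits(x: float) -> np.uint16:
    """Approximate bfloat16 by keeping the top 16 bits of the float32 encoding
    (1 sign bit, 8 exponent bits, 7 mantissa bits)."""
    bits = np.frombuffer(np.float32(x).tobytes(), dtype=np.uint32)[0]
    return np.uint16(bits >> 16)

def bfloat16_bits_to_float32(b: np.uint16) -> np.float32:
    """Re-expand the 16 stored bits back into a float32 value."""
    bits = np.uint32(b) << np.uint32(16)
    return np.frombuffer(np.uint32(bits).tobytes(), dtype=np.float32)[0]

x = np.float32(3.1415926)
b = float32_to_bfloat16_bits(x)
print(bfloat16_bits_to_float32(b))  # ~3.140625: less precision, same exponent range as float32
```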
Nervana chipsets, long awaited by the market, are finally arriving, and Intel announced early MLPerf benchmark results for its NNP-I inference chipset. The benchmarks were generated using early hardware and an alpha software stack, and the company expressed optimism that the numbers will improve further as the product matures.
Figure: Intel's AI portfolio (Source: Intel)
Intel also announced the availability of Nervana NNP-T, its training chipset, along with an early customer engagement with Baidu. It demonstrated a 32-card, cross-chassis system in a 1U form factor. Intel claimed that the architecture can scale to on the order of a thousand of these cards over PCI Express connectivity, with a theoretical limit of 1,024 cards, and that it has achieved up to 95% scaling efficiency with ResNet-50 and BERT as measured on 32 cards.
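As a rough illustration of what a scaling-efficiency figure like that means, the sketch below computes it as measured speedup divided by ideal linear speedup. The throughput numbers are hypothetical placeholders chosen to land at 95%, not Intel's published data.

```python
# Illustrative only: how a multi-card scaling-efficiency figure is typically derived.

def scaling_efficiency(throughput_single: float, throughput_cluster: float, num_cards: int) -> float:
    """Measured speedup divided by ideal (linear) speedup."""
    speedup = throughput_cluster / throughput_single
    return speedup / num_cards

throughput_1 = 1_000.0    # samples/sec on one card (placeholder)
throughput_32 = 30_400.0  # samples/sec on 32 cards (placeholder)

print(f"Scaling efficiency: {scaling_efficiency(throughput_1, throughput_32, 32):.0%}")  # -> 95%
```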
Intel's next-generation low-power AI chipset from Movidius, code-named Keem Bay, is expected to be released in the first half of 2020. Although no concrete performance and power numbers were provided, Intel announced that it will offer more than 4x the raw inference throughput of Nvidia's TX2, putting it at 4 TOPS. The chip is expected to consume one-third less power for the same performance, and it is a low-power solution with a size of 72 sq mm.
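Taken at face value, those two statements imply different performance-per-watt multiples depending on which axis is held fixed; the back-of-the-envelope arithmetic below just makes that explicit, reading the throughput claim as being at comparable power. These are derived ratios from the quoted figures, not measured data.

```python
# Back-of-the-envelope reading of the quoted Keem Bay vs. TX2 claims (illustrative only).

throughput_gain_same_power = 4.0      # ">4x the raw inference throughput", assumed at comparable power
power_fraction_same_perf = 2.0 / 3.0  # "one-third less power for the same performance"

print(f"Equal power, 4x throughput -> {throughput_gain_same_power:.1f}x perf/W")
print(f"Equal perf, 2/3 the power  -> {1 / power_fraction_same_perf:.1f}x perf/W")
```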
Intel also announced the Dev Cloud for the Edge. The cloud comes pre-loaded with software and with hardware consisting of Intel CPUs, GPUs, FPGAs and ASICs (Movidius). Using the cloud, developers can try accelerating their algorithms on different devices and then decide which is the best choice for them. It also offers tutorials and code samples to help developers get started.
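As a sketch of what that device selection looks like in practice, here is a minimal snippet using the pre-2022 OpenVINO Inference Engine Python API (the generation relevant to these announcements), which compiles the same network for different Intel targets just by changing the device name. The model paths and device strings are placeholders and assume the corresponding hardware and plugins are present on the node.

```python
from openvino.inference_engine import IECore

# Hypothetical IR model files produced by the OpenVINO Model Optimizer.
MODEL_XML = "model.xml"
MODEL_BIN = "model.bin"

ie = IECore()
net = ie.read_network(model=MODEL_XML, weights=MODEL_BIN)

# The same network can target "CPU", "GPU" (integrated graphics) or "MYRIAD"
# (Movidius VPU); availability depends on the hardware in the chosen node.
for device in ("CPU", "GPU", "MYRIAD"):
    try:
        exec_net = ie.load_network(network=net, device_name=device)
        print(f"Loaded network on {device}")
    except Exception as err:  # the device or its plugin may be absent on a given machine
        print(f"{device} not available: {err}")
```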
At Supercomputing 2019, Intel announced its long-awaited Ponte Vecchio GPU architecture. These new chipsets will allow Intel to build its own discrete GPU accelerator cards for the enterprise as well as consumer markets, putting it in head-to-head competition with Nvidia.
Finally, there are rumors that Intel is in discussions with Habana Labs to acquire its product portfolio. Habana is the only start-up in the AI chipset world to have demonstrated products for both inference and training, and its products have been qualified by Facebook.
Intel has already acquired several companies, including Mobileye, Movidius and Nervana, and it also has a neuromorphic chipset called Loihi as a long-term play. With the rumored acquisition, Intel is signaling its willingness to go all-in in the battle for supremacy in the AI chipset market.
At the AI Summit, Intel announced that it generated $3.5 billion in AI chipset revenue in 2019, with the bulk of that revenue coming from AI inference workloads. That still leaves a large market to be addressed with new chipsets, and Intel is working to cover all the bases via oneAPI, which supports a multitude of software stacks. Only time will tell how this plays out.