JP Data LLC
101 Jefferson Dr, 1st Floor
Menlo Park CA 94025
Phone
(408) 623 9165
Email
info at jpdata dot co
sales at jpdata dot co
Since 2015, more than seventy companies have entered the AI chipset market, with more than 100 chip starts announced. All of them are trying to tackle the AI algorithm acceleration problem using different techniques. These companies range from cloud companies to top semiconductor companies to start-ups. Intel, for instance, has poured billions of dollars into AI through its acquisitions of Altera and Mobileye. Many start-ups have raised capital exceeding $100 million. For instance, Wave Computing, a Campbell-based company, recently announced…
Note: This was published on the Tractica website in 2018. High mask costs made it hard for chipset start-ups to raise capital during the early 2000s. Today mask costs can run as high as $25 million, and by some estimates the design cost for a chip at the 12 nm node can reach $174 million. This made it hard for investors to justify the ROI, as most of them demanded a 10X return. Only a few markets offered high volume for chipsets…
As the AI chipset market gets crowded, many AI companies have started creating solutions that cater to niche markets. The needs for chipset power, performance, software, etc. vary greatly depending on the nature of the application. For instance, the IoT edge market needs ultra-low power (in milliwatts), mobile phones can work well with power consumption of up to 1 W, drones can consume a bit more, automotive can go from 10-30 W, and so on. Today the two most prominent architectures are CPU…
AI and deep learning have generated a lot of excitement over the past few years. Many semiconductor start-ups have since emerged to build chipsets optimized for AI. They tackle the compute, communication, and memory-related problems specific to AI algorithm acceleration and build highly optimized architectures that promise low power and high performance. Nervana, which got started around 2014, was perhaps the first company to build a chipset specifically for AI. Nervana wanted to sell cloud services based on its chipsets and…
Facial recognition is one of the popular applications of computer vision technology; it deals with recognizing a person's identity. The technology has been in use for several years with varying degrees of accuracy. Classic facial recognition techniques, such as SIFT (Scale Invariant Feature Transform) and SURF (Speeded Up Robust Features), relied on extracting unique features of the face. These techniques compared feature values from the incoming picture with those of a reference picture to generate a match. While this worked fairly…
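For illustration, here is a minimal sketch of the feature-extraction-and-matching approach described above, written in Python with OpenCV (cv2.SIFT_create is available in opencv-python 4.4 and later). The file names "reference.jpg" and "incoming.jpg" and the acceptance threshold are illustrative placeholders, not values from the original post.

import cv2

# Load the stored reference face and the incoming picture in grayscale.
ref = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)
probe = cv2.imread("incoming.jpg", cv2.IMREAD_GRAYSCALE)

# Extract SIFT keypoints and descriptors from both images.
sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_probe, des_probe = sift.detectAndCompute(probe, None)

# Compare descriptors and keep only matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des_ref, des_probe, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Hypothetical decision rule: declare a match if enough distinctive
# features agree between the incoming picture and the reference.
print("match" if len(good) > 25 else "no match")

The key idea is that the comparison happens on local feature descriptors rather than raw pixels, which is what gave these classic techniques some robustness to scale and lighting changes.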
Note: This blog was published via Tractica in 2017. Microprocessors form the basis of any computer system. Whether the device is used for IoT, consumer, or cloud applications, a microprocessor chip acts as its brain. In fact, the semiconductor industry owes much of its success to the microprocessor, starting when Intel introduced the 4004 back in the 1970s. That later evolved into the 808X series, leading to widespread adoption of the chips. Even today, microprocessors get more media coverage than other types of chipsets…
Artificial intelligence (AI) applications are hot, and everyone is trying to capitalize on the buzz. It is not surprising that everyone wants better, cheaper hardware that gives them the best performance for their AI application. Ultimately, this boils down to the chipset that runs the underlying algorithms, which has sparked an intense race to win the hardware battle. Note: This was published via Tractica in 2017. Deep learning technology is, by far, the most popular type of neural network used…
Note: This blog was published via Tractica in 2017. The popularity of artificial intelligence (AI) technology has led to application developers demanding more compute power. AI applications come in different shapes and sizes, so the need for compute performance varies quite a bit. Training and inference also have different performance requirements: the training phase requires higher compute capacity, while lower compute performance suffices for inference. Compute performance needs can range from a personal computer (PC) with a graphics card,…