Computer Vision Will Drive the Next Wave of Robot Applications

Note: This blog was published on the Tractica website on June 15, 2015.

Robotics has played a major part in industrial automation, even though the systems are not always referred to as robots. For example, a crane used in building construction is essentially a robot with an arm and limited control functionality, but it is always called a crane. These robotic systems are great at performing a specific task, and they have been used widely in industrial automation for functions such as lifting, sorting, and inspecting. While these robots have excelled at specialized tasks, it has generally not been possible to use them for multiple tasks, and research into a robot that can be trained to perform tasks the way humans do has been going on for quite some time.

Thanks to recent technology developments, we are now getting close to the next era of robots, in which they will perform multiple tasks, and computer vision is playing a big role in that. Computer vision allows these robots to see what is around them and make decisions based on what they perceive. New technologies such as deep learning allow robots to interpret these images and learn new things on the fly. Some of them are even shaped like humans, for those who fancy a humanoid appearance.
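To make the image interpretation step concrete, here is a minimal sketch of how a robot's software might use an off-the-shelf deep learning model to label a single camera frame. The specific model, label set, and file name (torchvision's pretrained ResNet-18, ImageNet categories, frame.jpg) are illustrative assumptions, not part of any particular robot's stack.

import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained image classifier once at startup (illustrative choice).
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

# Standard ImageNet preprocessing: resize, crop, and normalize the frame.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def interpret_frame(path):
    """Return the most likely object label for one camera frame."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)      # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    class_id = int(logits.argmax(dim=1))
    return weights.meta["categories"][class_id]

print(interpret_frame("frame.jpg"))             # e.g. "forklift"

A robot would feed frames from its camera into a step like this and hand the resulting labels to its planning logic.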

Computer vision, often referred to as machine vision in the context of robots, is enabled via cameras mounted on the robots. The robots have a backend embedded system that analyzes the images and determines the next course of action. This integration is being facilitated by falling hardware and software prices along with advances in computer vision technology.
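At a high level, that pipeline is just a loop: capture a frame, analyze it, act on the result. Here is a minimal sketch of such a sense-analyze-act loop, assuming OpenCV for the camera interface; the analyze_frame() placeholder and the action messages are illustrative, since a real robot would run object detection or depth estimation at that step.

import cv2

def analyze_frame(frame):
    """Toy analysis step: report whether something appears directly ahead.

    A real system would run object detection or depth estimation here;
    this placeholder only checks how dark the center of the image is.
    """
    h, w = frame.shape[:2]
    center = frame[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
    return "obstacle" if center.mean() < 60 else "clear"

camera = cv2.VideoCapture(0)                # index 0: the robot's mounted camera
try:
    while True:
        ok, frame = camera.read()           # 1. capture an image
        if not ok:
            break
        result = analyze_frame(frame)       # 2. embedded system analyzes it
        if result == "obstacle":            # 3. result drives the next action
            print("stop and re-plan the path")
        else:
            print("continue forward")
finally:
    camera.release()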

Take Rethink Robotics, for example. The company is a startup that has raised over $113 million in funding. Rethink’s value proposition is a robot that can be programmed easily for a variety of tasks; the company offers the robot along with APIs, enabling it to be used for multiple applications. Amazon’s acquisition of Kiva Systems generated significant buzz in the industry, due in part to the $775 million purchase price. Kiva has focused on robots that can guide themselves through a warehouse using an onboard camera, avoid obstacles, reach a destination, pick up merchandise, and drop it off at a given location. Yume, yet another robot, has arms and a torso similar to a human’s, and a camera integrated into its hand that it uses to locate objects.

Of course, these systems are not perfect yet, and we are really just at the beginning of the growth curve for robot shipments. Deep learning algorithms need more GPU power at a lower price than is available today. Training these systems to deliver production-quality results is another key challenge, and more R&D is needed before they can perform as well as humans. In time, these challenges will be overcome, and the robotics revolution will enter a new phase of innovation and productivity.