Robotics 101: Sensors that allow robots to see, hear, touch, and move

Vision, audio, movement, and touch sensors allow modern robots to perform increasingly sophisticated tasks.

Have you ever seen a humanoid robot doing parkour?

Two arms, two legs, a square head and a thick rectangular torso hopping hibbity dibbity over boxes and wooden pallets. It even did a backflip.

Yeah, most robots can’t do that.

Most robots don’t need to do that.

The cool parkour robot—Atlas from Boston Dynamics—is an experimental research platform that the company uses to test and evolve robot bodily dexterity. It learns its sweet moves primarily through simulation and mimicry. And while Atlas is interesting and makes for a good viral video, humanoid robots are the exception in the world of robotics. For most practical applications, they are not especially useful.

Robots are typically used to perform tasks that are considered “dull, dirty, or dangerous.” The most common type of robot you will find in the world is an industrial robot, like the articulated arms manufacturing companies use on assembly lines. Robotic arms have a stable, immovable base, typically between two and six moving joints, and an appendage at the end—called a manipulator—that functions like a hand. Other common robots are professional service robots, like the logistics robots that look like large hockey pucks that Amazon uses to move goods through a warehouse, or domestic robots that vacuum your house.

None of these robots are doing parkour.

Robotic automotive assembly line.

Robots are often defined as physical machines that can “sense, think, and act.” Sensors measure external conditions and deliver that data to a computer processor or controller within the robot, which interprets the data and then decides how to act. For example, a vacuum cleaning robot will use a vision sensor to image a room, identify a bit of dirt, then move to vacuum it up.
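
To make the “sense, think, act” cycle concrete, here is a minimal sketch of a control loop for a vacuum-cleaning robot. The camera, dirt_detector, and drive objects are hypothetical placeholders, not any particular product’s API.

    import time

    def sense(camera):
        """Grab the latest frame from the robot's vision sensor."""
        return camera.read_frame()

    def think(frame, dirt_detector):
        """Run the frame through a detector and decide what to do next."""
        dirt_spots = dirt_detector.find(frame)
        if dirt_spots:
            return ("drive_to", dirt_spots[0])   # head for the nearest spot of dirt
        return ("wander", None)                  # nothing found: keep exploring

    def act(decision, drive):
        """Translate the decision into motor commands."""
        command, target = decision
        if command == "drive_to":
            drive.move_toward(target)
        else:
            drive.random_walk()

    def control_loop(camera, dirt_detector, drive, hz=10):
        """Repeat sense -> think -> act at a fixed rate."""
        while True:
            act(think(sense(camera), dirt_detector), drive)
            time.sleep(1.0 / hz)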

Robots, in one form or another, can perform just about any task that a human can dream up. As such, robots need a wide array of sensors to sense the world and move through it.

Vision sensors

Robots usually need to see what they’re doing. But robots don’t typically see like a human does. The human eye is one of the most sophisticated sensors ever created. Robots do not need nearly as much clarity and focus; they can get by with simple vision sensors that, when coupled with machine learning software, allow the robot to do just about anything that we want.

For instance, iRobot’s Roomba vacuum robots do not use high-tech sensors, but rather inexpensive cameras that could’ve been found in smartphones more than 10 years ago. For the most part, robots are not trying to make out images in high definition (which takes more internal computing power to process and can stress battery life), but rather are looking for general shapes, outlines, or colors that the computer vision algorithm can identify.
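
As a rough illustration of how little visual fidelity such a robot needs, here is a hedged sketch that downsamples a camera frame and looks for a dark, dirt-like blob using OpenCV. The resolution and threshold values are assumptions chosen for illustration, not any vendor’s actual pipeline.

    import cv2

    def find_dirt(frame_bgr, min_area=50):
        """Return the bounding box (x, y, w, h) of the largest dark blob, or None."""
        small = cv2.resize(frame_bgr, (160, 120))                       # low resolution is enough
        gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)   # keep the dark pixels
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        blobs = [c for c in contours if cv2.contourArea(c) > min_area]  # ignore tiny specks
        if not blobs:
            return None
        return cv2.boundingRect(max(blobs, key=cv2.contourArea))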

Common types of vision sensors in robotics include 2D and 3D cameras, lidar, radar, infrared, and phototransistors. If the vision system works in the visible range, then lighting becomes necessary, whereas other vision sensors allow robots to work in the dark. Common visible light tasks include pick & place or assembly tasks, object detection, navigation, and some types of inspection.

One interesting example is the “delta” style robot that you might find over the conveyor belt of a consumer packaged goods factory. The robot hangs above the belt and uses simple vision sensors, such as a camera and/or a barcode scanner, to identify items coming down the line. It then uses its manipulators, like little suction cups, to pick up items and pack them into boxes to be shipped.
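
The decision logic behind such a pick-and-pack cell can be surprisingly simple. Below is a hypothetical sketch: scan a barcode, look up the destination box, and command the arm. The scanner and arm interfaces, and the barcode-to-box mapping, are invented for illustration.

    # Illustrative barcode-to-destination mapping; a real plan comes from the packing system.
    PACKING_PLAN = {
        "012345678905": "box_A",
        "036000291452": "box_B",
    }

    def handle_item(scanner, arm):
        """Scan the next item on the belt and pick it into the right box."""
        barcode = scanner.read()                 # e.g. from a camera-based barcode reader
        destination = PACKING_PLAN.get(barcode)
        if destination is None:
            return                               # unknown item: let it pass down the line
        item_pose = scanner.last_item_pose()     # where the camera saw the item
        arm.pick(item_pose)                      # suction-cup manipulator grabs it
        arm.place(destination)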

Some common use cases in which a robot will need vision sensors include:

  • Quality assurance: using sensors to quickly assess if a product has a defect.
  • Object detection: see an object and determine what it is.
  • Material handling: identify material and move it from place to place.
  • Navigation: avoid obstacles while moving through a room to get from one specific point to another (a minimal sketch follows this list).
  • Mapping: see an area and create a computer-generated model of it. 
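
As a concrete example of the navigation use case above, here is a minimal sketch that steers away from obstacles reported by a forward-facing range sensor while heading toward a goal. The sensor and drive interfaces, and the numbers, are hypothetical.

    SAFE_DISTANCE_M = 0.5    # illustrative clearance threshold

    def navigation_step(range_sensor, drive, goal_bearing_deg):
        """One iteration of a simple avoid-and-advance behavior."""
        distance = range_sensor.read_m()          # distance straight ahead, in meters
        if distance < SAFE_DISTANCE_M:
            drive.turn(45)                        # obstacle too close: turn away
        else:
            drive.steer_toward(goal_bearing_deg)  # clear path: aim at the goal
            drive.forward(0.2)                    # advance a short step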

Audio sensors

Why does a robot need to hear? The realm of robots is rapidly changing. For many years, most robots were the articulated arms found on factory floors, or other professional service robots working in factories or warehouses. Robots in these environments have little need to hear.

As robots interact more with humans, audio sensors have become more important. Audio sensors—microphones, mostly—are often used for speech recognition so that a person can talk to a robot, and it will be able to “understand” and then act. Audio sensors can also be used for navigation (sonar or echolocation, for example) or to detect pressure differences within an environment.
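
As a small illustration of speech input, the sketch below listens on a microphone and converts the speech to text, assuming the third-party SpeechRecognition package and its Google web speech backend; the “stop” command handling is a made-up placeholder.

    import speech_recognition as sr

    def listen_for_command():
        """Capture one utterance from the microphone and return it as lowercase text."""
        recognizer = sr.Recognizer()
        with sr.Microphone() as source:
            recognizer.adjust_for_ambient_noise(source)      # calibrate to background noise
            audio = recognizer.listen(source)
        try:
            return recognizer.recognize_google(audio).lower()  # cloud speech-to-text
        except sr.UnknownValueError:
            return None                                        # speech was unintelligible

    command = listen_for_command()
    if command and "stop" in command:
        print("Halting the robot.")    # stand-in for a real motor-stop command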

Common types of audio sensors and varieties of microphone include: acoustic pressure sensors, pressure microphones, high amplitude pressure microphones, and probe microphones.

An interesting hypothetical: are smart speakers with virtual assistants considered to be robots? Siri, Alexa, Cortana, and the Google Assistant do what a robot does according to our definition above. They sense with microphones, process the speech, and then respond (sense, think, and act). And yet, most people would not classify them as robots.

Movement sensors

Movement sensors often work in conjunction with a robot’s moving bits (actuators) to assist in the robot’s mobility. One of the most common movement sensors is the incremental encoder, which is often found within an industrial robotic arm. It measures the rotation of the joints on the arm so that it moves at the right angle and speed.
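
For a sense of the arithmetic involved, here is a hedged sketch that converts an incremental encoder’s tick count into a joint angle and angular velocity. The counts-per-revolution and sampling period are illustrative values, not figures from any particular arm.

    import math

    COUNTS_PER_REV = 4096      # encoder resolution (illustrative)
    SAMPLE_PERIOD_S = 0.01     # controller reads the encoder every 10 ms (illustrative)

    def joint_state(prev_count, new_count):
        """Return (angle_rad, angular_velocity_rad_s) from two successive tick counts."""
        angle = (new_count % COUNTS_PER_REV) / COUNTS_PER_REV * 2.0 * math.pi
        velocity = (new_count - prev_count) / COUNTS_PER_REV * 2.0 * math.pi / SAMPLE_PERIOD_S
        return angle, velocity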

Other movement sensors include accelerometers, gyroscopes, inertial sensors, and GPS sensors.

Touch sensors

You wouldn’t think that a robot needs to “feel” like a human would. But touch sensors allow robots to have more nuanced capabilities than the typical “see an object, move an object” tasks they are often used for.

Robots use touch sensors for a variety of tasks. For example, bump sensors are used for navigation to tell the robot that it bumped into an object and thus must change course. Force sensors allow the robot to know when pressure or mechanical stress is being applied. Temperature sensors tell the robot if something is hot or cold (and thus must be avoided).
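
A minimal sketch of how those touch-style readings might translate into behavior is shown below; the thresholds and the sensor, drive, and gripper interfaces are hypothetical.

    MAX_FORCE_N = 20.0     # back off if the gripper squeezes harder than this (illustrative)
    MAX_TEMP_C = 60.0      # treat anything hotter than this as off-limits (illustrative)

    def react_to_touch(bumper, force_sensor, temp_sensor, drive, gripper):
        """Turn raw touch-style readings into simple protective reactions."""
        if bumper.is_pressed():
            drive.reverse(0.1)          # bumped into something: back up and turn
            drive.turn(30)
        if force_sensor.read_newtons() > MAX_FORCE_N:
            gripper.loosen()            # avoid crushing whatever is being held
        if temp_sensor.read_celsius() > MAX_TEMP_C:
            drive.reverse(0.2)          # too hot: move away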

Sensor fusion: combining sensors to make more complete robots

The dramatic increase in computer processing power (along with cloud computing) over the decades has made robots much more useful than before. While every individual sensor has its own feedback loop within the robot’s controller, combining disparate sensor systems (“sensor fusion”) is giving us sophisticated robots that are better at performing a variety of tasks.

The most prominent example of sensor fusion right now is probably in the realm of autonomous vehicles. While the notion of a car driving itself without human input is not currently practical, we’re moving in that direction by combining powerful computers with sensor systems like lidar, radar, 2D and 3D cameras, accelerometers, gyroscopes and more. Whereas simpler robots may have one or two of those sensors to perform singular tasks, sensor fusion allows the vehicle to take all those data inputs and make split-second decisions.
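
One of the simplest forms of sensor fusion is a complementary filter that blends a gyroscope’s fast-but-drifting rate signal with an accelerometer’s noisy-but-stable tilt estimate. The sketch below is illustrative only; the 0.98 blend factor and the sign conventions are assumptions, not tuned values from any vehicle.

    import math

    def fuse_pitch(prev_pitch_deg, gyro_rate_dps, accel_x_g, accel_z_g, dt_s, alpha=0.98):
        """Blend gyroscope and accelerometer readings into one pitch estimate (degrees)."""
        gyro_pitch = prev_pitch_deg + gyro_rate_dps * dt_s             # fast but drifts over time
        accel_pitch = math.degrees(math.atan2(accel_x_g, accel_z_g))   # noisy but drift-free
        return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch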

Barring some dramatic breakthroughs in artificial intelligence, we will likely never be welcoming our new robot overlords. In fact, robotic systems often work better when paired with human capabilities. We have all the tools to build more powerful and capable robots that will help us as we tackle the problems of the present, and well into the future.