Safe Robots Rely On Sensors

March 20, 2013
Making a safe robot involves sensors that detect problems before they become critical. One reason robots can interact more closely with people is the improvement in sensors.

As more robots operate near people, they must be built so they don't cause injuries, and that requires all of the computational and sensor technology we can muster.

The sensors in use now are not much different from those of years ago, with the exception of the Primesense/Microsoft Kinect (see the table). What has changed is the size, cost, and accuracy of these sensors and their matching processor support. Mobile robots initially used a single ultrasonic or infrared sensor, often mounted on a servo to provide a wider sensing range. Today robots are sometimes ringed with these kinds of sensors, and image sensors are the norm for even small robots. Multiple sensors are also common for robots like the PR2 from Willow Garage (see the figure) because one type of sensor rarely covers the full range of capabilities.

Figure 1. The PR2 from Willow Garage includes a variety of stereo cameras in its movable head and hides a Hokuyo UTM-30LX laser scanner in the neck. Microsoft Kinects have been seen to sprout from the top of the head.

Robotics has benefited from other markets such as smart phones and tablets that have made motion and image sensors smaller and cheaper as volumes have grown tremendously. Lower costs and smaller, high-performance computing packages can change the way robots are designed. For example, robots often used movable heads to reorient their image sensors because moving those heads to cover wider areas was cheaper and easier than including additional sensors. Now, robots may have multiple cameras and multiple microprocessors.

Table 1. Robots can employ a range of sensors, often in concert with each other.

Sensor Technology       | Advantages                         | Disadvantages                       | Computation
Contact                 | Simple                             | Requires contact                    | Low
Proximity               | Simple                             | Low resolution, reflectivity issues | Low
Ultrasonic              | Low cost                           | Low resolution, reflectivity issues | Medium
Infrared range          | Low cost                           | Low resolution, reflectivity issues | Medium
Laser                   | High resolution                    | Limited scan area                   | High
LIDAR                   | High resolution                    | High cost                           | High
Single camera vision    | Large scan area                    | High computation costs              | High
Dual camera vision      | Large scan area, range information | High computation costs              | High
Projected/camera vision | Large scan area, range information | High computation costs              | High

Sensor Challenges

One reason robots tend to have more than one kind of sensor is that the performance characteristics of each are different. This is not much different from humans and animals, which have a range of senses from touch to hearing to vision. At this point robots tend to deal with touch and vision or variants thereof, with hearing not being of much use beyond speech recognition.

A combination of sensors is normally used to allow a robot to do its job. Sensors are also used in robots that interact with people to keep that interaction safe. This means the sensors need to operate reliably and the robot's response needs to be predictable and consistent. The latter is normally a computational issue and will not be addressed here. It is one area where behavior-based programming has become important.

One issue that may not be apparent is robot and sensor interaction. Many people will see a robot and be able to identify the sensors but not sense them in operation. For example, infrared sensors are normally active devices, projecting infrared light and sensing its reflection off an object. This works well in isolation, but not if two sensors are active at the same time. Such overlap can actually be useful if coordinated by a robot with multiple sensors, but not if multiple robots are operating in proximity to each other. Ultrasonic sensors have similar issues.
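
One common way to coordinate multiple active sensors on a single robot is to time-multiplex them so that only one emitter fires at a time. The sketch below is a minimal illustration; the sensor objects and their fire_and_read() method are hypothetical stand-ins for whatever driver interface a particular robot exposes.

```python
import time

SETTLE_TIME = 0.005  # seconds to let stray echoes and reflections fade

def poll_round_robin(sensors):
    """Fire each active sensor in turn so emitters never overlap.

    `sensors` is a list of hypothetical driver objects, each with a
    fire_and_read() method that triggers its emitter and returns a
    range reading. Because only one sensor is active at a time, the
    readings avoid crosstalk between neighboring infrared or
    ultrasonic units on the same robot.
    """
    readings = []
    for sensor in sensors:
        readings.append(sensor.fire_and_read())
        time.sleep(SETTLE_TIME)  # pause before the next emitter fires
    return readings
```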

The types of sensors used by most robots for keeping an eye on humans include force, range, and imaging.

Force Sensors

Touch or bump sensors are the simplest. They normally are a switch attached to a surface or lever that moves. They're popular because they're easy to incorporate and the interface is trivial.
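
Reading a bump switch can be as simple as polling a digital input; the only subtlety is debouncing the mechanical contact. A minimal sketch, assuming a hypothetical read_pin() callable that returns the raw digital level:

```python
import time

DEBOUNCE_SAMPLES = 3     # consecutive identical reads required
SAMPLE_INTERVAL = 0.002  # seconds between samples

def read_bump(read_pin):
    """Debounced read of a bump switch.

    `read_pin` is a hypothetical callable returning the raw digital
    level (True = pressed). Mechanical switches bounce for a few
    milliseconds, so several consecutive matching samples are required
    before the value is trusted.
    """
    last = read_pin()
    count = 1
    while count < DEBOUNCE_SAMPLES:
        time.sleep(SAMPLE_INTERVAL)
        current = read_pin()
        if current == last:
            count += 1
        else:
            last, count = current, 1
    return last
```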

Force sensors provide a more graduated response. They are more complicated and come in a range of form factors with different response characteristics. Analog versions are connected to an analog-to-digital converter (ADC), so the host needs to do more work to track the effects.

Touch sensors fall into this category, and there are a range of solutions that can address large sensor areas. A basic touch sensor can cover a large area, but the response only indicates whether or not something within that area has happened. More sophisticated touch sensors can localize the contact as well. In fact, the resolution for touch sensors on smartphones and tablets is under 1 mm.

Touch sensors have become popular for multitouch screens and even buttons and controls. Touch-sensor controller chips do most of the heavy lifting and often have built-in microcontrollers. Robots also can use them. Resistive and capacitive systems are the most popular. Capacitive systems can provide proximity information and can cover a large area as well.

Force feedback systems are also useful. They can often acquire data without any additional hardware when sensorless or sensor-based electric motor control is used. In this case, force feedback occurs when part of the robot is being moved and it meets an obstacle like a person. The amount of force needed to move that part of the robot goes up as contact is made.
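
With sensorless motor control, a rise in motor current is a proxy for a rise in load torque, so contact can be inferred by watching for current above the free-movement baseline. A minimal sketch, assuming a hypothetical read_motor_current() driver call and thresholds calibrated for the joint in question:

```python
import time

NOMINAL_CURRENT = 0.8  # amps drawn during free movement (calibration value)
CONTACT_MARGIN = 0.4   # extra amps indicating the joint has met resistance
SAMPLES = 5            # average several readings to ride out transients

def contact_detected(read_motor_current):
    """Infer contact from motor current while a joint is in motion.

    `read_motor_current` is a hypothetical callable returning the
    instantaneous motor current in amps. Load torque, and therefore
    current, rises when a moving part presses against an obstacle, so
    an averaged reading above the free-movement baseline plus a margin
    is treated as contact. Averaging filters out startup transients.
    """
    total = 0.0
    for _ in range(SAMPLES):
        total += read_motor_current()
        time.sleep(0.001)
    return total / SAMPLES > NOMINAL_CURRENT + CONTACT_MARGIN
```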

Force-based sensing is usually the last level of feedback for safety-related issues. Contact is usually required, so any change of movement needs to take inertia into account.

Range Sensors

Sensors that provide range information are useful in robotics because they allow the robot to detect objects before coming into contact with them. This information is used for planning and allows a robot to avoid contact. Short-range sensing can be used to prevent immediate collisions, while long-range sensing can be used for mapping an environment.

Most range-based sensing systems are active, employing a light or sound transmitter. Proximity sensors can use ambient light, but they are typically limited in range, accuracy, and precision. They are akin to bump sensors in providing a true/false interface.

Ultrasonic sensors consist of a transmitter and receiver. The delay between the sound emitted by the transmitter and its echo arriving at the receiver provides range information. Ultrasonic sound is uncommon in most environments, so noise is less of an issue. Ultrasonic sensors are directional unless multiple transmitters and receivers are employed. They are typically rated to provide range information within a sensing cone centered on the sensor. Problems can occur because of the acoustic reflectivity of some materials. Range for typical ultrasonic sensors is normally limited to a few meters.
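
The range calculation itself is simple: the echo travels to the target and back, so distance is half the round-trip time multiplied by the speed of sound. A minimal sketch (the same geometry applies to laser time-of-flight ranging, just with the speed of light):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def ultrasonic_range(echo_delay_s):
    """Convert an echo delay in seconds to a distance in meters.

    The pulse travels out to the target and back, so the one-way
    distance is half the round trip. The speed of sound varies with
    temperature, which limits accuracy if uncompensated.
    """
    return SPEED_OF_SOUND * echo_delay_s / 2.0

# Example: a 5.8-ms echo delay corresponds to roughly 1 m.
print(ultrasonic_range(0.0058))  # ~0.99 m
```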

Infrared sensors are like ultrasonic sensors and work in the same fashion with a transmitter/receiver pair. Interference from the outside world is greater, though, since infrared light is common. Most infrared sensors base the range on the intensity of the incoming light rather than the time taken for a round trip. This greatly simplifies the hardware but limits the accuracy. These sensors often do not work in sunlight or under certain types of artificial light. Filters can often mitigate ambient light issues.

Laser range finders are similar to infrared sensors, but they normally determine the travel time of reflected light between the transmitter and receiver. This requires faster, more sensitive hardware, but it provides fast, very accurate range information. The precision and accuracy come at a higher price.

Radar also fits into this category, but its use with robots tends to be limited to specialized environments due to cost. Automotive radar systems are starting to become more common in high-end cars, and this technology may eventually find its way into robotics.

LIDAR (light detection and ranging) takes a laser or radar sensor and scans it across a 2D or 3D area. It provides a very accurate 2D or 3D map, but the sensing system must move accurately, at greater expense. A rotating mirror often allows the transmitter and receiver to stay fixed. This mechanical complexity, along with the large amount of data processing, keeps the price of these systems very high. Smaller, cheaper versions are used in mobile robots like Aethon's autonomous TUG robots used in hospitals.
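
A 2D LIDAR typically delivers its map as a list of (angle, range) pairs per revolution, and turning those into Cartesian points is the first step of most mapping pipelines. A minimal sketch of that conversion:

```python
import math

def scan_to_points(scan):
    """Convert a 2D LIDAR scan to Cartesian points in the sensor frame.

    `scan` is a list of (angle_radians, range_meters) pairs, one per
    beam. Each polar reading becomes an (x, y) point; downstream code
    can then cluster points into obstacles or feed them to a mapper.
    """
    return [(r * math.cos(a), r * math.sin(a)) for a, r in scan]

# Example: three beams sweeping across the front of the robot.
points = scan_to_points([(-0.1, 2.0), (0.0, 1.8), (0.1, 2.1)])
```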

Image Sensors

LIDAR used to be the holy grail for robot designers because imaging was even more computationally complex and costs were high. That has all changed, and now image systems are small, cheap, and highly accurate. Sensors like the Microsoft Kinect based on Primesense technology (see “How Microsoft’s PrimeSense-based Kinect Really Works”) provide 3D range information in addition to color images.

Basic image sensing uses a single camera. The resolution, precision, depth of field, and other factors all come into play in object identification and tracking. The cameras used with robots range from the tiny ones found in smart phones to ones that incorporate a zoom lens system. One challenge with all video-based systems is lighting since ambient light is typically used.

A single camera can provide object tracking and even range information if the object or camera is moving, but these functions require significant computational power that grows as the frame rate, resolution, and precision of the system increase. A camera also provides a video stream that can be useful in general. Two cameras can provide stereo vision and make it easier to extract depth information. The tradeoff is the cost of a second camera, which used to be significant.
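
To give a sense of the computational load, even crude single-camera motion detection means differencing and thresholding every frame. A minimal sketch using OpenCV (the camera index and thresholds are arbitrary illustration values, not tuned settings):

```python
import cv2

cap = cv2.VideoCapture(0)          # first attached camera
_, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break                      # camera stream ended
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pixels that changed between consecutive frames hint at motion.
    diff = cv2.absdiff(gray, prev)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    moving_pixels = cv2.countNonZero(mask)
    if moving_pixels > 5000:       # arbitrary "something moved" threshold
        print("motion detected:", moving_pixels, "pixels changed")
    prev = gray
```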

Another way range information can be obtained is to use a single camera and a projected laser. There are a number of ways to extract depth information from the image. Sometimes a fixed pattern is projected, in which case the mechanical complexity of a LIDAR system goes away, resulting in a lower-cost solution.

Using two cameras provides stereo vision. Depth information can be obtained by comparing the two images. Of course, this needs to be done at the frame rate, so it is not a lightweight computational task. Still, it tends to be less complex and more accurate than single-camera depth detection.
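
The underlying geometry is triangulation: a feature seen by both cameras shifts horizontally between the two images, and that disparity maps directly to depth. A minimal sketch of the standard pinhole-stereo relation (projected-pattern systems rely on the same geometry, with the projector standing in for one camera):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a feature from its stereo disparity.

    disparity_px: horizontal shift of the feature between the left
                  and right images, in pixels
    focal_px:     camera focal length expressed in pixels
    baseline_m:   distance between the two camera centers, in meters

    Standard pinhole-stereo relation: Z = f * B / d. Nearby objects
    produce large disparities; depth resolution falls off with range.
    """
    if disparity_px <= 0:
        raise ValueError("feature must be matched with positive disparity")
    return focal_px * baseline_m / disparity_px

# Example: 600-pixel focal length, 10-cm baseline, 20-pixel disparity.
print(depth_from_disparity(20, 600, 0.10))  # 3.0 m
```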

Primesense made a major difference in the gaming and robotic markets when it provided the technology behind the Microsoft Kinect. The system consists of three main components, including two cameras. An infrared transmitter throws a pattern instead of a single dot, so no scanning is involved. An infrared camera detects the deformation of the pattern, which provides the depth information. A color camera provides a conventional image. An ASIC performs the depth analysis at 30 frames/s.
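
Once a depth frame is available, a safety check can be as simple as scanning it for the nearest return. A minimal sketch, assuming a hypothetical get_depth_frame() that returns a 2D array of distances in millimeters (the interfaces of actual Kinect drivers such as libfreenect differ):

```python
import numpy as np

STOP_DISTANCE_MM = 500  # halt if anything comes within half a meter

def too_close(get_depth_frame):
    """Check a depth frame for obstacles inside the stop distance.

    `get_depth_frame` is a hypothetical callable returning a 2D numpy
    array of per-pixel distances in millimeters, with 0 marking pixels
    where no depth could be measured. The nearest valid return decides
    whether the robot should stop.
    """
    depth = get_depth_frame()
    valid = depth[depth > 0]  # discard no-reading pixels
    if valid.size == 0:
        return False          # nothing measurable in view
    return int(valid.min()) < STOP_DISTANCE_MM
```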

Primesense technology has made a major impact on robot research, but the Kinect is optimized for the Xbox 360 and this affects how well it can perform with robots. It has been more effective on larger robots where its depth of field can be accommodated. Like infrared range-based solutions, it has issues with sunlight and lighting in general.

Other Sensors

The sensing systems already described are oriented toward obstacle detection. This is a primary issue for mobile and articulated robots, but it is only the tip of the iceberg when it comes to sensors. Many other sensors are available and can affect operation, interaction with people, and safety.

Sensors are available to detect various gases and liquids. Sound has already been mentioned, but that is another area where robots can gain additional insight into their surroundings. Combining all this environmental information and then acting upon it safely is a daunting chore, but one worth pursuing.

The goal of having robots that interact with people as well as people do is laudable, but sensor capabilities and computational support will dictate what is possible and practical.
