Robots are regularly used in production lines, often near people. They’re accurate and lightning-fast, performing repetitive tasks without regard to the rest of the environment. But these efficient constructive actions also can be accidentally destructive if someone moves into their area of operation.
Simple safety measures like physical barriers are used in environments where robots are stationary. Optical sensors, which are common, create virtual walls: cross one, and the robot stops, making these sensors automatic kill switches.
Yet robots are becoming more mobile and need to interact with people more regularly. They’re running around warehouses and are providing telepresence like Anybot’s QB and VGo’s VGo (see “Cooperation Leads To Smarter Robots”). Someday, autonomous robots and robotic swarms could become the norm. The trick lies in making them safe (see “Ethical Robots?”).
Swarms Bring More Robots Into Play
Autonomous robots don’t require regular user intervention, although they usually respond to parameters set by a user in addition to taking their surroundings into account. Robot swarms also communicate with robots within the group, usually to perform tasks that an individual robot alone cannot.
There’s a lot of robotic swarm research in academia as well as in industry. Robot swarms regularly fly at the University of Pennsylvania (UPenn) General Robotics, Automation, Sensing and Perception (GRASP) Lab. It isn’t unusual to see a dozen or more nano quadrotors flying in formation or building a structure (Fig. 1a, Fig 1b). Professor Vijay Kumar highlighted these cooperative swarms at a recent TED conference.
The compact quadrotors have their own microcontrollers and wireless communication, but each has limited autonomy and sensing. The precision flying is accomplished through external cameras and lighting plus a central computing system that controls the swarm.
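The architecture is worth sketching in code. Below is a minimal, illustrative model of that centralized approach: an external tracking system supplies each robot’s position, and a central computer computes a capped velocity command steering each robot toward its slot in the formation. All function names, gains, and limits here are hypothetical placeholders, not values from the GRASP Lab system.

```python
# Sketch of centralized swarm control: external cameras supply each
# robot's position; a central computer sends velocity setpoints.
# Gains and speed limits are illustrative, not real system values.

def velocity_command(position, target, gain=1.0, v_max=0.5):
    """Proportional controller toward a target with a speed cap."""
    vx = gain * (target[0] - position[0])
    vy = gain * (target[1] - position[1])
    speed = (vx**2 + vy**2) ** 0.5
    if speed > v_max:                      # saturate to a safe speed
        vx, vy = vx * v_max / speed, vy * v_max / speed
    return (vx, vy)

def step_swarm(positions, targets, dt=0.1):
    """One control cycle: update every robot in the swarm."""
    new_positions = []
    for pos, tgt in zip(positions, targets):
        vx, vy = velocity_command(pos, tgt)
        new_positions.append((pos[0] + vx * dt, pos[1] + vy * dt))
    return new_positions
```

The speed cap doubles as a crude safety measure: no matter how large the position error, the central computer never commands more than `v_max`.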
This configuration is just one approach to flying swarms. UPenn graduates Alex Kushleyev and Daniel Mellinger started KMel Robotics to expand the research in this area, including combining aerial and ground robots. Experimenting with small robots is important for swarm research because of the cost, control, and safety issues involved.
Quadrotor robots have become very popular because of their mobility and stability, as well as their cost. Parrot’s AR.Drone only costs $299 and can be controlled using an Android smart phone or an iPhone (see “Smart Phone Controls Low-Cost Quadrotor”). Its protective foam cowl helps prevent injury, but its four whirling propellers make it a robot to use carefully.
Parallax’s ELEV-8 has exposed propellers (see “Multicore Propeller ‘Flys’ Quadcopter”). The robot also can be flown with a conventional aerial remote control system. Like the AR.Drone, its intelligent flight control system provides stable flight.
These platforms can be more expensive, making them more likely to be used individually rather than in swarms. Aerial platforms can be more dangerous due to their high-speed movements and lift mechanisms. Swarm research can be performed using simulation, which has been an invaluable tool. But real-world operation is necessary, because simulations are often too clean.
Two ground-based robots, Rice University’s r-one and Harvard University’s Kilobot, also are contributing to swarm research (see “Tiny Swarm Goes To Harvard”). About the size of a DVD, the r-one has a range of communication and location options, from wireless to infrared solutions (Fig. 2a, Fig 2b). It’s designed for teaching robotics to individuals as well as for cooperative use so large collections of r-one robots can work together.
A hundred compact and inexpensive Kilobots can easily fit on a desktop, making swarm research practical without needing a football field for tests. The Kilobot uses an unusual form of locomotion based on three rigid feet. A pair of vibrating motors wiggles the robot so it can move in any direction. Its movement is good, but it isn’t as accurate as other robotic motive systems.
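The two-motor scheme can be modeled simply: run both vibration motors at equal power and the robot wiggles forward; drive one harder than the other and the robot turns toward the weaker side. The sketch below is an illustrative model only; the duty-cycle range is a common 8-bit convention, and the speed constants are made up, not calibrated Kilobot values.

```python
# Illustrative model of Kilobot-style vibration locomotion: two
# motors driven at duty cycles 0..255. Equal power moves the robot
# forward; an imbalance turns it. Constants are hypothetical.

def drive(left_duty, right_duty):
    """Map motor duties to approximate (forward_speed, turn_rate)."""
    left = max(0, min(255, left_duty)) / 255.0    # normalize to 0..1
    right = max(0, min(255, right_duty)) / 255.0
    forward = 0.01 * (left + right) / 2           # m/s, illustrative
    turn = 0.8 * (right - left)                   # rad/s, illustrative
    return forward, turn
```

Because motion comes from vibration against three rigid feet rather than wheel odometry, the real mapping from duty cycle to displacement is noisy, which is exactly the inaccuracy noted above.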
The lack of high accuracy can be a good thing during experiments since robots need to deal with sensors that provide a limited and sometimes inaccurate view of the world. Assembly-line robots typically ignore most of their environment, and designers and programmers make significant assumptions about that environment. Small robots make experimentation easier, and sometimes these platforms can be utilized in the field. But many target systems are much larger.
Large Autonomous Automobiles
Lockheed Martin’s Squad Mission Support System (SMSS) is a rather large robot that supports autonomous and semi-autonomous operation (Fig. 3). Based on a six-wheel all-terrain vehicle (ATV) with a heavy-duty turbo-diesel engine, it can carry half a ton to support a squad in the field.
The SMSS can handle chores like autonomous resupply and medical evacuation over rough terrains safely. Its range of sensors, including LIDAR (LIght Detection And Ranging), enable it to operate in close quarters with people and animals. Multiple SMSS vehicles can operate in convoys by playing “follow the leader,” and they can even be set up to follow an individual soldier.
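The “follow the leader” behavior can be sketched as each vehicle steering toward the one ahead while holding a standoff distance so the convoy never closes up dangerously. The function below is a minimal 2D illustration under assumed gains and distances; none of the numbers are actual SMSS parameters.

```python
import math

# Sketch of convoy "follow the leader": the follower steers toward
# the vehicle (or soldier) ahead but stops at a standoff distance.
# Speeds and distances are illustrative, not SMSS values.

def follow_step(follower, leader, standoff=5.0, v_max=2.0, dt=0.1):
    """Advance the follower one time step toward the leader."""
    dx, dy = leader[0] - follower[0], leader[1] - follower[1]
    dist = math.hypot(dx, dy)
    if dist <= standoff:                 # inside standoff: hold position
        return follower
    speed = min(v_max, dist - standoff)  # slow down as the gap closes
    return (follower[0] + dx / dist * speed * dt,
            follower[1] + dy / dist * speed * dt)
```

Chaining the same rule down a line of vehicles yields a convoy; pointing the lead vehicle at a tracked person yields the soldier-following mode.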
When it comes to safe operation, there are some obvious differences between large robots like the SMSS and tiny robots like the r-one. For example, when an r-one bumps into you, it’s annoying. But when an SMSS hits you, it can be deadly.
Designing the software to minimize accidents is very important. It means including features like logical governors to limit robot speeds when they’re around people. Also, multiple kill switches could be handy to halt large vehicles when people need to move around them.
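A logical governor can be as simple as clamping the commanded speed based on the distance to the nearest detected person. The sketch below shows the idea; the thresholds and speed limits are illustrative placeholders, not safety-rated values from any real vehicle.

```python
# Sketch of a "logical governor": software clamps commanded speed
# based on the distance to the nearest detected person. All
# thresholds and limits below are illustrative, not certified values.

def governed_speed(requested_mps, nearest_person_m):
    """Return the allowed speed given the closest detected person."""
    if nearest_person_m < 1.0:
        return 0.0                        # kill-switch zone: full stop
    if nearest_person_m < 5.0:
        return min(requested_mps, 0.5)    # creep speed near people
    return min(requested_mps, 5.0)        # normal operating limit
```

The inner zone acts as one of the multiple kill switches mentioned above: inside it, the governor overrides any requested speed with a full stop.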
Robotic safety and swarm research are becoming more important as robots move into close quarters with people. Self-driving cars pose many of the same design and interaction issues as vehicles like the SMSS. From 2004 to 2007, for instance, the DARPA Grand Challenge evolved from desert courses to metropolitan environments (see “Autonomous Vehicles Tackle The Urban Jungle”).
DARPA’s competition focused more on proofs of concept than on demonstrating how safe the technology is. Some vehicles met its challenges, though many others achieved less than optimal results. The lack of injuries was due to the vehicles’ isolation, much like that of assembly-line robots.
Moving robots closer to people remains the end goal of services like self-driving cars. Google’s experimental self-driving Toyota Prius has had a very good safety record, logging more than 150,000 miles without an incident. Of course, those miles were closely watched, so there were probably more sensors and eyeballs on the Prius than on conventional cars with human drivers.
Nevada still requires approval of self-driving cars, but keep an eye out for the red license plates there. Laws are changing so robots can begin to coexist with people. Don’t expect a flood of self-driving cars to enter Nevada, but their legality makes the long-term goal of fleets of self-driving cars more reachable.
The colorful license plate isn’t a panacea, but it will help. Drivers tend to give cars with “Student Driver” signs on them a wider berth because they know that those vehicles may act erratically. The same is true for the current crop of robots.
Still, robotic technologies are quickly being incorporated into readily available transportation products. In fact, these technologies often show up first as features in higher-priced vehicles. Lane-change notification, obstacle recognition, and similar advanced driver assistance systems (ADAS) are augmenting the human driver’s capabilities. They’re also the minimum requirements for a safe robotic car. ADAS support will likely be standard on all cars by the time self-driving cars become generally available to the public.
Autonomous Aircraft
The quadrotors mentioned earlier are limited in their performance and are often flown indoors. They typically need less area than a football field for most uses, and high winds are detrimental to useful operation.
At the other end of the spectrum are large unmanned aerial vehicles (UAVs) like the General Atomics Predator and Northrop Grumman’s Global Hawk (see “Unmanned Military Vehicles: Robots On The Rise”). These UAVs are operated as remote piloted vehicles (RPVs), often controlled from locations on the other side of the globe.
The Global Hawk can track areas over 40,000 square miles. It can fly for more than 24 hours and climb to altitudes over 60,000 feet. It also can cover over 11,000 nautical miles in a single flight. The Predator has been armed with missiles and is feared by many because of this deadly capability.
These and other UAVs are regularly used in military operational areas. The Federal Aviation Administration (FAA) has approved the use of small UAVs along the U.S.-Mexico border and in a number of research areas around the country.
Many companies would like to use UAVs for applications like power-line and pipeline inspection. Search and rescue could also use smaller UAVs, and much of the technology employed for military use has similar domestic applications. Smaller UAVs restricted to limited operational areas where they won’t interfere with larger aircraft are the most likely to be approved, since like assembly-line robots they rely on isolation for safety, though with larger operating environments.
Additionally, small UAVs are easier to implement than ground vehicles because of sensor issues. Airspace is easier for robots to scan than the ground because all other objects in the air can be considered obstacles. It’s also easy to recognize such objects at a distance and avoid them.
Sensing Safety
UAVs, unmanned ground vehicles (UGVs), and cars differ in their available degrees of freedom (DOFs). Cars and ATVs can turn and move forward and backward, but they can’t move along the Z-axis. Helicopters such as quadrotors can pivot, tilt, and move up and down. Airplanes are more limited than helicopters, but they have more DOFs than ground vehicles.
Keeping the number of DOFs low makes safety design easier, but any moving object tends to be a challenge when it comes to safety. Add in the complexity of robot design and programming in general, and one can see why robotics is a challenging area.
Increasing the DOFs for a robot adds to its complexity, but it also makes it more capable. Typically, designers increase their robot’s DOFs by adding arms and torsos. It’s an obvious solution for human-like robots, and it provides human-like capabilities such as picking up an object.
Willow Garage’s PR2 is a bit smaller than an adult, but at 450 lb, it could go on a diet. It also has a pair of arms that Bosch helped design to provide precise operation near people, but it can’t punch a hole in the wall. Such power could be detrimental to those nearby.
The PR2 is an experimental platform, and flexibility in programming is one of its design targets. For example, it took the company less than a week to get the PR2 to play pool since many of the necessary features, such as object recognition, general movement, and planning, already were available (Fig. 4). Programmers were able to concentrate on the aspects specific to the game.
The programmers also could largely set aside safety issues. The robot could bump into someone, but it wouldn’t do much damage. Accidents happen with people, but in general they tend to be non-fatal, and that’s the desire with robots as well.
Festo designs and sells a range of robotic arms and technology found on many production lines. Its Bionic Handling Assistant arm has a design based on an elephant’s trunk (Fig. 5). Counting its DOFs almost seems beside the point since every one of its segments can move, but the tally comes to 11 DOFs, with 13 actuators and 12 position sensors. A gripper at the end of the arm can even handle fruit.
The Bionic Handling Assistant is a pneumatic system that employs an active damping system with advanced feedback and feedforward support. It’s more functional and flexible than other robotic arms, allowing it to move into positions that would be difficult for competing technologies. Of course, this means the arm could potentially be more difficult to incorporate safely.
Festo used MathWorks’ MATLAB and Simulink to help design its arm and program its operation. Simulink PLC Coder was used to program the system.
HDT Global’s MK2 is a more conventional robotic arm with a modular design (Fig. 6). The cylinders that make up the arm are self-contained actuators that snap together. The connectors at each end of the cylinders link together to form a CAN and Ethernet network within the arm.
The actuators are strong, but they also incorporate a force feedback system. It probably isn’t a good idea to get close to a quickly moving arm, but it can detect an obstacle via feedback. A typical system has more than a dozen articulated joints.
Telepresence Robots
People are likely to interact with robots they recognize as robots in the form of toys and telepresence devices. Toys tend to be safe because they’re small. Telepresence robots are larger but have a limited number of DOFs.
Anybot’s QB and VGo’s VGo have a motive platform, balancing on two wheels. Their interaction interface comprises an LCD and audio output along with a camera and microphone to provide feedback to the remote control source. These interfaces can be Web browsers, so a tablet or PC can control the robot.
The robots tend to be lightweight, so running into a person is often more damaging to the robot than to the person. Likewise, the lack of sharp edges reduces the chance of injury while providing a sleek design.
Proximity sensors are often used for collision detection. They’re inexpensive but not necessarily as accurate as range finders like those based on lasers or radar. Capacitive proximity sensing may start to gain an advantage given the interest in touch interfaces, but its sensing range tends to be lower than infrared or ultrasonic sensors. Still, proximity sensors can detect objects before they come in contact with the robot.
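With a ring of proximity sensors, the layered response can be reduced to a small decision rule: the closest reading picks the action before any contact occurs. The sketch below is illustrative; the thresholds are assumptions, and real sensor ranges vary widely between capacitive, infrared, and ultrasonic devices.

```python
# Sketch of layered collision avoidance using a ring of proximity
# sensors: the nearest reading selects the response before contact.
# Threshold distances are illustrative assumptions.

def avoidance_action(ranges_m, stop_at=0.2, slow_at=0.6):
    """Map a list of proximity readings (meters) to an action."""
    nearest = min(ranges_m)
    if nearest < stop_at:
        return "stop"     # about to touch: halt immediately
    if nearest < slow_at:
        return "slow"     # something close: reduce speed
    return "go"           # clear: normal operation
```

Contact sensors like bumpers and force feedback then remain the last layer, triggered only if this pre-contact logic fails.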
Robots And Distance
Force feedback and other contact systems are the sensors of last resort when it comes to safety. As with people, it’s better to avoid contact if possible. Human beings primarily use sight, although sound sometimes comes into play. For example, electric cars are made noisier so people, especially the blind, can hear them.
People and animals alike regularly depend on mutual avoidance to prevent unwanted contact. We drive on one side of a street to avoid oncoming traffic. It works well if everyone follows the same rules. It also helps if both parties have similar sensing capabilities compared to their rate of movement.
One sensor that has proven a boon to robotics is Microsoft’s Kinect, based on technology from PrimeSense (see “How Microsoft’s Kinect Really Works”). Parallax’s Eddie and the Bilibot (see “Cooperation Leads To Smarter Robots”) are rolling robot platforms that incorporate the Kinect.
Microsoft’s Robotics Developer Studio 4 supports the Eddie. The Kinect isn’t the only sensor on the Eddie, though. Ultrasonic sensors ring the robot as well. The robot has an on-board control computer, but the laptop is extra. An embedded system also could serve as the main computer.
The control computer uses Parallax’s eight-core Propeller chip (see “Parallax Propeller”). The Propeller would be hard pressed to provide safety-related support alone, but it easily provides feedback and control for a host computer that would also take advantage of the Kinect.
The Kinect is an interesting example of a flexible system that can be employed on some robots. However, it isn’t applicable at this point to many of the systems already discussed, partly because of the way the Kinect works. It projects an array of infrared dots that its imager then detects. The hardware and firmware generate a 3D depth map based on how the dots are positioned and deformed. It doesn’t work well in lighting conditions where the infrared dots would be washed out.
The Kinect combines multiple sensors in one package. It also has a color video camera that’s aimed in the same direction as the infrared 3D sensing system. The output of the two sensors then can be combined so the output is a 3D depth map with colors associated with the map. This is how the Kinect is used to help detect faces and hands. Position information also comes into play since a head is assumed to be connected to a torso and hands are connected to arms.
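Combining the two sensors amounts to back-projecting each depth pixel into 3D and tagging it with the color at the same pixel. The sketch below assumes the depth and color images are already registered pixel-for-pixel and uses a standard pinhole camera model; the intrinsic parameters shown are made-up placeholders, not real Kinect calibration values.

```python
# Sketch of fusing a registered depth map with a color image into a
# colored point cloud via a pinhole camera model. The intrinsics
# (fx, fy, cx, cy) are hypothetical placeholders, and the two images
# are assumed to be pixel-aligned.

def depth_to_colored_points(depth, color, fx=580.0, fy=580.0,
                            cx=320.0, cy=240.0):
    """depth[v][u] in meters, color[v][u] = (r, g, b).
    Returns a list of (x, y, z, r, g, b), skipping invalid depth."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:                 # 0 marks "no reading" (washed out)
                continue
            x = (u - cx) * z / fx      # back-project through the pinhole
            y = (v - cy) * z / fy
            points.append((x, y, z) + tuple(color[v][u]))
    return points
```

Skipping zero-depth pixels matters in practice: wherever the infrared dots are washed out, the depth map has holes, and higher-level code (face or hand detection, for instance) works only with the surviving points.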
Video games based on the Kinect make many assumptions about the information coming from the Kinect. It provides great feedback and impressive game interaction, but most users tend to overestimate the system’s capability. It appears that the system is recognizing all the movements and intentions of the user when it’s really recognizing a limited subset of what a human sees and considers.
The original target for the Kinect was the Xbox 360, where the feedback is a television screen. But its use with robotics makes the potential feedback more physical. The Kinect is finding a home on a wide variety of robots like the Eddie and even as an addition to the PR2.
Bodymetrics is using PrimeSense technology to generate full 3D models of human bodies. The data can be used for clothes shopping. Imagine a robot sizing up a client and then grabbing the proper size jacket. That’s a bit further afield than a voice-activated smart phone.