
Vision Systems Give Robots a Glimpse at their Work

July 20, 2015
Camera use is exploding in industrial robots, improving both efficiency and safety

Machine vision has been used in industrial automation and robotics ever since cameras were first attached to computers. Better cameras and faster computational platforms now provide image analysis at rates often faster than a person can react. Lower costs and more compact sensors let robots carry, in many cases, dozens of cameras that provide more input about their environment. This is leading to more autonomous systems, as well as machines that are safer to work around.

1. Sawyer (left) and Baxter (right) are a pair of robots from Rethink Robotics that use embedded vision in a number of areas including each hand.

Robotics and industrial automation have been partners for decades, with robots doing the heavy lifting and repetitive work on assembly lines found everywhere from automotive plants to bakeries. At first they were simple mechanical devices with a limited set of movements triggered by mechanical switches. Mechanics and hydraulics still play a major part in many assembly lines, often eliminating the human element for much of the process, but advances in robotics, especially in sensing and process control, are changing this. Vision systems are one piece of the puzzle that is having a major impact on the capabilities of industrial robots.

Rethink Robotics’ one-armed Sawyer and two-armed Baxter (Fig. 1) are examples of more human-friendly robots designed for more flexible assembly lines (see "These Robots Are No Danger, Will Robinson"). They are designed to be easy to program (see "Programming the Baxter Robotic Platform at Rethink Robotics"), and their low-impact arms and grippers, along with their advanced imaging systems, make them safe to work around.

The tablet-like eyes provide visual feedback, but there is also a camera built into the tablet itself. It is only one of many: Others are hidden in each hand, along with range finders that allow the software to see what the gripper will come in contact with. There are two cameras in the chest as well, designed to view objects moving by on an assembly line.

2. The iRobot 510 PackBot has a number of built-in cameras, along with mounting points for more specialized cameras and sensors.

An example of a mobile robot with numerous cameras is iRobot’s PackBot (Fig. 2). This robot is actually a teleoperated device designed for tasks such as hazmat operations, public safety, and bomb disposal. The robot has multiple cameras, sensors, and range finders. Some are located in the base, providing an operator with a fixed orientation, while others are situated on the arms; these often are used to watch a gripper in action. Other cameras are found on attachments, which typically are specialized devices such as high-resolution or thermal cameras, and they may be combined with range finders.

Can you find the cameras in ABB’s IRB 360 (Fig. 3)? The IRB 360 is at home on an assembly line, but it is designed for faster, more repetitive operations than a platform like Rethink Robotics’ Baxter and Sawyer. The IRB 360 can handle up to 8 kg, and watching it in action can be dizzying.

As for the cameras, they are built into the housing that mounts above the assembly line. They provide a downward-looking view that includes the IRB 360’s arms. Additional cameras may be included in an installation depending on the operations required.

Not all robots will be picking up objects. Many perform visual inspection of everything from circuit boards and chips to Ford F-150s coming off the assembly line (see "Ford Rouge Factory F-150 Assembly Line Tour"). Visual inspection by robots is usually faster and more accurate than what a person is capable of, and robots can also employ a wider range of visual sensing, from infrared to ultraviolet.

Sensor Variety

Image sensors that deliver 1080p HD, and now 4K, are becoming more common and less expensive because of their use in other applications, ranging from camcorders to automotive systems. Higher resolution is both an advantage and a disadvantage. The obvious advantage is a more detailed, more accurate image. The disadvantage is the larger amount of data that must be processed. Luckily, processing power has been increasing, and multicore solutions work well with this type of data. Hardware specifically for video analysis is also more common. Even specialized techniques, such as stitching streams from four cameras into a 360-deg. surround view for automotive use, can be useful in making a robot safer.
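
To put the data-rate problem in perspective, the back-of-the-envelope calculation below compares uncompressed 1080p and 4K color streams. It is only a sketch: the 24-bit-per-pixel format and 60-frame/s rate are assumptions, and real cameras often deliver Bayer or compressed data.

```cpp
#include <cstdint>
#include <iostream>

// Back-of-the-envelope bandwidth estimate for an uncompressed color stream.
// Assumes 3 bytes per pixel (24-bit color) and 60 frames/s; real sensors may
// deliver Bayer data, other bit depths, or compressed streams.
static double bytes_per_second(uint32_t width, uint32_t height,
                               double fps, double bytes_per_pixel = 3.0) {
    return static_cast<double>(width) * height * bytes_per_pixel * fps;
}

int main() {
    const double gb = 1e9;
    std::cout << "1080p at 60 Hz: " << bytes_per_second(1920, 1080, 60) / gb
              << " GB/s\n";   // roughly 0.37 GB/s
    std::cout << "4K at 60 Hz:    " << bytes_per_second(3840, 2160, 60) / gb
              << " GB/s\n";   // four times the pixels, four times the data
    return 0;
}
```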

3. ABB’s IRB 360 can move 8 kg and its tool flange can handle a range of large grippers.

Another technology that has been gaining mindshare is sensors that deliver 3D range information. Microsoft’s original Kinect used one approach (see "How Microsoft’s PrimeSense-based Kinect Really Works"), but the Kinect 2 and a number of other solutions use a time-of-flight (ToF) approach (see "Time-Of-Flight 3D Coming To A Device Near You"). These types of sensors often are combined with a camera that captures an image in the visible color space, so each pixel has both a color and a depth component.
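
The sketch below illustrates the idea of a combined color-plus-depth pixel using OpenCV data structures. It is not tied to any particular sensor: the 640 × 480 resolution, millimeter depth units, and the assumption that the two frames are already registered to each other are choices made purely for illustration.

```cpp
#include <cstdint>
#include <iostream>
#include <opencv2/core.hpp>

// Sketch: pairing a color frame with an aligned depth map so each pixel has
// both appearance and range. The 640x480 size and millimeter depth units are
// assumptions; a real ToF/color pair must first be registered into the same
// coordinate frame.
int main() {
    cv::Mat color(480, 640, CV_8UC3, cv::Scalar(0, 0, 0));   // BGR image
    cv::Mat depth(480, 640, CV_16UC1, cv::Scalar(0));        // depth in mm

    // In a real system these come from the camera driver; here we poke in
    // one synthetic sample at the image center.
    color.at<cv::Vec3b>(240, 320) = cv::Vec3b(30, 90, 200);
    depth.at<uint16_t>(240, 320) = 1250;                      // 1.25 m

    // Query a single "RGB-D" pixel: color plus depth at the same location.
    cv::Vec3b bgr = color.at<cv::Vec3b>(240, 320);
    uint16_t range_mm = depth.at<uint16_t>(240, 320);
    std::cout << "pixel (320,240): B=" << int(bgr[0]) << " G=" << int(bgr[1])
              << " R=" << int(bgr[2]) << ", depth=" << range_mm << " mm\n";
    return 0;
}
```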

Full-frame depth imaging is not the only use of ToF technology. LeddarTech’s IS16 (Fig. 4) directs 16 beams across a 45-deg. span, delivering an angular position and a distance for each detection. The sensor works with a range of solid and liquid targets up to 50 m away at 50 Hz, and it is immune to interference from ambient light. Its IP67 weather-resistant enclosure is designed for harsh environments, and it provides USB and serial interfaces. A quick mode allows a detection zone to be defined with near and far limits.
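
A sketch of how such segmented range data might be used is shown below: each (segment, distance) reading is converted into a rough 2D position in front of the sensor. The evenly spaced beam layout across the 45-deg. field of view is an assumption for illustration, not the IS16’s documented geometry, which should be taken from the vendor’s datasheet.

```cpp
#include <cmath>
#include <cstdio>

// Sketch: converting a (segment index, distance) reading from a multi-segment
// flash lidar into a rough 2D position. The 16 beams are assumed to be spread
// evenly across a 45-deg. field of view centered on the sensor axis.
struct Point2D { double x, y; };

Point2D segment_to_xy(int segment, double distance_m,
                      int num_segments = 16, double fov_deg = 45.0) {
    const double kPi = 3.14159265358979323846;
    double step = fov_deg / num_segments;                        // deg per beam
    double angle_deg = -fov_deg / 2.0 + step * (segment + 0.5);  // beam center
    double angle_rad = angle_deg * kPi / 180.0;
    return { distance_m * std::sin(angle_rad),    // lateral offset (m)
             distance_m * std::cos(angle_rad) };  // forward range (m)
}

int main() {
    Point2D p = segment_to_xy(3, 12.5);  // segment 3, object at 12.5 m
    std::printf("x = %.2f m, y = %.2f m\n", p.x, p.y);
    return 0;
}
```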

Other sensor technologies include thermal imaging and radar. Flir is well known for its wide range of consumer and industrial thermal imaging solutions. It also has OEM products. Its Quark 2 is an uncooled, longwave, thermal imaging camera with a 640 × 512-pixel resolution at 60 Hz. It is compact enough to be used in mobile applications, including quadcopters, where the Quark 2’s high shock and vibration tolerance are important features.

Rohm Group’s Lapis Semiconductor has a medium-resolution IR image sensor. The ML8540 has a 2,256-pixel resolution. It operates at 6 Hz and draws only 5 mA at 5 V. Its operating range is -30 to +85°C.

Ultraviolet sensors are less common. They often are found in EUV (extreme ultraviolet) applications like semiconductor lithography. IMEC has a number of solutions in this space, as well as numerous image sensors for visible spectrum light.


Integrated Solutions


Imaging hardware does not operate in isolation. Imaging software is needed to analyze the data stream. Filters and analysis libraries are just the starting point. 3D image processing requires support like the open source Point Cloud Library (PCL) from Open Perception. PCL can work with both 2D and 3D images. The cross-platform system has a BSD license that allows commercial use without the need to expose the application’s source code.
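
As a flavor of what working with PCL looks like, the short sketch below downsamples a point cloud with the library’s voxel-grid filter, a common first step before segmentation or object recognition. The synthetic input cloud and the 1-cm leaf size are arbitrary choices for the example.

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>
#include <iostream>

// Minimal PCL sketch: downsample a point cloud with a voxel-grid filter.
// The input cloud here is synthetic; a real application would load data
// from a depth sensor or a PCD file.
int main() {
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    for (float x = 0.f; x < 1.f; x += 0.001f)
        cloud->push_back(pcl::PointXYZ(x, 0.f, 0.f));   // dense synthetic line

    pcl::VoxelGrid<pcl::PointXYZ> grid;
    grid.setInputCloud(cloud);
    grid.setLeafSize(0.01f, 0.01f, 0.01f);              // 1-cm voxels

    pcl::PointCloud<pcl::PointXYZ> filtered;
    grid.filter(filtered);

    std::cout << cloud->size() << " points in, "
              << filtered.size() << " points out\n";
    return 0;
}
```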

4. Time-of-flight sensors like LeddarTech’s IS16 provide depth information up to 50 m across 16 segments spanning a 45-deg. view.

Applications such as surface and object inspection are often more focused, allowing more complete solutions to be created. One example is National Instruments’ (NI) Vision Builder for Automated Inspection (Fig. 5). It works with NI frame grabbers, the NI Compact Vision System, the NI Embedded Vision System, and NI Smart Cameras, as well as USB3 Vision, GigE Vision, IEEE 1394, and USB DirectShow cameras.

It includes machine vision tools like geometric matching, optical character recognition (OCR), and particle analysis, which allow the software to locate, count, measure, identify, and classify objects. Applications can also be integrated with programmable logic controllers (PLCs) and other automation devices. The system allows quick and easy construction of complex pass/fail decisions so applications can provide system control messages.
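
To give a sense of what a particle-analysis-style pass/fail check involves, the sketch below expresses one with OpenCV rather than NI’s tools: threshold the image, count the connected blobs, and compare against the expected part count. The synthetic test image and the expected count of three parts are assumptions for illustration.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

// Hedged sketch of a particle-analysis-style inspection: binarize the image,
// count connected blobs, and emit a pass/fail decision. The "parts" here are
// synthetic circles drawn into an empty frame.
int main() {
    cv::Mat image(200, 200, CV_8UC1, cv::Scalar(0));
    cv::circle(image, {50, 50},  15, cv::Scalar(255), cv::FILLED);   // fake parts
    cv::circle(image, {120, 80}, 15, cv::Scalar(255), cv::FILLED);
    cv::circle(image, {80, 150}, 15, cv::Scalar(255), cv::FILLED);

    cv::Mat binary, labels;
    cv::threshold(image, binary, 128, 255, cv::THRESH_BINARY);
    int blobs = cv::connectedComponents(binary, labels) - 1;  // minus background

    const int expected = 3;
    std::cout << "found " << blobs << " parts: "
              << (blobs == expected ? "PASS" : "FAIL") << "\n";
    return 0;
}
```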

NI has other vision system support, but solutions like Vision Builder for Automated Inspection are designed for users who want to concentrate on the application rather than customizing the vision system. They are designed to work with off-the-shelf cameras and to integrate with existing automation systems.

Hardware Analysis

Software image analysis systems take advantage of the multicore computing resources that are currently available. Tools like OpenCL and NVidia’s CUDA allow applications to tap multicore GPUs. Altera has opened FPGAs to OpenCL, as well (see "OpenCL FPGA SDK Arrives").
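
Applications do not necessarily need hand-written kernels to benefit. For example, OpenCV’s transparent API, sketched below, routes cv::UMat operations through OpenCL when a device is available and falls back to the CPU otherwise; the frame size and blur parameters here are arbitrary.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

// Sketch: using OpenCV's transparent API (cv::UMat) so standard image
// operations can be dispatched to an OpenCL device when one is present.
int main() {
    std::cout << "OpenCL available: "
              << (cv::ocl::haveOpenCL() ? "yes" : "no") << "\n";

    cv::UMat frame(1080, 1920, CV_8UC3, cv::Scalar(0, 0, 0));
    cv::UMat blurred;
    cv::GaussianBlur(frame, blurred, cv::Size(9, 9), 2.0);  // may run on the GPU

    std::cout << "blurred a " << blurred.cols << "x" << blurred.rows
              << " frame\n";
    return 0;
}
```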

Still, hardware that targets image processing can provide significant speedups, making many applications practical. One example is the use of neural nets in hardware. These are designed in a fashion similar to neurons in the brain. A net can be implemented in software, but its inherent parallelism is exploited more efficiently when implemented in hardware.

5. National Instruments’ Vision Builder for Automated Inspection software works with NI’s Smart Cameras.

Cognimem Technologies’ PM1K is a hardware implementation of neural-net support (see "Neural Net Chip Enables Recognition For Micros"). It supports pattern recognition, allowing the system to be used for a range of chores including object recognition, anomaly detection, target tracking, and template matching. The chip can be applied to any type of pattern recognition, including images. An interesting aspect of the design is that PM1K chips can also be used in parallel, allowing more patterns to be stored.
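
Conceptually, this style of classifier behaves much like a nearest-neighbor matcher over stored prototype vectors. The sketch below is a software analogue using OpenCV’s k-NN implementation, not the PM1K’s own interface; the 4-element feature vectors and the two example classes are made up for illustration.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>
#include <iostream>

// Software analogue (not the chip's API) of hardware pattern matching:
// a nearest-neighbor classifier that compares an input vector against
// stored prototype patterns.
int main() {
    // Two stored prototypes per class, 4 features each.
    float train_data[4][4] = {
        {0.1f, 0.2f, 0.1f, 0.0f},   // class 0
        {0.0f, 0.1f, 0.2f, 0.1f},   // class 0
        {0.9f, 0.8f, 0.9f, 1.0f},   // class 1
        {1.0f, 0.9f, 0.8f, 0.9f},   // class 1
    };
    float labels[4] = {0.f, 0.f, 1.f, 1.f};

    cv::Mat samples(4, 4, CV_32F, train_data);
    cv::Mat responses(4, 1, CV_32F, labels);

    cv::Ptr<cv::ml::KNearest> knn = cv::ml::KNearest::create();
    knn->train(samples, cv::ml::ROW_SAMPLE, responses);

    // Classify a new pattern by finding its nearest stored prototype.
    float query_data[4] = {0.85f, 0.9f, 0.95f, 0.9f};
    cv::Mat query(1, 4, CV_32F, query_data), result;
    knn->findNearest(query, 1, result);

    std::cout << "matched class " << result.at<float>(0, 0) << "\n";
    return 0;
}
```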

Automotive advanced driver assistance systems (ADAS) already include parallel processing in the system-on-chip (SoC) solutions targeting new cars. Freescale’s S32V234 ADAS SoC uses Cognivue’s APEX-2 Image Cognition Processor (ICP) (see "ADAS Micro Handles Multiple Cameras"). The ICP is a SIMD parallel-processing array that incorporates specialized DMA and sequencing units, along with hardware accelerators, to handle streaming video in real time.

Real-time image processing is still a relatively new technology for industrial applications, but new hardware, sensors, and software have moved it from the research lab to the production floor.


About the Author

William G. Wong | Senior Content Director - Electronic Design and Microwaves & RF

I am Editor of Electronic Design, focusing on embedded, software, and systems. As Senior Content Director, I also manage Microwaves & RF and work with a great team of editors to provide engineers, programmers, developers, and technical managers with interesting and useful articles and videos on a regular basis. Check out our free newsletters to see the latest content.

You can send press releases for new products for possible coverage on the website. I am also interested in receiving contributed articles for publishing on our website. Use our template and send it to me along with a signed release form.

Check out my blog, AltEmbedded, on Electronic Design, as well as my latest articles on this site, listed below.


I earned a Bachelor of Electrical Engineering at the Georgia Institute of Technology and a Master’s in Computer Science from Rutgers University. I still do a bit of programming using everything from C and C++ to Rust and Ada/SPARK, along with some PHP programming for Drupal websites. I have posted a few Drupal modules.

I still like to get hands-on with software and electronic hardware. Some of this can be found in our Kit Close-Up video series. You can also see me in many of our TechXchange Talk videos. I am interested in a range of projects, from robotics to artificial intelligence.
