Vision Sensing Enables Safer Vehicles

Jan. 1, 2006
Vision sensing is at the top of the list of enabling technologies that will make future vehicles safer. Proponents envision six or more cameras looking at the road ahead, adjacent vehicles and lanes, and obstacles behind the vehicle, as well as at passengers and the driver. While these cameras and vision sensors appear on only a few high-end vehicles today, this article uncovers engineering efforts under way to bring the capability to mainstream automobiles and provide all the functionality that automakers are considering for vision-enabled safer vehicles.


As an enabling technology, vision sensing promises to make future vehicles safer. Developers are looking at six or more cameras for a variety of safety applications. Silicon imaging sensors, similar to those used in both high-performance cameras and low-cost consumer products such as cell phones and PDAs, have numerous potential applications in vehicles. As shown in Figure 1, applications range from adaptive cruise control to rear video and include smart airbags and high-beam dimming. Both charge-coupled devices (CCDs) and CMOS image sensors currently have applications in vehicles.

CCD technology uses semiconductor processing and design methodologies developed specifically for imaging applications and has been the choice for high-performance applications. However, CCD cameras have also been used in many consumer applications that require low cost, and they currently appear in vehicles for night vision and rearview applications. In contrast, CMOS imaging systems take advantage of the processing technology widely used for logic circuitry.

According to Paul Gallagher, director of technical marketing for Micron Technology's Imaging Group, “The most significant difference is right at the core. The process used to make a CCD does not have logic in it, so it requires additional ICs to drive it and to receive and format the signal for system usage.” CMOS is a logic process, so extensive integration can be achieved on the same chip as the photo plane, providing a single-chip camera. The support logic for a particular CMOS camera can be designed specifically for the camera and its end applications. This translates into lower cost, smaller size and higher reliability due to fewer chips and interconnections.

VISION SYSTEMS

Vehicle image-sensing applications fall into two categories: enhancing human vision and providing machine vision. Micron calls these applications scene viewing and scene understanding. In scene viewing, the output from a video sensor goes to a display and the driver makes decisions. For example, rearview mirror assist, lane-marking assist, and side view applications are scene-viewing applications.

Typically, in scene-understanding applications, no one sees the video — the sensor's output goes to a processor that identifies key features, makes decisions and feeds the requirements to control systems on the vehicle. These systems include occupant detection and position for airbag deployment, lane tracking, collision warning and avoidance, windshield wiper control, headlight dimming and other automated systems.

In many situations, an image that is good by human standards for understanding a scene is not good enough for a processor to make a decision. In scene-understanding applications, processing that makes it easier for the processor to extract information often degrades the image for human viewing. For example, in lane tracking, the lane markers need to be visible both up to a tunnel entrance and inside the tunnel at the same time. The human eye cannot do this, but the system can break the image into 25 separate regions and set the gain in each region based on the situation. In the case of a tunnel on a bright day, everything up to the tunnel could be overexposed to make the inside of the tunnel visible. The resulting view would look abnormal to a human viewer, but the processor can easily handle this type of data.
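
The region-by-region idea can be illustrated in a few lines of code. The sketch below is a minimal illustration, assuming a 5 × 5 grid (matching the 25-region example above) and a simple scale-to-target-mean gain rule; a real imager would apply per-region exposure and analog gain in hardware rather than scaling pixel values after the fact.

    import numpy as np

    def per_region_gain(frame, rows=5, cols=5, target=128.0):
        # Split a 2-D grayscale frame into a rows x cols grid and scale
        # each region toward a target mean level, so dark regions (such
        # as the inside of a tunnel) are boosted independently of bright
        # ones. Illustrative only; not how any specific sensor works.
        out = frame.astype(np.float32)
        h, w = frame.shape
        rh, cw = h // rows, w // cols
        for r in range(rows):
            for c in range(cols):
                region = out[r * rh:(r + 1) * rh, c * cw:(c + 1) * cw]
                mean = region.mean()
                if mean > 0:
                    region *= target / mean  # in-place gain on this region
        return np.clip(out, 0, 255).astype(np.uint8)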

Data sheet parameters reflect the different requirements in these areas. A scene-viewing sensor is a system-on-chip solution, so a significant amount of processing is performed on the chip to make displaying the data easier. Displays handle three colors at the same physical location, but the imager has specific pixels for red, green and blue. Processing the data at each pixel determines what red, green and blue values should be displayed, and the data is then reformatted for the display. The primary output from Micron's scene-viewing V125 is NTSC or PAL video that goes directly from the camera to the input port of the display. In contrast, the V111 provides a digital output for displays with digital inputs. The scene-understanding V022's output does not receive the same type of processing, since in most cases it will be monochrome. If color is required in the application, for example to find the red in a stop sign, the data has to be processed differently and is presented in a raw format. Table 1[1] summarizes some of the key differences between Micron sensors developed for scene-viewing and scene-understanding applications.
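
To make the per-pixel color step concrete, here is a deliberately minimal sketch of the interpolation a scene-viewing pipeline performs before reformatting for a display. It assumes an RGGB Bayer mosaic (the pattern is an assumption for illustration; actual sensors and pipelines vary) and collapses each 2 × 2 cell into one half-resolution RGB pixel, averaging the two green samples. Production demosaicing is far more sophisticated.

    import numpy as np

    def demosaic_rggb(raw):
        # raw: 2-D mosaic with even dimensions, assumed RGGB layout:
        #   R G    One output RGB pixel is produced per 2x2 cell.
        #   G B
        r = raw[0::2, 0::2]
        g = (raw[0::2, 1::2].astype(np.float32) + raw[1::2, 0::2]) / 2.0
        b = raw[1::2, 1::2]
        return np.dstack([r, g.astype(raw.dtype), b])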

CHALLENGES OF VISION SENSING

The biggest challenge for imaging sensors in vehicles is the uncontrolled lighting environment. The lighting ranges from high-beam headlights aimed directly at the sensor to a country road with no lighting at all, from direct sunlight to tunnel transitions, to fluorescent lighting in a parking garage, whose flicker can beat against the sensor's frame rate. Imagers either need to adapt to the lighting or have enough dynamic range to handle the variations.

In a specific application such as airbag deployment, the occupant must be observable both when the sun is shining through the window and at midnight without turning on the dome light or other interior lighting. Since silicon responds over a broader wavelength range, out to about 1000 nm versus roughly 650 nm to 750 nm for the human eye, the solution is to add illumination outside the visual range, enhancing the near-IR energy either with supplemental light sources or by designing existing lights, such as headlamps, with a near-IR source for night vision. In the V022 imaging sensor, Micron increased the near-IR sensitivity, the quantum efficiency, from around 10% to around 35% to 40% at 850 nm, so fewer LEDs are required for interior applications while obtaining the same response. For forward-looking applications, the imager can see farther. In contrast to these automotive requirements, non-automotive applications actually need reduced near-IR sensitivity to improve color performance, so the unique requirements of automotive dictate an application-specific solution. The auto-specific change provides increased performance or reduced cost.
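
Rough numbers show the scale of the LED savings: signal scales with quantum efficiency, so the number of near-IR emitters needed for a given response scales roughly inversely with QE. The QE figures below come from the text; the starting LED count is a hypothetical example.

    # QE values from the text; leds_old is a hypothetical starting count.
    qe_old, qe_new = 0.10, 0.375        # ~10% vs. ~35% to 40% at 850 nm
    leds_old = 12
    leds_new = leds_old * qe_old / qe_new
    print(round(leds_new, 1))           # -> 3.2, roughly a quarter as many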

In addition to lighting, the requirements vary across automotive applications. The MT9V022's 752 × 480 pixel array offers more resolution than the airbag deployment application requires, but its wider-than-4:3 format provides extra resolution in the horizontal direction. At the full 752 × 480 resolution, the sensor runs at up to 60 frames per second. The system can find the occupant within a small subwindow, perhaps 200 × 200 pixels, which can be moved around inside the 752 × 480 array to track the passenger as he moves. The smaller window can be operated at up to 200 fps; airbag deployment requires the faster operation because of the frequent update times.
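
A minimal sketch of the moving-subwindow idea follows, using the 200 × 200 window size mentioned above. The brightest-blob "detector" is a toy stand-in introduced purely for illustration; a real occupant-tracking system uses far more sophisticated detection.

    import numpy as np

    def track_roi(frame, prev_center, roi=200):
        # Read only a roi x roi window around the last known occupant
        # position, locate the brightest 5% of pixels (a toy stand-in for
        # a real detector), and recenter the window on their centroid.
        h, w = frame.shape              # e.g., 480 x 752 for the MT9V022
        cy, cx = prev_center
        top = min(max(cy - roi // 2, 0), h - roi)    # keep window in frame
        left = min(max(cx - roi // 2, 0), w - roi)
        window = frame[top:top + roi, left:left + roi]
        thresh = np.percentile(window, 95)
        ys, xs = np.nonzero(window >= thresh)
        return top + int(ys.mean()), left + int(xs.mean())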

In contrast, collision warning needs high resolution. A three-lane highway would require 1200 × 500 pixels to see all three lanes from a far-right or far-left position. The V022 has a stereovision mode, so the output from one sensor can be fed into another, and the interleaved data streams from both imagers are provided to the processor. For stereovision, the lenses converge on a point and the two units can provide depth perception. For foveal processing, the lenses are either parallel or divergent. The system can be designed so the fields of view overlap at a distance, such as 50 meters, with an overlap of 300 pixels. As a result, the two imagers produce a 1200 × 480 image in which the straight-ahead view has twice the resolution. By comparison, a single 1200 × 480 imager would significantly reduce the number of chips obtained from a given wafer diameter and increase the cost. Its lens would also be large, causing mounting problems in the application. As a result, the two-lens approach is about on par with the cost of a single large lens but easier to package in the vehicle. By designing automotive imagers that cover a range of applications, Gallagher said, the entire cost of ownership is reduced, including a lower number of replacement parts in inventory.
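
For the stereovision case, range follows from the standard pinhole relation Z = f·B/d, where f is the focal length in pixels, B is the baseline between the two lenses, and d is the disparity between matched pixels. The numbers below are illustrative assumptions tied to the 50-meter overlap distance mentioned above, not V022 specifications.

    def depth_from_disparity(disparity_px, focal_px, baseline_m):
        # Pinhole stereo relation: range Z = f * B / d.
        return focal_px * baseline_m / disparity_px

    # Assumed 800-pixel focal length and 0.30 m baseline: a 4.8-pixel
    # disparity then corresponds to the 50 m overlap distance cited above.
    print(depth_from_disparity(4.8, 800, 0.30))  # -> 50.0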

VISION-SENSING ANSWERS

With more than 28 million cameras expected to be installed annually on vehicles by 2011, according to Strategy Analytics[2], existing automotive suppliers and a host of imaging sensor suppliers are pursuing solutions to the sensor challenges. Examples of new products from existing automotive suppliers include AMI Semiconductor's AMIS-70700 image sensor and Melexis' MLX75006 CIF and MLX75007 PVGA automotive CMOS camera ICs.

The AMIS-70700 monochrome CMOS imager, shown in Figure 2, has a resolution of 750 × 400 pixels and a frame rate of 60 fps[3]. The unit has a global shutter mode, with a shutter efficiency greater than 99.5%, to capture fast-moving scenes without motion artifacts. The digital block includes a high-speed on-chip ADC. For high dynamic range, the company's LinLog technology enables a linear or programmable linear-logarithmic response for acquiring high-contrast images without image lag or smear.

Melexis' automotive CMOS camera ICs include a common intermediate format (CIF) for inside vision applications such as occupancy detection and a panoramic VGA (PVGA) resolution for outside vision applications such as lane-departure warning systems (LDWS). The camera-on-a-chip design is a fully integrated device with a plastic packaging and integrated lens options (Figure 3).

Designed for a temperature range of -40 °C to +105 °C and to pass automotive AEC-Q100 qualification, the units use a plastic overmold process, as shown in Figure 4[4]. The overmold protects the wirebonds. With the optical center at the mechanical center, the package has a cavity for direct die lens mounting and for seal coating.

FUTURE IMAGE SENSING

A number of non-traditional automotive suppliers, including several startups, are targeting improved automotive vision systems. These companies bring considerably different backgrounds and experiences to vision sensing. In some cases, the company has designed similar systems for military or aerospace applications. In many cases, the company has discussed vision-sensing requirements with potential automotive clients and developed a technology and a roadmap to address the shortcomings. The complexity of these advanced vision systems has led to the formation of several cooperative research projects.

One Silicon Valley company, Canesta Inc., has developed a technology called Electronic Perception Technology (EPT) that can resolve the three-dimensional features of a scene and provide ranging and recognition[5]. The design employs an invisible infrared (IR) light source, a special optical sensor chip module and embedded imaging software. The software runs within the sensor chip module to provide ranging and recognition from a single chip. Every pixel in the CMOS imaging sensor provides distance information that the chip interprets to deliver a 3-D image to the system. Using a time-of-flight method, measuring the time it takes light to travel to and from the object, the sensor determines the distance of the object at that location as well as providing the imaging data. The single camera can replace a stereo imaging system and lidar or radar sensors to reduce system cost.
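
The time-of-flight arithmetic itself is simple: range is half the distance light covers in the measured round-trip time. The sketch below shows the relation with an illustrative timing value, not a Canesta specification.

    C = 299_792_458.0  # speed of light, m/s

    def tof_range_m(round_trip_s):
        # Light travels to the object and back, so range = c * t / 2.
        return C * round_trip_s / 2.0

    # A 33.3 ns round trip corresponds to roughly 5 m.
    print(tof_range_m(33.3e-9))  # -> ~4.99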

SENSOR DATA FUSION

The Sensors and System Architecture for Vulnerable Road Users Protection (SAVE-U) project, funded by the European Commission (EC) between 2002 and 2005, investigated a high-performance sensor platform for the active protection of pedestrians. The recently concluded program combined camera and radar information in low-level and high-level fusion to obtain reliable information for preventing pedestrian-vehicle collisions. Figure 5 shows the sensor data fusion architecture[6]. Among the conclusions of the project were:

  • The performance of available sensors is not sufficient for non-reversible deployments such as windshield or other pedestrian airbags;
  • Further research is necessary to meet the EC's goal of reducing the number of road fatalities by 50% by 2010.

Several companies are investigating sensor fusion to effectively use the data from imaging sensors and range detection sensors to make timely decisions in advanced safety systems. “Just about everybody in the industry trying to do sensor fusion is doing what is called dual-mode tracking,” said Dan Preston, president and CEO of Medius, a start-up company with extensive experience in defense and aerospace applications[7]. “Dual-mode tracking is nothing more than brute force fusion and, in fact, it's not fusion. It is two sensors that mechanically yield a better answer.”

By gating two single-mode tracks through a dual mode and using a matrix to establish not only the current location but also the likelihood of a position at a future point in time, Medius applies a multiple-hypothesis tracking technique developed in the 1990s to the problems of advanced driver assistance systems (ADAS). This approach allows earlier decisions for avoiding or preparing for imminent collisions by using historical data with a Kalman filter and projecting the data forward to predict what is going to happen. These mathematical calculations allow decisions within the required reaction time of 2 ms to 10 ms.
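
As a minimal sketch of the forward projection described above, the code below performs one predict step of a constant-velocity Kalman filter. The state layout, noise model and numbers are illustrative assumptions, not Medius's actual tracker.

    import numpy as np

    def kalman_predict(x, P, dt, q=1.0):
        # One predict step with a constant-velocity motion model.
        # x = [position, velocity]; P is its covariance; q is an assumed
        # process noise level (discrete white-noise acceleration model).
        F = np.array([[1.0, dt],
                      [0.0, 1.0]])
        Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                          [dt**2 / 2, dt]])
        return F @ x, F @ P @ F.T + Q

    # Example: a target 20 m away closing at 10 m/s, projected 5 ms ahead.
    x1, P1 = kalman_predict(np.array([20.0, -10.0]), np.eye(2), dt=0.005)
    print(x1)  # -> [19.95 -10.]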

One of the benefits of imaging sensors in a regulated or legislated automotive system is the ability to provide a multifunction platform. In airbag deployment, a weight-based system determines only whether an occupant is out of position. With an imager, the system can identify the person and locate the person's head, and even the eyes, to automatically adjust the mirrors, pedals, seat and other controls to properly position the driver. In addition, the imagers could detect a drowsy driver and alert an inattentive one.

Regulated systems are not confined to a single region. By the end of this decade, Japan will require that the driver be able to see a one-meter-tall post placed one meter away anywhere around the vehicle. The same cameras could be used for blind-spot warning or rear vision assistance. In Europe, pedestrian collision safety is another area where detecting a person will be important to avoid improper deployment of pedestrian airbags and the associated cost of reinstating the system. The same cameras could perform lane tracking at higher speeds.

These additional functions make imaging a value-added sensing option. Carmakers can recoup the cost of the mandated system by providing a function that has immediate value to the car buyer. According to Micron's Gallagher, “There is a whole bunch of new regulatory requirements coming out potentially in the next four to five years that will be ideal for imagers.”

ABOUT THE AUTHOR

Randy Frank is president of Randy Frank & Associates Ltd., a technical marketing consulting firm based in Scottsdale, Ariz. He is an SAE and IEEE Fellow and has been involved in automotive electronics for more than 25 years. He can be reached at [email protected].

References

  1. Micron Technology, http://www.micron.com/products/imaging/applications/auto.html.

  2. Strategy Analytics' presentation at the Automotive Sensor Symposium at Sensors Expo, June 2005.

  3. AMI Semiconductor, http://www.amis.com/products/image_sensors/.

  4. Melexis, http://www.melexis.com.

  5. Canesta, http://www.canesta.com/.

  6. SAVE-U Project, http://www.save-u.org/; ADASE II, 3rd Concertation Meeting, Jan. 19-20, 2004, Brussels; Philippe Marchal (Faurecia).

  7. Medius, http://www.medius1.com.
