Fig. 1

Plenty of people wandered over to the Gold Lot at the 2017 Consumer Electronics Show to get a ride in a self-driving car like Audi and NVidia's demonstration vehicle (Fig. 1). Fully autonomous vehicles are still 10 to 20 years away, but it was a fine day in Las Vegas, and lots of passengers were getting rides without any collisions.

Cars equipped with vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-pedestrian (V2P) communication are all part of a V2X environment that will become the norm in the coming years. It would be a boon to the automotive electronics industry, since every car, most stoplights, and almost anything else near a roadway will likely have wireless “doodads” attached, so each knows not only what is in close proximity but also where everything is headed with respect to everything else. All of this information will be combined through sensor fusion to provide drivers (including self-driving cars) with environmental context that a Mark One eyeball cannot, since the eyeball cannot look around corners and tends to focus in a single direction.
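As a rough illustration of the kind of state such a beacon broadcasts, here is a minimal sketch. This is not the actual SAE J2735 wire format or any vendor's API; the field names and the `closing_fast` helper are hypothetical:

```python
from dataclasses import dataclass

# Illustrative sketch (not a real V2X message format) of the state a
# roadside or in-vehicle beacon might broadcast so nearby receivers can
# reason about position *and* heading, not just proximity.
@dataclass
class V2XBeacon:
    sender_id: int      # temporary, rotating identifier
    kind: str           # "vehicle", "signal", "pedestrian", ...
    lat_deg: float
    lon_deg: float
    heading_deg: float  # direction of travel, 0 = north
    speed_mps: float

def closing_fast(a: V2XBeacon, b: V2XBeacon, threshold_deg: float = 150.0) -> bool:
    """Crude check: are two senders heading roughly toward each other?"""
    diff = abs(a.heading_deg - b.heading_deg) % 360
    diff = min(diff, 360 - diff)
    return diff > threshold_deg
```

A real system would fuse many of these messages with on-board sensors rather than compare headings pairwise, but the sketch shows why heading and speed belong in the broadcast alongside position.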

Fig. 2

Savari is one of the leaders in this arena. I took a ride in their chauffeured SUV (so I could take pictures) on roads around the convention center where stoplights and other items were equipped with the company's sensors (Fig. 2). This information can provide hints to driven and driverless cars alike to optimize movement through areas like intersections or work zones. For example, it can indicate that a slower speed would allow a car to roll through an intersection where a stoplight is red but changing soon, or that speeding up would be insufficient to beat a red light.
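A minimal sketch of that kind of speed hint, assuming the intersection broadcasts the car's distance to the stop line and the time until the signal turns green. This is an illustrative calculation, not Savari's algorithm; the function name and thresholds are invented:

```python
# Hypothetical green-light speed advisory: suggest a speed (km/h) at which
# the car arrives at the intersection just as, or after, the light turns
# green, without exceeding the limit or crawling impractically slowly.
def advisory_speed_kmh(distance_m: float, seconds_to_green: float,
                       speed_limit_kmh: float = 50.0,
                       min_speed_kmh: float = 15.0):
    if seconds_to_green <= 0:
        return speed_limit_kmh                 # already green: proceed at the limit
    # Speed at which the car arrives exactly when the light turns green;
    # anything slower arrives later, when the light is green.
    arrive_at_green_kmh = (distance_m / seconds_to_green) * 3.6
    suggested = min(arrive_at_green_kmh, speed_limit_kmh)
    if suggested < min_speed_kmh:
        return None                            # too slow to be practical; plan to stop
    return suggested

# 200 m from a light that turns green in 20 s: 36 km/h keeps the car rolling.
print(advisory_speed_kmh(200, 20))
```

The same arithmetic run in reverse answers the second hint in the text: if the speed needed to beat a red exceeds the limit, the advisory is to brake rather than accelerate.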

Savari ran their SUV through a number of real-world scenarios, from driving across the path of another suitably equipped car to work zones where a person from Savari played the part of a worker equipped with a V2X transceiver (Fig. 3). The display in our car showed both scenarios, with indications of other objects and their likely movements relative to our SUV.

I wrote a little about Ford’s new Fusion Hybrid research vehicle before I left, and I finally got a good look at it up close. Most of the electronics are packed into the trunk (Fig. 4). At this point, the CPU/GPU complex includes six of the latest Intel Core i7 processors and a pair of NVidia GPUs to provide sensor fusion and data analysis, using the deep learning/deep neural networks (DNNs) that have been key to many self-driving advancements.

I talked with Ford’s Dr. Bryan Goodman about the company's research, much of which utilizes open-source platforms like Caffe and TensorFlow. All of the compute power of both GPUs was needed to support the initial implementation of the software; optimization now allows the same functionality to run on only part of a single GPU, providing significant headroom for new software development. This highlights both the rapidly changing infrastructure and the advantage of software-based solutions, which can deliver improvements without changing the underlying hardware.

Fig. 3

All of this is still a work in progress, and these platforms are likely a far cry from what will eventually be on the road in a non-research capacity, but they are a significant improvement over the systems that were cobbled together in previous years.

By the way, hidden behind the left bumper is one of the radar arrays. This is possible because the plastic bumper does not obstruct radar, allowing the array to be hidden (in contrast to the exposed image sensors mounted on the roof). It also means the radar is protected from the environment, whereas the image sensors need some way to stay clear of debris.

I saved the groundbreaking announcement for last (in part because I don’t yet have any flashy photos or figures). Leddartech was showing off a solid-state LiDAR system with a resolution of 512 × 64 and a field of view of 120 × 20 deg. Most LiDAR systems have moving components to provide a wider field of view, and they are often limited to a scan line rather than an array; more complex mechanics are needed to cover a rectangular area. The new system is expected to detect pedestrians at ranges up to 200 meters and vehicles as far away as 300 meters, with the accuracy expected of a LiDAR system. The new technology initially targets Tier-1 suppliers and OEMs in the automotive space, but it has wide application in areas ranging from drones to robotics.
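A quick back-of-envelope check on what those quoted numbers imply per pixel; this is just geometry applied to the published specs, not anything Leddartech stated about its detection processing:

```python
import math

# 512 x 64 pixels spread across a 120 x 20 deg field of view:
h_res_deg = 120 / 512          # ~0.23 deg per horizontal pixel
v_res_deg = 20 / 64            # ~0.31 deg per vertical pixel

# Lateral spacing between adjacent beams at the 200-m pedestrian range:
spacing_at_200m_m = 2 * 200 * math.tan(math.radians(h_res_deg / 2))

print(f"{h_res_deg:.3f} deg x {v_res_deg:.3f} deg per pixel")
print(f"~{spacing_at_200m_m:.2f} m between adjacent beams at 200 m")
```

At 200 m, adjacent beams land roughly 0.8 m apart, so a pedestrian spans on the order of a single horizontal pixel at that range, which gives a feel for why 200 m is quoted as the pedestrian limit while larger vehicles remain detectable out to 300 m.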

Fig. 4

One key component of the system comes from Trilumina: a 2D laser illuminator that provides the near-infrared light beams for the Leddartech LiDAR system. Linear versions are available, but the arrayed versions are what really make the system work. It is also much harder to construct than one might guess, since alignment and focus are critical for accurate placement of the beams, and hence for the information obtained from the sensor system.

There is a lot more automotive technology to cover, but that's it for now. I need to get back to the south hall to check out the AR and VR technology.