Deep-Learning-Based Perception Algorithms Tuned to L2 ADAS
Check out more of our CES 2023 coverage.
What you'll learn:
- How SVNet enables external vehicle perception.
- The four types of SVNet vision perception software.
- Details of TI's TDA4VM processor family.
Like human eyes, cameras capture the resolution and vividness of a scene in a way that other sensors, such as radar, ultrasonics, or lidar, can't match. South Korea's StradVision, a company specializing in deep-learning-based vision perception software built on camera sensors, announced that its SVNet software now enables TI's TDA4 processors to be used for Level 2 (L2) advanced driver-assistance systems (ADAS) and automated driving.
According to StradVision, SVNet is the first deep-learning-based object-detection network to run with full-featured video and vision acceleration across Texas Instruments' TDA4VM processor family. SVNet has already been ported to more than 18 platforms for hardware-optimized performance, running fully on commercially available platforms as well as those still in development.
What is SVNet?
SVNet software provides external vehicle perception, detecting and recognizing objects such as other vehicles, lanes, pedestrians, animals, traffic signs, and traffic lights, even in poor lighting or harsh weather conditions.
Applications that can be implemented with a vision system alone include forward-, rear-, and side-mounted cameras for pedestrian detection, traffic-sign recognition, blind-spot monitoring, and lane detection.
SVNet's implementation for the TDA4 processor is said to combine high performance with low power consumption and a reduced bill of materials. That flexibility is important to today's ADAS market and can enable wider mass-market production of L2 systems among automotive OEMs.
"Enabling more cars on the road with advanced driver-assistance capabilities can lead to greater driver comfort and improved road safety. The hardware-optimized SVNet software makes it possible for automotive designers to leverage our automotive system-on-chip products to enable surround-view vision, helping them improve the driver experience and road safety," said Aish Dubey, general manager of automotive processors at TI.
Volume production of automobiles with fully autonomous control is probably still years away, although today's trials demonstrate that much of the essential technology for self-driving cars already exists.
Junhwan Kim, CEO at StradVision, said, "At StradVision, our goals have always been ambitious, and the next step in our journey is providing a vision solution for OEM mass production that meets key performance requirements for L2 and the next level."
SVNet is a lightweight solution built on compact deep-neural-network (DNN) algorithms that, according to StradVision, minimize the computation required per frame, which in turn reduces the memory and power consumption required.
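StradVision doesn't disclose SVNet's network design, but one common way compact vision DNNs cut per-frame computation is with depthwise-separable convolutions. The PyTorch sketch below illustrates only that general idea; the block structure, channel counts, and names are illustrative assumptions, not SVNet's actual layers.

```python
# Illustrative only: a depthwise-separable convolution block, a common building
# block for compact vision DNNs that reduce per-frame computation. SVNet's
# actual layers are not public; sizes and names here are assumptions.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # Depthwise step: one 3x3 filter per input channel (spatial filtering only).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise step: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 256)           # placeholder per-frame feature map
    block = DepthwiseSeparableConv(64, 128)
    dense = nn.Conv2d(64, 128, 3, padding=1)   # standard convolution for comparison
    count = lambda m: sum(p.numel() for p in m.parameters())
    print(count(block), "vs", count(dense))    # far fewer weights, hence less compute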
TDA4VM Processors
TI's TDA4 processor family offers an entry camera solution for high-volume L2 ADAS applications. The processor is purpose-built for ADAS and autonomous vehicles and builds on market knowledge TI has accumulated over a decade in the ADAS processor business.
The TDA4VM processor family is based on the Jacinto 7 architecture. The combination of high-performance compute, a deep-learning engine, and dedicated accelerators for signal and image processing in a functional-safety-compliant targeted architecture also makes the TDA4VM devices a good fit for industrial applications.
The TDA4VM provides high-performance computing for both traditional and deep-learning algorithms at a leading power/performance ratio with a high level of system integration. This enables scalability and lower costs for advanced automotive platforms supporting multiple sensor modalities in centralized electronic control units (ECUs) or standalone sensors.
Key cores include a digital-signal-processing (DSP) subsystem with scalar and vector cores, dedicated accelerators for deep-learning and traditional algorithms, Arm and GPU processors for general computing, an integrated imaging subsystem (ISP), a video codec, an Ethernet hub, and an isolated MCU island. All are protected by automotive-grade safety and security hardware accelerators.
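TI supports offloading deep-learning networks to the TDA4's accelerator through its TI Deep Learning (TIDL) software and an edge-AI fork of ONNX Runtime. The snippet below is only a rough sketch of that flow, not StradVision's or TI's code; the provider name, option key, and model path are assumptions to be checked against TI's edgeai-tidl-tools documentation, and the script falls back to the CPU so it stays runnable elsewhere.

```python
# Rough sketch (not an official TI example): dispatching an ONNX detection
# model to the TDA4VM's deep-learning accelerator via ONNX Runtime.
# "TIDLExecutionProvider" and its option key are assumptions based on TI's
# edgeai-tidl-tools fork of ONNX Runtime; verify against TI's documentation.
import numpy as np
import onnxruntime as ort

MODEL_PATH = "detector.onnx"   # hypothetical path to a compiled detection model

providers = ["CPUExecutionProvider"]
provider_options = [{}]
if "TIDLExecutionProvider" in ort.get_available_providers():   # assumed name (TI fork)
    providers.insert(0, "TIDLExecutionProvider")
    provider_options.insert(0, {"artifacts_folder": "./tidl_artifacts"})  # assumed key

session = ort.InferenceSession(MODEL_PATH, providers=providers,
                               provider_options=provider_options)

frame = np.random.rand(1, 3, 384, 768).astype(np.float32)  # placeholder camera frame
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: frame})
print([o.shape for o in outputs])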
Deep-Learning Perception Software
SVNet's deep-learning-based perception software uses data transferred from the camera to recognize and classify various objects. StradVision holds more than 400 patents related to deep neural networks, has obtained ASPICE certification, and has already applied SVNet in various commercial projects to demonstrate its performance.
SVNet's accuracy comes from incorporating a variety of state-of-the-art techniques into its learning algorithms, rather than relying on simple training: meta-learning-based data sampling, feature-enhancing learning, hard-example mining, and knowledge distillation. These allow SVNet to find as many target candidates as possible and then refine them to deliver higher performance.
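As a concrete illustration of one of those named techniques, the PyTorch sketch below shows a standard knowledge-distillation loss, where a compact student network learns from a larger teacher's temperature-softened outputs. This is a generic textbook formulation, not StradVision's implementation, and all values are toy placeholders.

```python
# Generic knowledge-distillation loss (not StradVision's implementation):
# the student is trained on a blend of the ground-truth loss and the KL
# divergence to the teacher's temperature-softened predictions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature: float = 4.0, alpha: float = 0.5):
    # Hard loss against ground-truth labels.
    hard = F.cross_entropy(student_logits, targets)
    # Soft loss: match the teacher's softened class distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * hard + (1.0 - alpha) * soft

# Toy usage with random logits for a 10-class head.
student = torch.randn(8, 10, requires_grad=True)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
print(float(loss))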
Four Types of Software
SVNet vision perception software comes in four different product lines:
CompliKit is a machine-learning pipeline that handles overall performance improvements from data sampling, annotation, model training, and platform optimization to final evaluation.
ProDriver provides perception that fully supports basic ADAS functions (L1-L2) meeting Euro GSR/NCAP requirements at minimum, as well as autonomous-driving capabilities at the L3 safety level and above. It detects objects on the driving path, such as vehicles, pedestrians, cyclists, traffic signs, and traffic lights, and autonomously recognizes traffic directions using data flowing from single- or multi-camera systems. Combining front, rear, and side camera data transmitted through the multi-camera system provides 360-degree visibility around the vehicle and eliminates blind spots so that safety margins can be maintained under any circumstances.
ParkAgent is a scalable parking solution ranging from entry-level parking assist to automated valet parking with a surround-view monitoring system. Four fisheye cameras cover a wide area around the vehicle to detect parking spaces with no blind spots, and visual SLAM (simultaneous localization and mapping) delivers automated valet parking (a generic fisheye-processing sketch follows this list).
ImmersiView provides precise positioning information to support more realistic and immersive navigation and hazard warning features.
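ParkAgent's pipeline isn't public, but surround-view parking systems built on four fisheye cameras typically begin by undistorting each camera's frame before detection and visual SLAM. The OpenCV sketch below shows only that generic first step, with made-up calibration values standing in for real per-camera parameters.

```python
# Generic first step for four-camera fisheye surround view (not ParkAgent's
# actual pipeline): undistort a fisheye frame with OpenCV before running
# detection or visual SLAM. K and D below are made-up calibration values.
import cv2
import numpy as np

K = np.array([[320.0,   0.0, 640.0],
              [  0.0, 320.0, 360.0],
              [  0.0,   0.0,   1.0]])            # assumed camera intrinsics
D = np.array([[-0.05], [0.01], [0.0], [0.0]])    # assumed fisheye distortion (k1..k4)

frame = np.zeros((720, 1280, 3), dtype=np.uint8) # placeholder camera frame

# Precompute the remap tables once per camera, then reuse them for every frame.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (1280, 720), cv2.CV_16SC2)
undistorted = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
print(undistorted.shape)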
As of June 2022, 559,967 vehicles were running SVNet, and more than 50 vehicle models across 13 OEMs are in development using SVNet software. StradVision's sales goal is to have SVNet in 50% of the automobiles produced annually within 10 years.