This year is giving every indication of becoming a watershed period in the development of AI-based vision processing. And, if things happen as expected, the results could be as big as, or bigger than, consumer PCs were in the 1970s, the web was in the 1990s, and smartphones have been in this century. The artificial-intelligence (AI) vision market is expected to be valued at $17.2 billion in 2023, growing at a CAGR of 21.5% from 2023 to 2028 (Source: MarketsandMarkets).
The question is not whether it will happen, but how we want to do it. How do we want to develop vision-based AI for collision avoidance, hazard detection, route planning, and warehouse and factory efficiency, to name just a few use cases?
We know a surveillance camera can be smarter with edge AI functionality. And when we say smarter, we mean the ability to identify objects and respond accordingly in real time.
Traditional vision analytics uses predefined rules to solve tasks such as object detection, facial recognition, or red-eye detection. Deep learning instead employs neural networks to process images. A neural network has a set of parameters that are trained on input images so that the network "learns" the rules, which it then applies to perform tasks like object detection or facial recognition on future images.
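To make the contrast concrete, here is a minimal Python sketch using OpenCV. The Haar cascade represents the rule-based approach (hand-crafted features and fixed thresholds shipped with OpenCV), while the DNN path loads learned parameters. The file names frame.jpg and detector.onnx are hypothetical stand-ins for a camera frame and any trained network.

```python
import cv2

# Rule-based approach: a Haar cascade applies hand-crafted features and
# fixed thresholds, the "predefined rules" of traditional vision analytics.
rules = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("frame.jpg")                  # stand-in for a camera frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = rules.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Learned approach: a neural network whose parameters were trained on
# labeled images, so the "rules" live in the weights, not in the code.
net = cv2.dnn.readNetFromONNX("detector.onnx")  # hypothetical trained model
blob = cv2.dnn.blobFromImage(img, scalefactor=1/255.0, size=(640, 640))
net.setInput(blob)
detections = net.forward()
```

Swapping the cascade for a different task means writing new rules; swapping the ONNX model for a different task means only retraining or replacing the weights.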
AI at the edge means that AI algorithms run on local devices instead of in the cloud, with deep neural networks (DNNs) as the main algorithmic component. Edge AI applications require high-speed, low-power processing, along with advanced integration unique to the application and its tasks.
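As a rough illustration of what running a DNN "at the edge" looks like in code, the sketch below uses the slim TensorFlow Lite runtime, which is designed for on-device inference. The model file detector.tflite is a hypothetical quantized network, and the zero array stands in for a captured camera frame.

```python
import numpy as np
# tflite_runtime is a lightweight interpreter intended for edge devices;
# the full tensorflow package provides the same Interpreter class.
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="detector.tflite")  # hypothetical model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()                                # runs entirely on-device
detections = interpreter.get_tensor(out["index"])
```

Nothing leaves the device: the frame is captured, the network is evaluated, and the detections are produced locally, which is what makes real-time response possible without cloud round-trips.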
TI’s vision processors make it possible to execute facial recognition, object detection, pose estimation, and other AI features in real time using the same software. With scalable performance for up to 12 cameras, you can build smart security cameras, autonomous mobile robots, and everything in between (Fig. 1).
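The multi-camera scaling is the part a dedicated vision processor accelerates in hardware, but the software pattern is simple to sketch. The generic Python outline below is not TI's SDK; it runs one capture thread per camera and funnels every frame to a shared detector, where run_detector is a hypothetical stand-in for whatever inference call the platform provides.

```python
import threading
import cv2

def run_detector(frame):
    # Hypothetical placeholder for any DNN inference call,
    # e.g., the TensorFlow Lite invocation sketched earlier.
    pass

def camera_worker(index):
    cap = cv2.VideoCapture(index)  # /dev/video<index> on Linux
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        run_detector(frame)        # all streams share one detector
    cap.release()

# One thread per attached camera; scale the count to the hardware.
threads = [threading.Thread(target=camera_worker, args=(i,), daemon=True)
           for i in range(12)]
for t in threads:
    t.start()
```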