What you’ll learn:
- The value of edge AI within various industries.
- How edge AI utilizes machine learning.
- Which hardware works best with edge AI workloads.
From smart-home assistants (think Alexa, Google Assistant, and Siri) to advanced driver-assistance systems (ADAS) that notify drivers when they’re drifting out of their lane, the world relies on edge AI for the real-time processing behind these increasingly common and important devices (Fig. 1).
Edge AI runs artificial intelligence directly on a device, computing near the data source rather than in an off-site data center via cloud computing. Edge AI offers reduced latency, faster processing, less need for constant internet connectivity, and fewer privacy concerns. Still, there can be challenges.
This technology represents a significant shift in how data is processed. And as the demand for real-time intelligence grows, edge AI is well-positioned to continue its strong impact on engineers.
The most significant value of edge AI is the speed it provides for critical applications. Unlike cloud or data-center AI, edge AI doesn’t send data over network links and hope for a reasonable response time. Instead, it computes locally, often on a real-time operating system that excels at delivering timely responses. For applications like ADAS, engineers develop algorithms that let vehicles process data from onboard cameras and sensors, enabling real-time decision-making for navigation, obstacle detection, and safety features without relying on cloud processing.
Edge AI for Real-Time Processing
Many real-time activities are driving the need for edge AI. Applications such as smart-home assistants, patient monitoring, and predictive maintenance are notable uses of the technology that impact engineers. From quick responses to household questions to wearable health devices analyzing biometric data locally, edge AI offers swift responses while minimizing privacy concerns.
We’ve seen edge AI do well in the supply chain, particularly in warehousing and factories, for quite some time. There’s also been substantial growth within the transportation industry over the last decade, such as delivery drones navigating challenging conditions like cloud cover.
Edge AI is also doing great things for engineers, especially in the med-tech sector, a critical area of advancement. For example, engineers developing pacemakers and other cardiac devices can give physicians tools that watch for abnormal heart rhythms, while proactively programming devices to offer guidance on when to seek further medical intervention. Med-tech will continue to increase its use of edge AI and build out further capabilities (Fig. 2).
How to Generate Edge AI Models
As more systems in everyday life integrate some level of machine-learning (ML) interaction, understanding this world becomes vital for engineers and developers to plan the future of user interactions.
The strongest opportunity with edge AI is ML, which matches patterns using a statistical algorithm. The patterns could be sensing that a human is present, that someone just spoke a “wake word” (e.g., “Alexa” or “Hey Siri”) for a smart-home assistant, or that a motor is starting to wobble. For a smart-home assistant, wake-word detection is a model that runs at the edge, so the device needn’t send your voice to the cloud. The wake word wakes the device and lets it know it’s time to listen for further commands.
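To make the wake-word example concrete, here’s a minimal sketch of how such a model might run on-device with TensorFlow Lite. The model file name, input shape, single-score output, and detection threshold are illustrative assumptions; a production assistant would feed the loop a rolling microphone buffer.

```python
# Minimal sketch of an on-device wake-word check with TensorFlow Lite.
# "wake_word.tflite" and its single-score output are assumptions.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="wake_word.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

THRESHOLD = 0.9  # confidence required to "wake"; tuned per model

def is_wake_word(audio_features: np.ndarray) -> bool:
    """Run one local inference -- the audio never leaves the device."""
    interpreter.set_tensor(inp["index"], audio_features.astype(np.float32))
    interpreter.invoke()
    return float(interpreter.get_tensor(out["index"])[0][0]) > THRESHOLD

# Placeholder input; a real device feeds rolling microphone windows.
if is_wake_word(np.zeros(inp["shape"], dtype=np.float32)):
    print("Wake word detected -- listening for the command")
```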
Generating an ML model can take several pathways, such as working directly in an ML framework (like TensorFlow or PyTorch) or using a software-as-a-service (SaaS) platform (like Edge Impulse). Most of the “work” in building a good ML model goes into creating a representative dataset and labeling it well.
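As a hedged illustration of that workflow, the short TensorFlow/Keras sketch below trains a tiny classifier on labeled data. The synthetic dataset stands in for the representative, well-labeled samples that make up most of the real work.

```python
# Sketch: training a small supervised model in TensorFlow/Keras.
# The random data stands in for a carefully collected, well-labeled
# dataset -- the hard part of any real edge AI project.
import numpy as np
import tensorflow as tf

# 1,000 samples of 64 features, labeled 0 ("normal") or 1 ("wobble").
X = np.random.rand(1000, 64).astype(np.float32)
y = np.random.randint(0, 2, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, validation_split=0.2)

model.save("edge_model.keras")  # starting point for edge deployment
```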
Currently, the most popular approach for edge AI is supervised learning. It’s a type of training based on labeled sample data, where the output is a known value that can be checked for correctness, like having a tutor check and correct work along the way. This training is typically used in applications such as classification or regression. Supervised training can be useful and highly accurate, but it depends greatly on the labeled dataset and may be unable to handle inputs that differ from what it was trained on.
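The “tutor” aspect is easy to see in code: because the labels are known, predictions on held-out samples can be checked directly. A minimal scikit-learn sketch, using generated data purely for illustration:

```python
# Sketch: supervised learning's "tutor" -- predictions are checked
# against known labels that were held out from training.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)

# Because the test labels are known, correctness is directly measurable.
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```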
Hardware Needed to Run Edge AI Workloads
Companies such as DigiKey are well-positioned to assist in edge AI implementations, which generally run on microcontrollers, FPGAs, and single-board computers (SBCs). DigiKey partners with top suppliers to provide several generations of hardware that run ML models at the edge; for example, NXP’s MCX N series and STMicroelectronics’ STM32MP25 series.
In past years, dev boards from the maker community have been popular for running edge AI, including SparkFun’s Edge Development Board Apollo3 Blue, Adafruit’s EdgeBadge, Arduino’s Nano 33 BLE Sense Rev 2, and the Raspberry Pi 4 and 5.
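Getting a trained model onto microcontroller-class hardware typically involves converting and quantizing it. Below is a sketch using TensorFlow Lite’s converter with full-integer quantization; the file names carry over from the training sketch above and are assumptions, and each board’s toolchain adds its own final deployment step.

```python
# Sketch: converting a trained Keras model to a quantized TensorFlow
# Lite flatbuffer suitable for microcontroller-class hardware.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("edge_model.keras")  # from training

def representative_data():
    # A small sample of realistic inputs guides the int8 ranges.
    for _ in range(100):
        yield [np.random.rand(1, 64).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("edge_model.tflite", "wb") as f:
    f.write(converter.convert())
# The .tflite file is then embedded in firmware (e.g., as a C array).
```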
Neural processing units (NPUs) are gaining ground in edge AI. NPUs are specialized ICs designed to accelerate ML and AI workloads built on neural networks: structures inspired by the human brain, consisting of many interconnected layers of nodes, called neurons, that process and pass along information. A new generation of NPUs with dedicated math processing is emerging, including NXP’s MCX N series and ADI’s MAX78000.
AI accelerators for edge devices are on the rise, too. The space has yet to be fully defined, though, with early offerings of note including Google’s Coral line and Hailo’s accelerators.
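With an accelerator like Coral, inference is offloaded through a delegate. The snippet below follows Coral’s documented TensorFlow Lite pattern; the model file name is an assumption, and the model must be compiled for the Edge TPU.

```python
# Sketch: offloading TensorFlow Lite inference to a Coral Edge TPU
# via its delegate library (model must be Edge TPU-compiled).
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
# From here, set_tensor()/invoke() work as on any TFLite interpreter,
# but the heavy math runs on the accelerator instead of the CPU.
```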
The Importance of Machine-Learning Sensors
High-speed cameras with ML models have operated in supply chains for quite some time, making decisions on where to route products within a warehouse or spotting defective products on a production line. Suppliers are now creating low-cost AI vision modules that can run ML models to recognize objects or people.
Although running an ML model requires an embedded system, more products will continue to be released as AI-enabled electronic components. These include AI-enabled sensors, also known as ML sensors. While adding an ML model to most sensors won’t make them more efficient at their application, ML training can enable a few types of sensors to perform in significantly more efficient ways:
- Camera sensors where ML models can be developed to track objects and people in the frame.
- IMU, accelerometer, and motion sensors to detect activity profiles (see the sketch after this list).
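As a rough sketch of the second case, the code below condenses a window of accelerometer samples into a few features and hands them to a previously trained classifier. The window size, feature set, and activity_model are assumptions for illustration only.

```python
# Sketch: turning a window of raw IMU samples into an activity label.
# The window length, features, and model are illustrative assumptions.
import numpy as np

WINDOW = 128  # accelerometer samples per classification window

def extract_features(accel_xyz: np.ndarray) -> np.ndarray:
    """Summarize a (WINDOW, 3) block of x/y/z readings."""
    magnitude = np.linalg.norm(accel_xyz, axis=1)
    return np.array([
        magnitude.mean(),  # overall activity level
        magnitude.std(),   # how erratic the motion is
        magnitude.max(),   # impact spikes (steps, falls)
    ])

# 'activity_model' stands in for any trained classifier (e.g., the
# supervised models discussed earlier) mapping features -> profile.
def classify_window(accel_xyz: np.ndarray, activity_model):
    features = extract_features(accel_xyz).reshape(1, -1)
    return activity_model.predict(features)[0]  # e.g., "walking"
```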
A number of AI sensors come preloaded with an ML model that’s ready to run. For example, SparkFun’s people-sensing eval board is preprogrammed to detect faces and return information over the Qwiic I2C interface. Some AI sensors, like Arduino’s Nicla Vision or the OpenMV Cam H7, are more open-ended and need a trained ML model to define what they’re looking for (defects, objects, etc.).
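Host-side code for such a sensor often amounts to polling a few registers. The sketch below uses the smbus2 library with a hypothetical I2C address and register layout (a real part’s datasheet defines the actual map), simply to show how little code an ML sensor demands of its host.

```python
# Sketch: polling an ML person-detection sensor over I2C.
# The address (0x62) and register layout are hypothetical --
# consult the actual sensor's datasheet for its real map.
import time
from smbus2 import SMBus

SENSOR_ADDR = 0x62   # hypothetical I2C address
REG_RESULTS = 0x00   # hypothetical results register

with SMBus(1) as bus:  # I2C bus 1, e.g., on a Raspberry Pi
    while True:
        # Hypothetical layout: byte 0 = face count, byte 1 = confidence.
        count, confidence = bus.read_i2c_block_data(SENSOR_ADDR,
                                                    REG_RESULTS, 2)
        if count:
            print(f"{count} face(s) detected, confidence {confidence}/255")
        time.sleep(0.2)
```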
Neural nets provide the computational algorithms that make it possible to detect and track objects and people as they move into the camera sensor’s field of view.
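The same detect-and-track idea can be sketched on a host with OpenCV’s DNN module. The model files named below are assumptions; any SSD-style detector exported to a format OpenCV supports would work.

```python
# Sketch: detecting people/objects entering a camera's field of view
# with a neural net via OpenCV's DNN module. The model/config file
# names are assumptions; any SSD-style detector would work.
import cv2

net = cv2.dnn.readNetFromCaffe("detector.prototxt", "detector.caffemodel")
cap = cv2.VideoCapture(0)  # first attached camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(frame, 1.0, (300, 300), (104, 117, 123))
    net.setInput(blob)
    detections = net.forward()  # SSD output shape: (1, 1, N, 7)
    for det in detections[0, 0]:
        confidence = det[2]
        if confidence > 0.5:
            print(f"object in frame, confidence {confidence:.2f}")
```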
The Future of Edge AI
As industries evolve and rely more on technology for data processing, edge AI will continue to see more widespread adoption. By enabling faster, more secure data processing at the device level, edge AI will drive profound innovation. A few areas expected to expand in the near future include:
- Dedicated processor logic for computing neural-network arithmetic.
- Advances in low-power alternatives to the significant energy consumption of cloud computing.
- More integrated module options, such as AI vision parts that include built-in sensors and embedded processing hardware.
As ML training methods, hardware, and software evolve, edge AI is well-positioned to grow exponentially and support many industries.