This article is part of TechXchange: AI on the Edge
NXP delivers a wide range of processing solutions, from the compact Kinetis and LPC microcontrollers to high-performance SoCs like the i.MX and Layerscape application processors. What many developers may not know is that machine-learning (ML) applications can run on all of them. Of course, developers will need the associated software and development tools to make them work. This is where NXP’s new eIQ framework and development tools come into play (see figure).
“Having long recognized that processing at the edge node is really the driver for customer adoption of machine learning,” says Geoff Lees, senior vice president and GM of Microcontrollers, “we created scalable ML solutions and eIQ tools, to make transferring artificial-intelligence capabilities from the Cloud to the Edge even more accessible and easy to use.”
NXP’s eIQ framework and development tools bring machine-learning applications to its family of microcontrollers and application processors.
NXP’s eIQ is designed to bring ML to every NXP developer, including those working with stock hardware that has no ML-specific acceleration. The software takes advantage of existing on-chip hardware that can speed up ML applications but that also handles other chores such as graphics processing or real-time system control. This means tapping Arm NEON SIMD units, GPUs, and DSPs in addition to the CPU. Of course, your mileage may vary because ML tends to be compute-heavy. Still, even a microcontroller can implement ML applications that have been suitably scaled to match the system’s resources (see the sketch below).
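On a Cortex-M-class microcontroller, that scaled-down path usually means a quantized model run through TensorFlow Lite for Microcontrollers. The sketch below shows the general call pattern; the model_data array, arena size, and operator list are placeholder assumptions, and the exact header names and MicroInterpreter constructor shift between TFLM releases.

```cpp
// Minimal sketch: running a pre-converted model on a Cortex-M part with
// TensorFlow Lite for Microcontrollers. model_data, the arena size, and
// the op list are placeholders; adjust for your model and TFLM version.
#include <cstddef>
#include <cstdint>
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char model_data[];   // flatbuffer baked in at build time

constexpr size_t kArenaSize = 16 * 1024;   // scaled to the MCU's SRAM budget
static uint8_t tensor_arena[kArenaSize];

float RunInference(float input_value) {
  const tflite::Model* model = tflite::GetModel(model_data);

  // Register only the operators this model actually uses to save flash.
  static tflite::MicroMutableOpResolver<2> resolver;
  resolver.AddFullyConnected();
  resolver.AddRelu();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kArenaSize);
  interpreter.AllocateTensors();          // carve tensors out of the arena

  interpreter.input(0)->data.f[0] = input_value;  // assumes one float input
  interpreter.Invoke();
  return interpreter.output(0)->data.f[0];
}
```

Keeping the op resolver limited to the handful of operators the model needs, rather than pulling in every kernel, is a large part of what makes this fit a small flash budget.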
The company is looking to black-box many of the applications and services that use ML techniques, such as vision-, voice-, and sensor-processing applications where deep neural networks (DNNs) and convolutional neural networks (CNNs) handle inference for tasks like facial recognition, speech recognition, and anomaly detection. The eIQ framework is designed to work with hardware abstraction layers like OpenCL, OpenVX, and the Arm Compute Library, as well as inference engines like Arm NN (neural network), Android NN, Glow, and OpenCV.
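As a taste of what sits behind that abstraction, the snippet below runs an image classifier through OpenCV’s dnn module, one of the engines in the eIQ mix. The model file, input resolution, and image name are placeholders for whatever network a project actually ships.

```cpp
// Minimal sketch: image classification via OpenCV's dnn module.
// "face_net.onnx", "frame.jpg", and the 224x224 input size are placeholders.
#include <opencv2/core.hpp>
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>

int main() {
  // Load a trained network; readNet parses ONNX, Caffe, TensorFlow, etc.
  cv::dnn::Net net = cv::dnn::readNet("face_net.onnx");

  // Preprocess: scale pixels to [0,1], resize, and pack into an NCHW blob.
  cv::Mat img = cv::imread("frame.jpg");
  cv::Mat blob = cv::dnn::blobFromImage(img, 1.0 / 255.0, cv::Size(224, 224),
                                        cv::Scalar(), /*swapRB=*/true);

  net.setInput(blob);
  cv::Mat scores = net.forward();  // one row of class scores

  // Pick the top-scoring class.
  cv::Point class_id;
  double confidence;
  cv::minMaxLoc(scores.reshape(1, 1), nullptr, &confidence, nullptr, &class_id);
  return 0;
}
```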
The system will handle model conversion from frameworks such as TensorFlow Lite, Caffe2, and PyTorch. It will also address classical ML algorithms, including support vector machines (SVMs) and random forests.
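This article doesn’t detail eIQ’s classical-ML plumbing, but OpenCV’s ml module, already part of the mix above, shows the general shape: train on labeled feature vectors, then classify new ones. The toy samples and labels below are made up for illustration; a random forest (cv::ml::RTrees) slots into the same create/train/predict pattern.

```cpp
// Minimal sketch: training and querying an SVM with OpenCV's ml module.
// The 2-D feature vectors and labels are invented toy data.
#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>

int main() {
  // Four feature vectors and their class labels (e.g., normal vs. anomaly).
  float samples_raw[4][2] = {{0.1f, 0.2f}, {0.2f, 0.1f},
                             {0.9f, 0.8f}, {0.8f, 0.9f}};
  int labels_raw[4] = {0, 0, 1, 1};
  cv::Mat samples(4, 2, CV_32F, samples_raw);
  cv::Mat labels(4, 1, CV_32S, labels_raw);

  cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
  svm->setType(cv::ml::SVM::C_SVC);
  svm->setKernel(cv::ml::SVM::LINEAR);
  svm->train(samples, cv::ml::ROW_SAMPLE, labels);

  // Classify a new point; the predicted label comes back as a float.
  cv::Mat query = (cv::Mat_<float>(1, 2) << 0.15f, 0.15f);
  float cls = svm->predict(query);
  return static_cast<int>(cls);
}
```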
NXP is moving toward more ML-specific hardware while trying to support the wide variety of existing and new ML models. For example, its latest LPC5500 Cortex-M33 systems incorporate a MAC co-processor. The co-processor can accelerate ML and DSP functions, including convolution, correlation, matrix operations, transform functions, and filtering. It delivers 10 times the performance of the Cortex-M33 core for these types of operations.
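NXP’s SDK supplies its own drivers for that co-processor; as a software reference point, the same filtering chore looks like this with Arm’s CMSIS-DSP library, whose FIR kernels are exactly the kind of MAC-bound loops such hardware accelerates. The tap count, coefficients, and block size below are illustrative.

```cpp
// Minimal sketch: block FIR filtering with Arm's CMSIS-DSP library -- the
// kind of multiply-accumulate kernel the LPC5500's co-processor speeds up.
// Coefficients, tap count, and block size are illustrative placeholders.
#include "arm_math.h"

#define NUM_TAPS   8
#define BLOCK_SIZE 32

static const float32_t fir_coeffs[NUM_TAPS] = {
    0.02f, 0.08f, 0.18f, 0.22f, 0.22f, 0.18f, 0.08f, 0.02f};
static float32_t fir_state[NUM_TAPS + BLOCK_SIZE - 1];

// Initialize once; the state buffer carries samples across blocks.
void filter_init(arm_fir_instance_f32 *fir) {
  arm_fir_init_f32(fir, NUM_TAPS, fir_coeffs, fir_state, BLOCK_SIZE);
}

// Filter one block of samples from in[] to out[].
void filter_block(arm_fir_instance_f32 *fir,
                  const float32_t *in, float32_t *out) {
  arm_fir_f32(fir, in, out, BLOCK_SIZE);
}
```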
Read more articles on this topic at the TechXchange: AI on the Edge