STMicroelectronics and AWS (Amazon Web Services) have created the AWS STM32 ML at the Edge Accelerator. The application demonstrator leverages the B-U585I-IOT02A Discovery Kit, the STM32Cube.AI Developer Cloud, and AWS cloud services to run an audio classification model on an STM32U5 microcontroller. The demo also shows how ST technologies like the Model Zoo and the Board Farm can expand AI at the edge.
It starts with YAMNet-256, an audio event detection model from the ST Model Zoo, running on the B-U585I-IOT02A Discovery Kit together with X-CUBE-AWS, an extension pack that integrates FreeRTOS with AWS IoT Core for seamless cloud connectivity. The architecture supports the entire MLOps process: the machine-learning stack handles data processing, model training, and evaluation, while the IoT stack handles automatic device flashing through over-the-air (OTA) updates so that every device runs the latest secure firmware.
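As a rough illustration of the ML stack's evaluation step, the sketch below runs a quantized TFLite export of the audio event detection model on a single input patch. The model file name and the zero-filled input are assumptions for illustration, not the Model Zoo's exact artifacts.

```python
# Minimal sketch of an evaluation step: load a quantized YAMNet-256
# TFLite export and run inference on one spectrogram patch.
# The file name and placeholder input are illustrative assumptions.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yamnet_256_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# One input patch shaped and typed like the model's input tensor
# (zero-filled here purely as a placeholder for real audio features).
patch = np.zeros(inp["shape"], dtype=inp["dtype"])

interpreter.set_tensor(inp["index"], patch)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]

top = int(np.argmax(scores))
print(f"Top class index: {top}, score: {scores[top]}")
```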
The pipeline stack coordinates the CI/CD (continuous integration/continuous delivery) workflow, keeping models and firmware up to date and the code optimized. Developers can automate the deployment of the ML and IoT stacks to cover the entire development lifecycle, as sketched below.
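As one hedged example of what an automated deployment step might look like, the snippet below uses boto3 to register an already-signed firmware image from S3 as a FreeRTOS OTA update in AWS IoT. Every name here (bucket, key, thing group, role, region) is a placeholder; the actual accelerator pipeline defines its own resources and signing flow.

```python
# Hypothetical pipeline-stack step: once CI produces a new signed firmware
# image in S3, roll it out to the device fleet as a FreeRTOS OTA update
# through AWS IoT. All resource names below are placeholders.
import boto3

iot = boto3.client("iot", region_name="us-east-1")

iot.create_ota_update(
    otaUpdateId="yamnet-demo-fw-1-2-0",
    targets=[
        "arn:aws:iot:us-east-1:123456789012:thinggroup/stm32u5-demo-fleet"
    ],
    targetSelection="SNAPSHOT",
    protocols=["MQTT"],
    files=[{
        "fileName": "b_u585i_iot02a_fw_signed.bin",
        "fileLocation": {
            "s3Location": {
                "bucket": "demo-ota-artifacts",
                "key": "fw/1.2.0/image_signed.bin",
            }
        },
    }],
    roleArn="arn:aws:iam::123456789012:role/ota-update-role",
)
```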
To monitor the devices and visualize their data, the demo uses Amazon Managed Grafana to build dynamic, interactive dashboards for real-time monitoring and analysis. It features the YAMNet audio classification model, optimized for STM32 MCUs, running an audio event detection program that can distinguish a wide range of sounds, from dogs barking and birdsong to people coughing or sneezing.
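For context on the telemetry such dashboards consume, here is a hedged, host-side approximation (using the AWS IoT Device SDK for Python) of the kind of MQTT message a device could publish to AWS IoT Core for each detection. In the actual demo this happens from FreeRTOS on the STM32U5; the endpoint, topic, certificate paths, and payload fields below are placeholders.

```python
# Sketch of one detection result published to AWS IoT Core over MQTT with
# mutual TLS. Endpoint, credentials, topic, and payload are assumptions.
import json
import time

from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

client = AWSIoTMQTTClient("stm32u5-demo-node-01")
client.configureEndpoint("xxxxxxxx-ats.iot.us-east-1.amazonaws.com", 8883)
client.configureCredentials(
    "AmazonRootCA1.pem",   # root CA
    "private.pem.key",     # device private key
    "device.pem.crt",      # device certificate
)
client.connect()

detection = {
    "timestamp": int(time.time()),
    "class": "dog_bark",   # one of the audio event classes
    "confidence": 0.91,
}
client.publish("demo/audio_events", json.dumps(detection), 1)
client.disconnect()
```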