
Purdue Researchers get Amazon Funding for IoT Streaming Video Analytics

June 10, 2021
Saurabh Bagchi and his team have developed a method that allows small devices in the Internet of Things to perform analytics on streaming video.

As the Cloud and its connected devices become essential to running our daily lives, one increasingly important demand is for advanced imaging. Devices such as intelligent cameras, smartphones, augmented and virtual reality headsets, and robots of all kinds will need next-generation machine vision with features like image classification, object detection, and pattern and activity recognition.

Recently, Purdue University researchers received an Amazon Research Award for a method that performs analytics on streaming video on small IoT devices, handling computationally expensive tasks accurately and in real time. The team, led by Saurabh Bagchi, professor of electrical and computer engineering and (by courtesy) computer science at Purdue University, worked with researchers from the University of Wisconsin-Madison.

“It had been believed to be impossible to create machine learning models for these demanding tasks that can provide some guarantees, even in the face of uncertainties,” said Bagchi, who also directs the Center for Resilient Infrastructures, Systems and Processes (CRISP). “Hopefully, this provides the community with a path forward, including adoption in many real-world settings where this is a current technology bottleneck.”

Bagchi’s team developed an approach that provides probabilistic guarantees on accuracy when the characteristics of the content change, such as when one part of a video includes a complex scene with many objects and high motion, while another part of the video has a relatively static scene with few objects. The team includes Somali Chaterji, assistant professor of agricultural and biological engineering at Purdue, and Yin Li, assistant professor of biostatistics and computer sciences at the University of Wisconsin-Madison.
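The general idea of content-adaptive inference can be illustrated with a minimal Python sketch. Everything here is hypothetical — the model variants, accuracy numbers, and complexity heuristic are invented for illustration and are not the team's actual implementation: a scheduler estimates per-frame scene complexity and picks the cheapest model variant expected to meet an accuracy target within a latency budget.

```python
# Hypothetical model variants: (name, cost in ms, expected accuracy on
# complex scenes, expected accuracy on simple scenes). Numbers are invented.
VARIANTS = [
    ("tiny",   8.0, 0.62, 0.90),
    ("small", 16.0, 0.75, 0.93),
    ("full",  30.0, 0.88, 0.95),
]

def scene_complexity(frame):
    """Stubbed proxy for content complexity: motion level plus object count."""
    return frame["motion"] * 0.5 + min(frame["objects"], 20) / 20 * 0.5

def pick_variant(frame, accuracy_target=0.85, budget_ms=33.0):
    """Pick the cheapest variant expected to hit the target within budget."""
    c = scene_complexity(frame)
    for name, cost_ms, acc_complex, acc_simple in VARIANTS:
        # Interpolate expected accuracy between the simple and complex cases.
        expected_acc = c * acc_complex + (1 - c) * acc_simple
        if expected_acc >= accuracy_target and cost_ms <= budget_ms:
            return name
    return VARIANTS[-1][0]  # fall back to the full model

# A busy scene forces the heavy variant; a static scene gets the cheap one.
busy = {"motion": 0.9, "objects": 15}
static = {"motion": 0.1, "objects": 2}
print(pick_variant(busy), pick_variant(static))
```

The point of the sketch is the trade-off the article describes: compute is spent where the content demands it, rather than running the heaviest model on every frame.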

According to Bratin Saha, vice president of machine learning at Amazon Web Services (AWS), “The line of work by Saurabh and his collaborators is of interest as it pushes the limit for what can be done with lightweight machine learning techniques. We are providing both intellectual input and compute resources to help move this work forward quickly, and we’re excited to see how this innovative research can be applied in IoT devices and edge computing.”

Li, co-investigator on the project, said, “In terms of its ability to provide accuracy guarantees, this work points to how computer vision models can be deployed for critical applications on mobile and wearable devices. Our technology might be used to help smartphones and augmented reality glasses understand streaming videos with high accuracy and great efficiency.”

“This work is applicable to a variety of domains that demand low inference latencies with formal accuracy guarantees, such as self-driving cars and drone-based relief and rescue operations,” Chaterji said. “It is also applicable to IoT for digital agriculture applications, where on-device computation requires approximating the neural network architectures. When we started the investigation, even the leading-edge devices and software stack could only support one or two frames per second. Now, we can support inference latencies down to 33 milliseconds per frame, which is desirable for real-time video processing.”
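To put those numbers in context, per-frame inference latency maps directly to sustainable frame rate. The conversion below is simple arithmetic, not anything specific to the team's system:

```python
def max_fps(latency_ms):
    """Maximum sustainable frame rate for a given per-frame inference latency."""
    return 1000.0 / latency_ms

# A 33 ms per-frame budget supports roughly 30 fps, a common real-time rate.
print(round(max_fps(33)))   # → 30
# 1-2 fps, as in the earlier stacks Chaterji describes, implies a per-frame
# latency on the order of 500-1000 ms.
print(round(max_fps(500)))  # → 2
```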

Tarek Abdelzaher, the Sohaib and Sara Abbasi Professor of Computer Science at the University of Illinois at Urbana-Champaign and a leading researcher in the field of cyberphysical systems, who is not affiliated with this project, said, “The work is a beautiful example of resource savings by understanding what’s more important and urgent in the scene. Humans are very good at focusing visual attention on where the action is instead of spending it equally on all elements of a complex scene. They evolved to optimize capacity by instinctively spending cognitive resources where they matter most. This paper is a step toward endowing machines with the same instinct: the innate ability to give different elements of a scene different levels of computational attention, thus significantly improving the trade-off between urgency, quality, and resource consumption.”

This research has been supported by the National Science Foundation and the Army Research Lab.

About the Author

Alix Paultre | Editor-at-Large, Electronic Design

An Army veteran, Alix Paultre was a signals intelligence soldier on the East/West German border in the early ‘80s, and eventually wound up helping launch and run a publication on consumer electronics for the US military stationed in Europe. Alix first began in this industry in 1998 at Electronic Products magazine, and since then has worked for a variety of publications in the embedded electronic engineering space. Alix currently lives in Wiesbaden, Germany.

Also check out his YouTube watch-collecting channel, Talking Timepieces.
