Apply Deep Learning to Building-Automation IoT Sensors

July 27, 2016
Real-time systems like smart sensors in commercial buildings are taking advantage of the richer computation level of deep-learning-based technology.

Jonathan Laserson, Senior Algorithm Researcher, PointGrab

In building automation, sensors such as motion detectors, photocells, and temperature, CO2, and smoke detectors are used primarily for energy savings and safety. Next-generation buildings, however, are intended to be significantly more intelligent, with the capability to analyze space utilization, monitor occupants’ comfort, and generate business intelligence.

To support such robust features, building-automation infrastructure requires considerably richer information that details what’s happening across the building space. Since current sensing solutions are limited in their ability to address this need, a new generation of smart sensors (see figure below) is required to enhance the accuracy, reliability, flexibility, and granularity of the data they provide.

Data Analytics at the Sensor Node
In the new era of the Internet of Things (IoT), there arises the opportunity to introduce a new approach to building automation that decentralizes the architecture and pushes the analytics processing to the edge (the sensor unit) instead of the cloud or a central server. Commonly referred to as edge computing, or fog computing, this approach provides real-time intelligence and enhanced control agility while simultaneously offloading the heavy communications traffic.

Continued innovation in computing technology has yielded cheap and energy-efficient embedded processors that can handle such data processing. In principle, this makes it possible to process the data at the sensor level and only send the final summary of the analysis over the network. This approach, if implemented, will yield a thinner volume of data and a shorter response time. The major question, however, is what kind of data-analysis approach is best suited for these embedded analytics sensors.
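As a rough illustration of this idea, the Python sketch below processes each reading on the device and transmits only a compact summary. The `capture_frame`, `detect_occupancy`, and `publish` functions are hypothetical placeholders for the sensor’s capture, analysis, and network layers, not any particular product’s API:

```python
import json
import time

def edge_loop(capture_frame, detect_occupancy, publish):
    """Process raw sensor data on the device; send only a summary.

    capture_frame, detect_occupancy, and publish are placeholders
    for the device's capture, analysis, and network layers.
    """
    while True:
        frame = capture_frame()          # raw data never leaves the sensor
        count = detect_occupancy(frame)  # heavy analysis runs at the edge
        summary = json.dumps({
            "timestamp": time.time(),
            "occupants": count,          # a few bytes instead of a full frame
        })
        publish(summary)                 # only the summary crosses the network
```

The payload here is a short JSON string per reading, which is what makes the "thinner volume of data" possible: the network sees conclusions, not raw streams.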

Rule-Based or Data-Driven?
The challenges associated with rich data analysis can be addressed in different ways. Conventional rule-based systems are supposedly easier to analyze. This advantage is negated as the system evolves, however: patches of rules are stacked upon each other to account for the proliferation of new rule exceptions, resulting in a hard-to-decipher tangle of coded rules.

Because the hard work of rule creation and modification falls to human programmers, rule-based systems suffer from compromised performance. They have been shown to be slow to adapt to new types of data, such as data from an upgraded sensor or from a new sensor providing previously unutilized information. Rule-based systems can also fail to adapt to a changing domain, e.g., a new furniture layout or new lighting sources.

PointGrab's CogniPoint sensor utilizes deep-learning-based technology to track movement of building occupants to provide energy savings in commercial buildings.

These deficiencies can be readily overcome with data-driven “machine-learning” systems, which have proven to be superior tools for rich data analysis, especially when cameras are employed at the sensing layer. Machine-learning systems transfer the labor of defining effective rules from the engineers to the algorithm. As a result, the engineers are only tasked with defining the features of the raw data that hold relevant information.

Once the features have been defined, the rules and/or formulas that use these features are learned automatically by the algorithm. For this to work, the algorithm must have access to a multitude of data samples labeled with the desired outcomes, so that it can properly adapt itself.

When the rules are implemented within the sensor, it runs a repeating two-stage process. In stage one, the human-defined features are extracted from the sensor data. In stage two, the learned rules are applied to perform the task at hand.
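A minimal sketch of that two-stage loop, using NumPy: stage one computes hand-defined motion features from consecutive frames, and stage two applies a learned rule. The frame-differencing features, and the logistic-regression weights (presumed fit offline on labeled examples), are illustrative assumptions, not the article’s actual system:

```python
import numpy as np

# Stage 1: human-defined features (here, simple motion statistics).
def extract_features(prev_frame, frame):
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return np.array([
        diff.mean(),          # overall motion energy
        (diff > 30).mean(),   # fraction of pixels that changed noticeably
    ])

# Stage 2: a learned rule, e.g., logistic regression whose weights were
# fit offline on labeled samples (the numbers below are made up).
WEIGHTS = np.array([0.8, 5.0])
BIAS = -2.0

def is_occupied(prev_frame, frame):
    features = extract_features(prev_frame, frame)   # stage one
    score = features @ WEIGHTS + BIAS                # stage two
    return 1.0 / (1.0 + np.exp(-score)) > 0.5        # sigmoid, then threshold
```

Note the division of labor: the engineer wrote `extract_features` by hand, while `WEIGHTS` and `BIAS` came from the learning algorithm.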

The Deep-Learning Approach
Within the machine-learning domain, “deep learning” is emerging as a superior new approach that even relieves engineers of the task of defining features. With deep learning, based on the numerous labeled samples, the algorithm determines for itself an end-to-end computation that extends from the raw sensor data all the way to the final output. The algorithm must discern the correct features and how best to compute them.

This ultimately fosters a deeper level of computation that’s much more effective than any rule or formula used by traditional machine learning. Typically, a neural network will perform this computation, leveraging a complex computational circuit with millions of parameters that the algorithm will tune until the right function is pinpointed.
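For concreteness, a toy end-to-end network of this kind might look as follows in PyTorch. This is an illustrative sketch, not PointGrab’s actual model: raw pixels go in one end, occupancy scores come out the other, and every parameter in between is tuned by the learning algorithm rather than designed by hand.

```python
import torch
import torch.nn as nn

# An illustrative end-to-end network: raw 64x64 grayscale frames in,
# empty/occupied scores out. No hand-defined features anywhere.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 2),   # two outputs: empty vs. occupied
)

# Every one of these parameters is tuned automatically from labeled samples.
print(sum(p.numel() for p in model.parameters()))   # roughly 530,000 parameters

scores = model(torch.randn(1, 1, 64, 64))           # raw frame -> final output
```

Even this toy version has over half a million tunable parameters; production networks commonly reach into the millions mentioned above.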

The implications of deep learning on system engineering are profound, and the contrast with rule-based systems is significant. In the rule-based system world, and even with traditional machine learning, the system engineer requires extensive information about the domain in order to build a good system. In the deep-learning world, this is no longer necessary.

With the arrival of the IoT and the proliferation of data across the network, deep learning allows for faster iteration on new data sources, exploiting them without requiring intimate domain knowledge. When applying a deep-learning approach, the engineer’s main focus is to define the neural network’s core architecture. The network must be large enough to have the capacity to optimize to a useful computation, but simple enough so that available processing resources aren’t outstripped.

A neural network can be tailored to fill any given time budget, ensuring maximum exploitation of the available processing power. If the computational budget rises and there’s more time to run the calculation, a larger network can be evaluated using the new budget.
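One simple way to act on this, sketched below under assumed names and sizes: build candidate networks of increasing width, time their forward passes on the target processor, and keep the largest one that still fits the per-frame time budget. The `build_model` helper and the width ladder are illustrative, not a prescribed procedure:

```python
import time
import torch
import torch.nn as nn

def build_model(width):
    """A candidate network; `width` scales its capacity."""
    return nn.Sequential(
        nn.Conv2d(1, width, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(width * 32 * 32, 2),
    )

def pick_largest_under_budget(budget_seconds, widths=(4, 8, 16, 32, 64)):
    frame = torch.randn(1, 1, 64, 64)
    best = None
    for width in widths:
        model = build_model(width).eval()
        with torch.no_grad():
            start = time.perf_counter()
            for _ in range(20):          # average over several runs
                model(frame)
            elapsed = (time.perf_counter() - start) / 20
        if elapsed <= budget_seconds:    # still within the per-frame budget
            best = model                 # keep the largest network that fits
    return best
```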

Once the architecture is defined, it stays fixed while the parameters of the neural network are tuned. This process can take days or even weeks, even on the highest-performance machines. The computation itself, however, extending from raw inputs to output, takes a fraction of a second, and it remains exactly the same throughout the process.
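That division is visible in a standard training loop: the architecture (the `model` object) never changes, and only its parameters are nudged, batch after batch. A minimal PyTorch sketch, assuming a network like the one above and an iterable of labeled (frames, labels) batches:

```python
import torch
import torch.nn as nn

def train(model, labeled_batches, epochs=10):
    """Tune the parameters of a fixed architecture on labeled samples."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for frames, labels in labeled_batches:   # raw inputs, desired outcomes
            optimizer.zero_grad()
            loss = loss_fn(model(frames), labels)   # one sub-second forward pass
            loss.backward()      # how should each parameter move?
            optimizer.step()     # nudge the parameters; architecture unchanged
    return model
```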

The scalability and flexibility of deep learning distinguish it as a powerful approach for a real-time system like a smart sensor in the continuously changing environment of commercial buildings. Another advantage of neural networks is that they’re extremely portable: they can be easily built and customized using available software libraries, and the same network can run on different types of devices.

Moreover, such portability allows for quick turnarounds between the learning sessions, which typically use powerful machines. On top of that, engineers can observe how the neural network behaves when it’s deployed on embedded processors.
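As a rough sketch of that workflow, reusing the illustrative `build_model` and trained `model` from the earlier examples: the tuned parameters are saved on the powerful training machine, reloaded into the identical architecture on the target device, or exported to a portable format such as ONNX for an embedded runtime.

```python
import torch

# On the training machine: save the tuned parameters.
torch.save(model.state_dict(), "occupancy_net.pt")

# On the target device: rebuild the same architecture and load them,
# so the identical computation runs on different hardware.
deployed = build_model(width=16)            # same architecture definition
deployed.load_state_dict(torch.load("occupancy_net.pt"))
deployed.eval()

# Alternatively, export to a portable format for embedded runtimes.
torch.onnx.export(deployed, torch.randn(1, 1, 64, 64), "occupancy_net.onnx")
```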

About the Author

Jonathan Laserson | Senior Algorithm Researcher

Dr. Jonathan Laserson is a senior algorithm researcher at PointGrab, and a machine learning expert and consultant. He holds a PhD from the Computer Science AI lab at Stanford University and was a lecturer at Bar-Ilan University in Israel. After academia, he worked at Google and used machine learning to enhance the security of Google user accounts. At PointGrab, his primary focus is the practical use of deep-learning algorithms in embedded environments. 
