Delivering Neuromorphic Computing to Embedded Systems
What you’ll learn:
- What are spiking neural networks (SNNs)?
- Why neuromorphic computing is important.
- How BrainChip’s Akida platform brings neuromorphic computing to embedded applications.
Artificial intelligence and machine learning (AI/ML) are everywhere. A host of different implementations, from convolutional neural networks (CNNs) to large-language-model (LLM) chatbots, have turned search engines into dubious deliverers of search information. Neuromorphic computing is one approach that's well-suited to embedded applications requiring low power.
I spoke with Steven Brightfield, Chief Marketing Officer at BrainChip, about neuromorphic computing and the company's Akida spiking neural network (SNN) platform. This includes chips like the Akida AKD1000 (Fig. 1) as well as IP that can be used to incorporate neuromorphic computing into custom microcontrollers and processors.
SNNs are the most common implementation approach for neuromorphic computing. Input is event-driven: a change in the input forwards that information as a spike to one or more neurons in the first layer, which in turn may fire and pass spikes to later layers. If the input comes from a camera, only the changes in individual pixels are fed to the model. Because computation occurs only when events arrive, neuromorphic computing tends to be significantly more power-efficient than AI models like convolutional neural networks (CNNs), which process every input in full.
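To make the event-driven idea concrete, here's a minimal Python sketch of a leaky integrate-and-fire neuron, the basic building block of most SNNs. The class name and the threshold and leak constants are illustrative, not specific to Akida's implementation.

```python
# Illustrative event-driven spiking neuron (leaky integrate-and-fire).
# Constants and names are examples only, not BrainChip's design.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # membrane potential needed to fire
        self.leak = leak            # decay applied between events
        self.potential = 0.0

    def on_event(self, weight):
        """Process one incoming spike; return True if this neuron fires."""
        self.potential = self.potential * self.leak + weight
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing a spike
            return True
        return False

# A pixel that changes brightness produces an event; unchanged pixels
# generate no events and therefore cost no computation.
neuron = LIFNeuron()
for pixel_change in [0.3, 0.0, 0.5, 0.4]:
    if pixel_change:                # only changes are propagated
        if neuron.on_event(pixel_change):
            print("spike forwarded to next layer")
```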
Of course, CNNs are more common, so BrainChip has a way to convert those well-developed models to SNNs using its CNN2SNN tool. Likewise, the Akida platform can include hardware for converting input data, such as an image frame buffer, into the stream of changes that the SNN needs as input. An event-based camera like those from Prophesee can drive the SNN directly.
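As a rough illustration of the conversion workflow, the sketch below follows the general shape of BrainChip's CNN2SNN Python API. The model file name is hypothetical, and in practice the Keras model must be quantized before conversion; exact function names and steps vary by MetaTF version, so treat this as a sketch rather than a definitive recipe.

```python
from tensorflow import keras
from cnn2snn import convert  # BrainChip's CNN-to-SNN conversion tool

# Load a trained Keras CNN (file name is a placeholder). In practice the
# model should be quantized first; see the MetaTF docs for the exact flow.
keras_model = keras.models.load_model("my_cnn.h5")

# convert() produces an Akida-compatible, event-based model from the
# Keras model, ready to run on Akida hardware or in simulation.
akida_model = convert(keras_model)
akida_model.summary()
```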
BrainChip offers a PCI Express (PCIe) board (Fig. 2) with the AKD1000. The AKD1000 includes a 300-MHz, 32-bit Arm Cortex-M4. It also packs 512 MB of LPDDR4 SDRAM and a 128-Mb Quad SPI (QSPI) NOR flash memory. There's a PCIe 2.0 x1 host interface, too. As with other AI/ML hardware, the company provides a framework and API for working with the SNN neural processing unit (NPU) in the hardware. The MetaTF development environment provides software support for the hardware.
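What inference through the MetaTF runtime might look like is sketched below, assuming the akida Python package and a model saved in its .fbz format. The file name and input shape are placeholders, and the API names follow the package's public documentation, so verify them against the MetaTF version in use.

```python
import numpy as np
import akida  # MetaTF runtime package

# Load a converted Akida model (file name is a placeholder).
model = akida.Model("model.fbz")

# Enumerate attached Akida hardware; if an AKD1000 PCIe board is present,
# map the model onto its NPU. Otherwise the model runs in simulation.
devices = akida.devices()
if devices:
    model.map(devices[0])

# Run inference on a dummy input (shape is an example, not a requirement).
dummy_input = np.zeros((1, 224, 224, 3), dtype=np.uint8)
outputs = model.predict(dummy_input)
```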
BrainChip's hardware can also support temporal event-based neural networks (TENNs). Its TENNs-PLEIADES (PoLynomial Expansion in Adaptive Distributed Event-based Systems) is designed for spatio-temporal classification with event-based sensors like Prophesee's cameras.