NVIDIA’s Jetson series has been popular for machine-learning (ML) applications, but the modules have been relatively large. The new Jetson Nano (Fig. 1) lets developers pack the performance of a Jetson TX1 into an even more compact package: a 70- by 45-mm DIMM form factor designed for industrial environments. Priced at only $129 in quantities of 1,000 or more, the Jetson Nano will be a formidable ML target for developers.
1. The compact Jetson Nano DIMM packs in a 64-bit, quad-core CPU plus a 128-CUDA-core Maxwell GPGPU with 4 GB of DRAM.
The specs for the Jetson Nano are on par with the Jetson TX1, including a 64-bit, quad-core ARM Cortex-A57 CPU complex along with a 128-CUDA-core Maxwell GPGPU designed to handle video streams as well as ML chores. The system delivers 472 GFLOPS of performance.
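The quoted 472-GFLOPS figure is consistent with a half-precision (FP16) peak for the 128-core Maxwell GPU. A back-of-the-envelope check (the maximum GPU clock of 921.6 MHz is an assumption not stated here, as is the FP16 double-rate behavior):

```python
# Rough sanity check of the quoted 472-GFLOPS figure.
# Assumptions (not from the article): max GPU clock of 921.6 MHz,
# one fused multiply-add (2 FLOPs) per CUDA core per cycle,
# and 2x throughput for packed half-precision (FP16) on Maxwell.

CUDA_CORES = 128
GPU_CLOCK_GHZ = 0.9216   # assumed maximum GPU clock
FLOPS_PER_FMA = 2        # a fused multiply-add counts as 2 FLOPs
FP16_SPEEDUP = 2         # packed FP16 doubles per-cycle throughput

gflops_fp32 = CUDA_CORES * GPU_CLOCK_GHZ * FLOPS_PER_FMA
gflops_fp16 = gflops_fp32 * FP16_SPEEDUP

print(f"FP32 peak: {gflops_fp32:.0f} GFLOPS")  # ~236 GFLOPS
print(f"FP16 peak: {gflops_fp16:.0f} GFLOPS")  # ~472 GFLOPS
```

Under those assumptions, the arithmetic lands on 472 GFLOPS only at FP16; the FP32 peak would be about half that.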
In demos, the DIMM was able to process eight 1080p streams while using deep neural networks (DNNs) to identify objects in each stream. Video processing is enhanced by hardware encode and decode support. The module can encode one 4K, four 1080p, or eight 720p streams at 30 frames/s. It can decode one 4K stream at 60 frames/s, two 4K streams at 30 frames/s, eight 1080p streams at 30 frames/s, or 16 720p streams at 30 frames/s. There are a dozen MIPI CSI-2 D-PHY 1.1 lanes. The system can also drive two displays using HDMI 2.0, DisplayPort (DP) 1.2, or eDP 1.4, as well as DSI.
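The decode configurations listed above all work out to roughly the same aggregate pixel rate, which is a handy way to estimate whether some other mix of streams will fit. The stream mixes below come from the paragraph above; treating the decoder as a fixed pixels-per-second budget is a simplification on my part, not an NVIDIA spec:

```python
# Aggregate decode pixel rates for the stream mixes quoted above.
# The near-constant totals suggest the decoder can be viewed roughly
# as a fixed pixels-per-second pool (a simplification, not a spec).

def mpixels_per_s(width, height, fps, streams):
    """Aggregate pixel rate in megapixels per second."""
    return width * height * fps * streams / 1e6

mixes = {
    "1x 4K @ 60":    mpixels_per_s(3840, 2160, 60, 1),
    "2x 4K @ 30":    mpixels_per_s(3840, 2160, 30, 2),
    "8x 1080p @ 30": mpixels_per_s(1920, 1080, 30, 8),
    "16x 720p @ 30": mpixels_per_s(1280, 720, 30, 16),
}

for name, rate in mixes.items():
    print(f"{name}: {rate:.0f} Mpixel/s")
```

The first three mixes each total about 498 Mpixels/s, with the 16-stream 720p case slightly under that, so the listed limits are mutually consistent.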
The DIMM draws under 5 W or 10 W, depending on configuration and performance mode, allowing passive cooling to be used in most applications. External interfaces include a x4 PCI Express link and USB 3.0, as well as SDIO, SPI, and I2C ports. The system doesn’t include the wireless networking support found on the Jetson TX1, although it could easily be added via the external interface ports.
The DIMM comes with 4 GB of 64-bit LPDDR4 with a 25.6-GB/s bandwidth. There is also 16 GB of eMMC flash-memory storage.
2. The Jetson Nano Developer Kit includes the DIMM, but uses a MicroSD slot for non-volatile storage.
The Jetson Nano doesn’t run standalone; it’s designed to work with a carrier board. To get developers started, NVIDIA provides the $99 Jetson Nano Developer Kit (Fig. 2). Yes, it includes the DIMM and is less expensive than the DIMM alone, but the standalone DIMM is qualified for industrial use and will be available for at least five years. The kit runs the same software. Its DIMM omits the eMMC storage, relying instead on a MicroSD card for flash-memory storage.
The Jetson Nano can work with the same NVIDIA JetPack software suite as the Jetson TX1, Jetson TX2, and Jetson AGX Xavier (Fig. 3). The main differences are performance levels, power requirements, and size. The software support includes the training tools for developing models to run on the Jetson Nano.
3. The Jetson Nano fills out the lower end of NVIDIA’s ML spectrum. The same software runs on all three platforms.
The JetBot (Fig. 4) is an open-source project powered by the Jetson Nano Developer Kit. The big difference between it and other compact robots is the JetBot needs only a single camera to handle object recognition and collision avoidance. Of course, the Jetson Nano could handle much more, but it’s impressive to see how fast the robots can zip around while avoiding each other and nearby obstacles. This takes robotics to a higher level.
4. The JetBot is powered by the Jetson Nano Developer Kit.
All Jetson platforms support popular ML frameworks such as TensorFlow via NVIDIA’s TensorRT and its CUDA Deep Neural Network library (cuDNN). Tools like the company’s DeepStream SDK streamline video-analytics applications by making it easy for developers to connect hardware-accelerated building blocks. The Jetson Nano will also support NVIDIA’s TrustedOS.
NVIDIA only sells modules; the chips aren’t available separately. The Jetson Nano represents the smallest form factor and lowest power of all the modules in the family. The Jetson AGX Xavier needs up to three times the power of the Jetson Nano, but delivers well over 20X the performance, in addition to having a more advanced GPGPU with multiple ML hardware accelerators.
The Jetson Nano brings ML applications to smaller form factors while providing high-performance ML acceleration. The DIMM form factor allows developers to take advantage of future versions that deliver even more advanced support without changing their carrier board.