The embedded computing realm continues to expand even as embedded devices shrink. At one end of the spectrum are the “stylish” devices, where wearable computing meets fashion (Fig. 1). At the other end is the cloud, where massive clusters utilize as much power, performance, storage, and communication bandwidth as can be delivered by designers.
Wearable Computing
Depending on who you talk to, the current crop of wearable-computing products (see “Wearing Your Technology”) garners responses ranging from “the greatest thing” to “clumsy renditions of real fashion.” In any case, the trend is clear, and designers are doing their best to meet demand by reducing size and power requirements while increasing functionality. It’s not an easy task, though.
Improvements in sensors and sensor fusion technology will make a big difference. The drive for sensor enhancements is being fed by a number of sources, including smartphones and tablets. There’s also a move to asymmetric multicore mobile solutions to optimize power utilization (see “Hierarchical Processors Target Wearable Tech”).
Wireless communications is almost always a requirement. The new Bluetooth 4.2 standard provides a range of new features, including more robust location support and selective beacon connections. The latter, for instance, could let a smartwatch receive advertisements from a nearby local establishment that’s preferred by the wearer while ignoring others.
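A minimal sketch of how a wearable’s firmware might implement that kind of selective filtering, assuming advertisements have already been parsed by the Bluetooth stack (the types and names here are illustrative, not part of any particular stack’s API):

#include <array>
#include <cstdint>
#include <set>

using BdAddr = std::array<std::uint8_t, 6>;  // 48-bit Bluetooth device address

struct Advertisement {
    BdAddr address;  // advertiser's address; payload fields omitted
};

class BeaconFilter {
public:
    // Add an establishment the wearer has marked as preferred.
    void allow(const BdAddr& addr) { whitelist_.insert(addr); }

    // Pass only advertisements from whitelisted advertisers; ignore the rest.
    bool accept(const Advertisement& adv) const {
        return whitelist_.count(adv.address) != 0;
    }

private:
    std::set<BdAddr> whitelist_;
};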
Wearables are also headed into the wireless-charging arena. USB connectors are getting smaller, but they detract from the system design and collect dust. Wireless charging eliminates the connector and provides a more convenient solution.
However, wearable technology represents just the tip of the iceberg when it comes to the Internet of Things (IoT). IoT is designed to simplify information exchange, but the underlying infrastructure is very complex (Fig. 2). IoT support can be a challenge because most solutions tend to come from a single vendor or a pair of partnered providers. The Open Interconnect Consortium will be worth watching on this front. It’s working on an open connectivity framework that will include IP protection with certification and branding.
Security is one key aspect of IoT receiving more attention these days, especially given the rash of security breaches in other areas. The sheer number of future IoT devices opens up an ever-expanding attack surface. Tools like firewalls help, but application developers will need to treat security, if they haven’t already, as a natural part of application design and implementation.
Small Scale
Stackables like PC/104 still have a role to play in embedded applications. They remain a solid solution because of the plethora of products available and the ability to combine them easily. The PCI Express variants now offer a better complement of peripherals, and the bandwidth demands of some applications are pushing PCIe into this space.
Another issue that’s become more acute is the lack of ISA support in processor chips. It’s easy to design an ISA peripheral, but PCI Express (PCIe) bridge chips add more complexity to the motherboard.
One alternative is computer-on-modules (COMs); expect more options this year. Still, the challenge remains the variety of incompatible options available. A number of COM standards, such as PICMG’s COM Express standard, have made adoption easier and provided developers with a growth path.
SMARC (Smart Mobility ARChitecture) from the Standardization Group For Embedded Technologies (SGeT) is one of the latest on the scene. There are full-size SMARC systems like ADLINK’s LEC-iMX6 (Fig. 3) and half-size systems like Kontron’s SMARC-sXQU (Fig. 4). The SMARC-sXQU, based on Intel’s Quark X1000 series, has 1 GB of DDR3 DRAM. With a power envelope under 6 W, SMARC can target small-form-factor applications. In addition to SMARC, SGeT hosts the Qseven module standard.
In general, x86 platforms have dominated PC/104 and COM Express, while ARM processors dominate the compact COM space. Two emerging processor platforms to watch in this sector are Intel’s Quark and ARM’s Cortex-A50 series.
Medium Scale
Mid-range, board-level system designs include standards such as VME, VPX, CompactPCI, CompactPCI Serial, SHB Express, AdvancedTCA, and MicroTCA. ISA has effectively disappeared from this space as new designs utilize high-speed serial interfaces like PCIe and Ethernet becomes dominant. Still, parallel bus architectures like VME and CompactPCI have a long-term installed base in medical, military, and avionic applications, and they continue to undergo improvements in processor and peripheral support.
Of course, speed is a significant requirement for many applications, and 10 Gigabit Ethernet and PCIe Gen 3 are commonplace. For VPX, 40 Gigabit Ethernet, along with InfiniBand and Serial RapidIO, is the goal for 2015.
Most of the action remains in new processors, better storage, and more features. Interface standards like PCIe, which boards and backplanes are based on, will remain relatively stable for a while, though PCIe Gen 4 looms on the horizon. The biggest change will likely be a move from 1 Gigabit Ethernet to 2.5 Gigabit Ethernet, along with the matching migration to higher-speed Ethernet based on these rates.
In some markets, such as communications, the move toward software-defined networking (SDN) and network function virtualization (NFV) will cause some shifts in the sand. SDN will have the biggest impact in terms of hardware design, because it utilizes switch hardware that differs from conventional network switches. The result is a more flexible, upgradable system. On the other hand, NFV is essentially a virtual-machine (VM) server; therefore, existing hardware can address this software framework. Though SDN and NFV are often discussed at the same time, they’re distinct technologies that can be implemented independently of each other.
SDN and NFV are already major factors in the enterprise, but they will be found in the embedded arena, too. Both will have more influence on large-scale systems, even though they’re moving out from the enterprise toward the end nodes.
Large Scale
Racks of 1U servers will continue to support many applications. However, large-scale systems that provide public and private cloud services are looking to split compute, communication, and storage into large pools that can be allocated as needed. In this case, compute platforms have minimal communication and storage hardware. This approach requires a high-speed communication infrastructure that lends itself to large-scale environments but does not usually scale down satisfactorily.
This rack-scale-architecture (RSA) disaggregation approach differs from many blade-style systems that provide a hot-swappable version of 1U servers. These typically incorporate compute, communication, and storage on the same motherboard or blade with a fabric like Ethernet connected via a top-of-rack (ToR) switch.
The major component of RSA is the system interconnect. Fiber appears to be the holy grail for this approach. The latest fiber technology, silicon photonics, provides low-latency, high-speed communication. It also supports transfers over longer distances than copper.
In the end, RSA addresses efficiency using very large resource clusters. It will be useful for companies with large clouds like Amazon and Facebook, and may prove effective in smaller-scale environments down the road.
Flash-Storage Improvements
Regardless of the scale, storage remains a critical factor in embedded-system design. Flash storage continues to influence changes in non-volatile storage, both in its usage and how it’s viewed. Rotating magnetic media remains a critical element within many applications, but the storage hierarchy typically has a flash-memory component in the mix. Even hybrid hard-disk drives combine magnetic and flash storage in the same package (see “Seagate Delivers 2nd Generation Hybrid Hard Drive”). Though these typically target laptop and tablet applications, they’re applicable to a variety of embedded applications as well.
In 2014, we saw the release of 12-Gb/s SAS drives and controllers like Avago Technologies’ LSI MegaRAID 9361-8i (Fig. 5). They can handle hard-disk storage such as Seagate’s 2.5-in., 15K, STM600MX series of 12-Gb/s enterprise drives. Of course, the higher bandwidth will also benefit flash drives that have significantly lower latency and higher bandwidth.
The challenge for users comes down to flash-drive selection due to the array of types, form factors, functionality, and provisioning. For instance, there’s single-level-cell (SLC), multi-level-cell (MLC), and triple-level-cell (TLC) NAND flash. Form factors and interfaces include drives like the 2.5-in. Micron M500DC (Fig. 6), which has a SATA interface. The 20-nm MLC enterprise drive with a five-year lifetime can handle three full drive writes per day (see “Enterprise SSD Targets Big Data Applications”).
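To put a rating like that in perspective, endurance works out roughly as capacity times writes per day times days of service. A quick back-of-the-envelope calculation, assuming a 480-GB capacity purely for illustration (actual capacities and ratings vary by model):

#include <cstdio>

int main() {
    const double capacity_gb = 480.0;  // assumed capacity for illustration
    const double dwpd        = 3.0;    // rated full drive writes per day
    const double years       = 5.0;    // rated lifetime

    // Total data written over the drive's rated life, in terabytes.
    const double endurance_tb = capacity_gb * dwpd * 365.0 * years / 1000.0;
    std::printf("Approximate endurance: %.0f TB written\n", endurance_tb);
    return 0;
}

For the assumed 480-GB drive, that comes to roughly 2.6 PB written over five years.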
NVM (non-volatile memory) Express (NVMe) is another flash interface that’s rising in popularity. Based on PCI Express and available in board or disk form factors, it brings flash closer to the processor. Furthermore, it eliminates the need for a SAS/SATA interface.
Diablo Technologies’ Memory Channel Storage (MCS) technology moves flash even closer to the processor (see “Memory Channel Storage Puts SSD Next to CPU”). Systems can mix MCS flash and DRAM DIMMs, eliminating even the PCI Express interface overhead.
The hardware is finally in place to allow some interesting non-volatile memory storage hierarchies for enterprise environments such as SanDisk’s non-volatile memory file system (NVMFS). NVMFS allows low-level atomic writes that can provide a significant performance boost to database applications, since the applications needn’t implement a double-buffer scheme (see "Flash Software Rules in Hierarchical Storage”).
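For context, the double-buffer scheme that atomic writes make unnecessary looks roughly like the following on a conventional file system. This is a simplified sketch using standard POSIX calls; the file descriptors, page size, and lack of error handling are illustrative only:

#include <fcntl.h>
#include <unistd.h>
#include <cstddef>

constexpr std::size_t kPageSize = 4096;  // assumed database page size

// Classic double write: persist the page to a journal first, then update
// it in place, so a crash mid-update never leaves a torn page on disk.
// With file-system-level atomic writes, the journal step can be skipped.
void write_page_double_buffered(int journal_fd, int data_fd,
                                const char* page, off_t offset) {
    pwrite(journal_fd, page, kPageSize, 0);    // 1. journal copy
    fsync(journal_fd);                         //    make it durable
    pwrite(data_fd, page, kPageSize, offset);  // 2. in-place update
    fsync(data_fd);                            //    make it durable
}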
New C++
Developers will be able to take advantage of new services like NVMFS using the latest C++ standard, C++14 (see “C++14 Adds Embedded Features” on electronicdesign.com). Broader use of the auto keyword can simplify applications: the compiler determines a variable’s data type rather than the developer having to specify it explicitly. C++14 extends this deduction to generic lambdas, whose parameters can now be declared auto, building on the lambdas introduced in C++11.
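For example, auto keeps iterator declarations short, and a C++14 generic lambda accepts auto in its parameter list:

#include <algorithm>
#include <vector>

int main() {
    std::vector<int> samples{3, 1, 4, 1, 5};

    // auto deduces the iterator type returned by max_element.
    auto peak = std::max_element(samples.begin(), samples.end());

    // C++14 generic lambda: the parameter type is deduced per call.
    auto square = [](auto x) { return x * x; };

    return square(*peak) > 0 ? 0 : 1;
}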
Of course, it may be features like digit separators, which make numeric literals more readable, that will endear C++14 to embedded developers, as in this example:
auto mac_address = 0x01'23'45'ab;
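Binary literals, also new in C++14, pair with digit separators to keep register masks readable:

auto status_mask = 0b0001'0000'1000'0001;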
Sometimes it’s the little things that make all the difference.