More cores, more flash, and low power remain the themes for new digital technology this year. Add in microelectromechanical systems (MEMS)—where prices are falling and functionality is on the rise—as well as new network technology and the latest transistor technology, and 2012 is shaping up to be quite exciting for designers.

Several technologies are pushing the transistor envelope. For example, Intel’s 3D tri-gate transistor technology (Fig. 1) will be in this year’s Ivy Bridge chips (see “Moore’s Law Continues With 22-nm 3D Transistors” at electronicdesign.com).

SuVolta’s Deeply Depleted Channel (DDC) transistor architecture (Fig. 2) improves on the standard planar bulk CMOS transistor (see “DDC Transistor Brings Low Power And High Performance To Portable Devices” at electronicdesign.com). SuVolta is targeting the embedded system-on-a-chip (SoC) market.

Smaller geometries, faster processors, and lower power are the target for these new approaches, which will begin to appear in 2012. Yet there also will be a lot of transistor activity in chips that aren’t necessarily pushing the edge, from tiny micros to GPUs.

Big And Little Micros

The span of processor technology is very wide, ranging from micros that fit into 2- by 2-mm packages to processors with hundreds of cores on a single chip. Minimal power usage remains the watchword regardless of size. At the low end, this means maximizing battery life. At the high end, it means keeping the system cool.

Hardware acceleration is creeping into even very small micros. One example is Microchip’s Configurable Logic Cell (CLC), found in the company’s PIC10F32X line (see “The Best Digital Technology Comes In Small Packages” at electronicdesign.com). It provides basic CPLD-like (complex programmable logic device) logic, eliminating the need for external chips while reducing processor overhead and, in turn, power requirements.

Meanwhile, Cypress Semiconductor’s PSoC (programmable SoC) provides configurable peripherals with a platform that’s a step down from an FPGA in terms of customization.

Asymmetric multicore architectures are common in SoCs, but they will be a big trend this year in standard chips as well. Nvidia’s quad-core Cortex-A9 Tegra 3 actually has a fifth low-power core, switching between the four main cores and the low-power companion core as the workload demands.

NXP’s LPC4000 blends a Cortex-M0 and Cortex-M4, while Freescale plans to mix a Cortex-A5 with the Cortex-M4. In both cases, the Cortex-M4 handles DSP chores while the other core handles communications. Of course, the Cortex-A5 is a bit heftier and can run operating systems (OSs) like Linux.

Also, keep an eye on ARM’s 64-bit offering, which may see hardware this year (see “ARM Joins The 64-Bit Club” at electronicdesign.com). The ARMv8 architecture will raise the low-power bar and move into servers now dominated by x86_64 chips.

Symmetrical multiprocessing (SMP) multicore chips will still dominate everything from smart phones to servers. They simply match the available software too well. These platforms also typically support virtualization. Virtualization is the basis for “the cloud,” and it’s invaluable for embedded designs where multiple operating systems allow the mixture of secure and non-secure clients as well as legacy clients.

Things get more interesting with a hundred cores on one chip like Tilera’s Tile Gx (see “100 x 0.5-W Cores = Cloud Computing” at electronicdesign.com). Developers also will be able to get their hands on Intel’s MIC (Many Integrated Core) platform (see “Get Ready For Some Hard Work With Multicore Programming” at electronicdesign.com).

Programmers have already addressed SMP non-uniform memory access (NUMA) architectures. But these many-core platforms offer their own advantages and challenges, especially in terms of scalability to multiple chips and general issues with I/O.

GPUs have been packing hundreds of cores into a single package, and their use as computational elements is going to increase further. One big difference will be the use of this capability in single-chip solutions like those in smart phones and tablets.

Previously, the GPUs in these single-chip devices tended to be low-end designs that lacked this capability. That’s changing as computational demands rise. Likewise, display support will continue to push resolution, frame rates, and 3D support.

Security hardware is becoming much more common, but developers will need to differentiate the features and target application support. Hardware encryption and random number generators are invaluable, though on their own they do little to prevent counterfeiting.

Anti-counterfeit technology is becoming more commonly available, so it’s no longer just for financial or high-value applications. On-chip key storage also opens possibilities, including key management and secure boot.

PC platforms can now take advantage of the Unified Extensible Firmware Interface (UEFI), which replaces the PC BIOS. Secure boot is possible with the addition of the Trusted Platform Module (TPM). UEFI is platform-independent, so ARM-based solutions may appear this year.

FPGAs Go Micro

FPGAs started as completely blank slates of lookup tables (LUTs), flip-flops, and an interconnect fabric. Hard logic was included for I/O, and DSP blocks have long been common. Hard-core processors found limited use in high-end systems, but they were more like super-DSP blocks. This is changing with the incorporation of full microcontroller cores. It also builds on the trend toward soft-core processors in new designs, since hard-core logic is usually faster and more power-efficient.

Microsemi’s SmartFusion has been available for some time (see “FPGA Combines Hard-Core Cortex-M3 And Analog Peripherals” at electronicdesign.com). This single chip combines a Cortex-M3 microcontroller with a programmable analog subsystem and a standard FPGA fabric. It includes a full complement of microcontroller peripherals and can operate independently of the FPGA fabric.

The system is designed so the micro can take advantage of the FPGA, but this approach gives developers a standard software target. It’s more of a microcontroller with an FPGA front end rather than an FPGA with a processor core.

Xilinx’s Zynq-7000 EPP FPGA family upped the ante with a pair of Cortex-A9 processors (see “FPGA Packs In Dual Cortex-A9” at electronicdesign.com). Like SmartFusion, its set of peripherals allows the micros to operate without depending upon the FPGA. This is especially handy because Xilinx’s part is RAM-based. In theory, the micro can dynamically program the FPGA. Zynq-7000 chips will be readily available this year.

Altera is also following the Cortex-A9 route (see “Dual Core Cortex-A9 With ECC Finds FPGA Home” at electronicdesign.com). The addition of error correction coding (ECC) support makes it very desirable for safety-critical applications.

Microcontroller-based FPGAs will not eliminate the need for more conventional FPGAs. In fact, conventional FPGA platforms will likely proliferate, targeting everything from smart phones to EDA simulation platforms.

Interconnects And Networking

Surprises are unlikely for interconnects and networking primarily because of the standards processes, the level of backward compatibility required, and the momentum behind existing hardware. This means USB 3.0, PCI Express 3.0, 1-Gbit Ethernet, and 6-Gbit/s SATA will be the norm for most motherboards.

Also, 12-Gbit/s SAS will be hitting the enterprise. PCI Express 4.0 is on the drawing board, though it’s years out. A few developers could use its bandwidth, but PCI Express 3.0 is more than fast enough for most applications.

Embedded developers will be able to take advantage of backward compatibility as USB 2.0 and 10/100 Ethernet are less taxing on low-end hardware. Low-end interfaces like CAN/LIN, SPI, and I2C will continue to be standard fare on microcontrollers. Quad-SPI use continues to rise with some micros supporting direct execution from fast, nonvolatile serial storage.

EtherCAT and IEEE 1588 support are becoming common options in microcontrollers as CAN runs out of steam for many applications. The OPEN (One-Pair Ether-Net) Alliance is bringing 10/100 Ethernet to automotive applications. This is definitely a game changer in the automotive arena, filling the space between CAN and MOST.

OPEN also can deliver power over the same unshielded, twisted-pair cable. It will be ideal for other application areas like robots and control systems where CAN or conventional Ethernet currently dominates.

Lower-power, lower-cost parts will continue to be the norm for wireless support. There will be the usual cast of characters including Wi-Fi, Bluetooth, 802.15.4, ZigBee, and variations like RF4CE. Wi-Fi Direct, the new peer-to-peer standard, will emerge in 2012 and give Bluetooth some competition.

Sensors

Smart phones and tablets have been driving down the price of MEMS sensors in addition to reducing size and power requirements. Best of all, more sensors are sporting digital interfaces, making incorporation into systems significantly easier.

Analog sensors are less expensive and can be customized, but integrating them is beyond most software developers because of the devices’ often arcane nature. Calibration and drift management aren’t easy chores. They’re now usually handled by a micro that sits between the sensor and the host.

Integration of multiple sensors into a single device is becoming more common. Sensors with 10 degrees of freedom, such as those that include a 3D gyro, a 3D accelerometer, a 3D magnetometer, and a pressure sensor, are now available. These inertial measurement units (IMUs) compensate for temperature fluctuations and require only a single digital interface.

MEMS and other sensors aren’t restricted to motion and position detection. Pressure, temperature, and even gas and liquid analysis sensors are available. Integration of sensors with micros is on the rise here as well, leading to packaging challenges since these sensors typically need to be exposed.

Finally, there are optical sensors including digital camera chips. Again, smart phones and tablets plus digital cameras are garnering a sizable chunk of these chips, but many are finding their way into robotics, surveillance systems, and even Advanced Driver Assistance System (ADAS) automotive applications. OPEN Ethernet also could come into play in camera ADAS applications.

Memory And Storage

DDR3 remains the standard for PCs, servers, and other computational platforms. DDR4 is still in the future, while LPDDR3 will be coming online this year. DDR2 and other earlier memory platforms will remain, giving embedded developers options.

Flash memory technology remains hot, but nonvolatile alternatives like FRAM, MRAM, and phase-change memory are staking out territory where their features beat flash. RAID caches and serial memories are just a few places where these technologies are being employed. Still, there is extremely high demand for NAND and NOR flash, both as standalone chips and as embedded memory.

One place where single-level cell (SLC) and multi-level cell (MLC) flash memory is being used is SATA and SAS solid-state disk (SSD) storage. SLC maintains the edge in performance and lifetime, but MLC has moved even into enterprise storage as flash memory controller technology improves. In a couple of years, two-thirds of flash storage will wind up in smart phones and tablets, where users will turn to the cloud for additional capacity.

Look for a consolidation in the enterprise SSD market as well as a marked differentiation between enterprise and client SSDs. There will continue to be increased demand for self-encrypting drives (SEDs) in the enterprise. SSDs form the fastest tier of a three-tier storage hierarchy, followed by 15K/10K-rpm SAS drives and then 7,200-rpm archival storage.

SATA and SAS SSDs will be getting competition from two emerging standards, Nonvolatile Memory Express (NVMe) and SCSI Express (SCSI on PCI Express). Both are based on PCI Express, with the express goal of providing high-bandwidth access to solid-state memory.

NVMe brings a new interface to the mix (see “NVM Express Delivers PCIe SSD Access” at electronicdesign.com), while SCSI Express essentially puts a SCSI controller interface on the PCI Express side and a flash controller on the other. The advantage of the SCSI Express approach is that it looks like a SAS controller and uses the same OS drivers, so the storage device behind it can be anything from on-board flash to a SAS hard drive.

NVMe and SCSI Express can be implemented as plug-in adapters, on motherboards, or even as standalone drives. The main difference between a SATA or SAS drive and these PCI Express drives is the ability to aggregate PCI Express lanes; SATA and SAS have a single connection.

Hard-drive consolidation has already occurred, and natural disasters have put a pinch on product delivery. Capacity growth is slowing, with 1 Tbyte/platter remaining the 3.5-in. ceiling for at least the next year. Slowing capacity growth will make system performance more important.

Hybrid solid-state/hard drives like Seagate’s 5,400-rpm Momentus XT deliver two-tier performance that’s close to all-flash speeds but at a much lower cost (see “Controller Combines Flash Speeds And Hard-Drive Capacity With Transparent Optimization” at electronicdesign.com).

Touchy Displays

LCDs and e-paper technologies dominate the smart-phone, e-book, and tablet market. Most incorporate touch interfaces, and multitouch is in demand. Resistive multitouch will be getting a leg up with ROHM Semiconductor’s responsive offering.

LED-matrix multitouch offerings are going to give capacitive options competition too. They have the cost advantage, but capacitive touch sensors will still lead in responsiveness and multitouch functionality.

For capacitive touch, watch Cypress Semiconductor and its TrueTouch Gen 4 technology. It allows direct lamination of the shield and sensor layers, providing better performance at a lower cost.

Embedded developers will definitely have plenty of choices on the hardware side to complement their software applications this year.