Researchers at the Massachusetts Institute of Technology have developed a new chip that could run AI programs locally, making conclusions about data without having to route it through the cloud. (Image courtesy of Thinkstock).

Artificial Intelligence Gets its Head out of the Cloud

Feb. 22, 2016
MIT researchers have developed a new chip that could run AI programs locally, making conclusions about data without having to route it through the cloud.

Artificial intelligence researchers are teaching programs to identify objects, recognize patterns, and even learn from experience like humans. These tasks are made possible using so-called neural networks, programs with millions of interconnected neurons that simulate the human brain. While these networks have incredible potential, they often have to process information over the Internet.

Now, a research team has developed a new chip that could bring the same artificial intelligence to the hardware of smartphones, sensors, and other mobile devices. The new technology, developed by researchers at the Massachusetts Institute of Technology, could enable these gadgets to run programs on chips themselves, making conclusions about the data without having to route it through the cloud.

Neural networks typically run on very powerful graphics processing units (GPUs)—the same chips that are used to display images on a screen. The new chip, which the researchers dubbed “Eyeriss,” is 10 times more efficient than a typical mobile GPU, which contains about 200 processing cores.

The research team built Eyeriss with 168 processing cores, simultaneously increasing its speed and lowering its power consumption. Joel Emer, a research scientist at Nvidia, which specializes in graphics chips for robotics and autonomous vehicles, worked with the research team.

Neural networks are designed with millions of simple processing nodes that work together like neurons sparking information across the human brain. They are generally organized into layers, each of which contains processing nodes connected by artificial synapses. The nodes divide up the data, manipulate it in some way, and then send it to the next layer for additional processing. This process continues until the program has run its course.
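The layered flow described above can be sketched in a few lines of code. This is a minimal, illustrative feedforward pass, not the architecture of any specific network mentioned in the article; the weights, biases, and tanh activation are assumptions chosen for simplicity.

```python
import math

def layer_forward(inputs, weights, biases):
    """One layer: each node takes a weighted sum of the previous
    layer's outputs, adds a bias, and applies an activation."""
    outputs = []
    for node_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(node_weights, inputs)) + bias
        outputs.append(math.tanh(total))  # squash into (-1, 1)
    return outputs

def network_forward(inputs, layers):
    """Pass the data through each layer in turn, as described above,
    until the final layer produces the network's output."""
    activations = inputs
    for weights, biases in layers:
        activations = layer_forward(activations, weights, biases)
    return activations

# A tiny 2-input -> 2-node hidden -> 1-node output network
# with made-up weights, purely for illustration.
layers = [
    ([[0.5, -0.6], [0.8, 0.2]], [0.1, -0.3]),  # hidden layer
    ([[1.0, -1.0]], [0.0]),                    # output layer
]
print(network_forward([1.0, 0.5], layers))
```

Real networks differ mainly in scale (millions of nodes) and in how the weights are learned, but the layer-by-layer hand-off is the same.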

Google and Facebook are among the most prominent companies turning to neural networks, making programs that can recognize patterns in search-engine results, for instance, or even in medical records in an attempt to help treat diseases. Neural networks attempt to find correlations between raw data and labels applied by human programmers. Newer programs—like the Go-playing program built by Google’s DeepMind research team—also employ reinforcement learning, allowing the program to teach itself new strategies for solving problems.

“You can imagine that if you can bring that functionality to your cell phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection,” says Vivienne Sze, who led the research team. Sze adds that processing locally may be desirable for privacy reasons, or else to avoid the time it takes to send data back and forth from the cloud.

The researchers made Eyeriss more efficient and powerful by limiting the amount of power it consumes. A major drain on neural-network processors is exchanging data between the processing cores and memory. Unlike traditional GPUs, where all the cores connect to a single memory bank, each of Eyeriss’ cores has its own memory. The cores don’t have to waste time sending data to and from distant memory chips. In addition, data can be shared among cores without having to reroute it through memory.
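A simple counting model shows why local memory matters. The figures below are assumptions for illustration (the article gives only the core count, 168), not Eyeriss measurements: the model just tallies transfers to a shared bank versus fetches into per-core memory that each get reused several times.

```python
def traffic_shared_bank(num_cores, ops_per_core):
    """Every operation reads one operand from and writes one result to
    the single shared memory bank: two transfers per operation."""
    return 2 * num_cores * ops_per_core

def traffic_local_memory(num_cores, ops_per_core, reuse_factor):
    """Each operand is fetched into a core's local memory once and
    reused `reuse_factor` times before the next main-memory fetch."""
    fetches = (num_cores * ops_per_core) // reuse_factor
    return 2 * fetches

# 168 cores (as in Eyeriss); ops_per_core and reuse_factor are made up.
shared = traffic_shared_bank(168, 1000)
local = traffic_local_memory(168, 1000, reuse_factor=8)
print(shared, local)  # local traffic is 8x lower in this toy model
```

The point of the sketch: with any reuse at all, main-memory traffic — the dominant energy cost — drops in proportion to how often each locally cached value is reused.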

The researchers also built Eyeriss with special circuitry that manages both the manipulated data and the instructions sent into each processing core. The new circuits distribute that data among the cores in a way that maximizes the work each core can do before fetching more data from main memory. Another circuit compresses data before sending it to individual cores.
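The article doesn’t specify which compression scheme Eyeriss uses, but neural-network data is typically full of zeros, so run-length encoding is one simple way to illustrate how compressing data before it travels to the cores cuts traffic. This is a generic sketch, not the chip’s actual circuit.

```python
def rle_compress(values):
    """Collapse runs of repeated values into [value, count] pairs,
    so long stretches of zeros shrink to a single pair."""
    compressed = []
    for v in values:
        if compressed and compressed[-1][0] == v:
            compressed[-1][1] += 1
        else:
            compressed.append([v, 1])
    return compressed

def rle_decompress(pairs):
    """Expand [value, count] pairs back into the original sequence."""
    return [v for v, n in pairs for _ in range(n)]

# Mostly-zero data, typical of neural-network activations.
activations = [0, 0, 0, 7, 0, 0, 3, 3, 0, 0, 0, 0]
packed = rle_compress(activations)
print(packed)  # 12 values become 5 pairs
assert rle_decompress(packed) == activations
```

Fewer values on the wire means fewer memory transfers, which is exactly the energy cost the chip’s designers were trying to minimize.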

The researchers suggest that the new chip could help bring artificial intelligence to sensor technology being used to build out the Internet of Things (IoT). With access to artificial intelligence, sensors embedded in manufacturing equipment, buildings, or even animals could transmit conclusions about their surroundings rather than just the raw data.

Other companies have also been trying to put artificial intelligence into chips, casting the structure of the human brain into silicon transistors. These so-called “neuromorphic chips” are built with huge numbers of programmable points that act like the neurons and synapses within the brain. In 2014, for instance, scientists from IBM revealed a new “brain-inspired” chip called TrueNorth with one million programmable neurons, 256 million programmable synapses, and 46 billion synaptic operations per second per watt. Though only the size of a postage stamp, the chip contains around 5.4 billion transistors.

General Vision, a neuromorphic chipmaker, has built the NeuroMem CM1K chip. According to the company’s website, this device “can solve pattern recognition problems for text and data analytics, vision, audio, and multi-sensory fusion” with much less power than modern microprocessors. These chips, which have been licensed to Intel and others, can be linked together in bunches to form powerful neural networks.

With these neuromorphic chips, manufacturers are trying to make devices more powerful and yet more self-contained. “These brain-inspired chips could transform mobility, via sensory and intelligent applications that can fit in the palm of your hand,” says Dr. Dharmendra S. Modha, IBM’s chief scientist of brain-inspired computing.

