The Silicon Valley laboratory of Intermolecular, a maker of advanced materials like metal oxides and nitrides used in semiconductors. (Image courtesy of Intermolecular).
Brian Krzanich, Intel's chief executive, has talked up the company's swift move into markets like cars and sensors, but he has been forced to admit a slowdown elsewhere. Intel's engineers now take two and a half years to double the number of transistors in new generations of chips, instead of the two years prescribed by Moore's Law.
The delays have fueled concerns that Moore's Law is ending. As individual transistors are made smaller than viruses, electrical leakage has prevented engineers from making chips faster and more power-efficient. Moreover, the cost of making smaller transistors has stopped falling and chip makers are spending a fortune on manufacturing tools.
But a panel of semiconductor experts at the South by Southwest festival in Austin, Texas, is not crying over the possibility that the chip industry has reached the end of its decades-old guidebook. The panelists said that the industry would move toward designs that drive down the power consumption of chips rather than the size of transistors.
“Moore was just saying if you give me a dollar, I’ll give you a hundred transistors. In two years, if you give me a dollar, I’ll give you two hundred transistors,” said Tom Conte, an electrical engineer at the Georgia Institute of Technology and chairman of the Rebooting Computing conference in Washington, D.C.
"He didn’t say anything about them being smaller or faster."
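Conte's framing of the law is simply an exponential in transistors per dollar. A few lines of Python sketch the arithmetic, using only the hypothetical hundred-transistor starting point from his quote:

```python
# Conte's framing: Moore's Law as transistors-per-dollar doubling
# every two years. The starting figure of 100 is his illustrative
# example, not a real transistor count.

def transistors_per_dollar(years, start=100, doubling_period=2):
    """Transistors one dollar buys after `years`, doubling each period."""
    return start * 2 ** (years / doubling_period)

for years in (0, 2, 4, 10):
    print(years, int(transistors_per_dollar(years)))
# 0 -> 100, 2 -> 200, 4 -> 400, 10 -> 3200
```

Note that nothing in this formulation requires the transistors to shrink; any route to cheaper transistors keeps the curve going.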
Tsu-Jae King Liu, a professor at the University of California, Berkeley, who invented the FinFET transistor with her colleagues in 1998, suggested that the steady scaling down of transistors would eventually give way to new innovations. “Innovation is going to be the driver of technology advancement in the future,” she said.
Chip scientists are eyeing chips with "orders of magnitude improvements in energy efficiency," Liu said. Ideally, these chips would be efficient enough to be paired with microphones to understand speech and cameras to classify images without consulting the cloud. Others could be powered by ambient light or wireless signals, lasting for years without a battery replacement.
To boost performance and cut down on power consumption, many chip makers are turning to specialized chips that handle tricky tasks like machine learning and image recognition better than general-purpose processors. The latest sign of that shift came from Intel, which recently said that it was spending $15.3 billion on Mobileye's hardware and software for processing images in autonomous cars.
The deal suggested that it would have been far too expensive and time-consuming for Intel to make a traditional chip match Mobileye's. “The economics of shrinking chips raises the question of who can really afford to make them and buy them,” said Greg Yeric, an ARM Fellow. ARM's chip designs power the vast majority of smartphones and, increasingly, the electronics inside cars.
These shifting tides also show in other acquisitions by Intel, whose co-founder Gordon Moore made the observation that became Moore's Law. In the last two years, the company has picked up FPGA technology through its $16.7 billion purchase of Altera and A.I. accelerator chips from the start-up Nervana Systems. The deals could help it stay relevant to customers like Google, whose custom chips run algorithms faster and more efficiently in data centers.
“Any future transistor technology has to be much more efficient than technology today,” said Liu, who last year joined Intel's board.
Many scientists are drawing inspiration from neuroscience, building artificial neurons that let silicon chips act more like the human brain. Researchers from IBM, for instance, are also exploring chip materials that melt and harden at different temperatures, introducing randomness to the system and making transistors act more like real neurons.
IBM has already developed a type of neuromorphic chip, called TrueNorth, for use in supercomputers at the National Nuclear Security Administration. The chip is significant because IBM says it can classify up to 2,600 image frames per second, around ten times faster than existing chips, while drawing as little as 25 milliwatts of power.
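Those two reported figures imply a per-frame energy budget. A quick back-of-the-envelope calculation (using only the numbers above) puts it in the microjoule range:

```python
# Rough arithmetic on the reported TrueNorth figures: 2,600 frames
# per second at about 25 milliwatts (0.025 W) of power.

def energy_per_frame_uj(power_watts, frames_per_second):
    """Energy spent per classified frame, in microjoules."""
    return power_watts / frames_per_second * 1e6

print(round(energy_per_frame_uj(0.025, 2600), 1))  # ~9.6 microjoules
```

That roughly ten-microjoule figure, rather than raw speed, is the headline for battery-powered or ambient-powered devices.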
“When Google went out and beat a world champion at Go, it used a neuromorphic approach, but it consumed kilowatts, whereas the [brain of the] champion that eventually beat AlphaGo consumed ten watts,” Conte said.
There is also the possibility of recycling devices that have piggybacked on chips, like microelectromechanical systems. These microscopic switches, known as MEMS, can be fabricated in chip factories and are widely used in accelerometers and other sensors. Liu said that such mechanical structures could make for millivolt computers, which could consume a millionth of the power of a one-volt chip.
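The million-fold figure follows from how switching power scales in digital logic: dynamic power is roughly proportional to the square of the supply voltage, so cutting the supply by a factor of a thousand cuts power by a factor of a million. A minimal sketch of that scaling (assuming capacitance and clock frequency are held fixed):

```python
# CMOS dynamic switching power scales roughly as C * V^2 * f.
# Holding C and f fixed, the power ratio between two supply
# voltages depends only on the square of the voltage ratio.

def dynamic_power_ratio(v_old, v_new):
    """Factor by which dynamic power drops going from v_old to v_new."""
    return (v_old / v_new) ** 2

print(round(dynamic_power_ratio(1.0, 0.001)))  # 1 V -> 1 mV: a million-fold drop
```

In practice, leakage and the difficulty of switching transistors reliably at such low voltages are what make millivolt operation hard, which is why mechanical MEMS switches, which leak essentially nothing when off, are attractive here.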
Other options, Yeric said, include so-called "frankenchips," which combine different types of computing, or inexact computers that save power by spitting out calculations just accurate enough for tasks like image processing.
“I think it’s a question of pain versus gain,” said Conte, when asked about what technology might win out. “The least painful is finding a way to replace CMOS and life will be wonderful. But the majority of the approaches are going to require us abandoning billions of dollars we have invested in software. And that is disruptive."
"This isn’t as simple as porting from one to another," he added. "This is fundamentally changing how we program different systems.”
In the meantime, Yeric said, the shift away from one-size-fits-all computing could relieve some of the industry's pressure to further shrink transistors. “Now we are talking about slowing Moore’s Law. Now, we are talking about giving bigger changes breathing room,” Yeric said. “And that’s the exciting part.”