Tiny Transistors! Giant Molecules! Moore’s Law Crashes Into The Laws of Physics

April 9, 2012
The evolution of the transistor has seen expensive failures, billion-dollar success stories, and a fair share of missed opportunities. Digging back in the literature when single transistors were coming to market, the perspective is interesting. Everyone saw the value of replacing the vacuum tube, but initially nobody envisioned multiple devices on a single chip.

For a brief period, people didn’t see that next step. Moore’s law, the observation that the number of transistors on a chip would double approximately every two years, got its name around 1970, when the state of the art was four NAND gates on a single chip. When the Intel 4004 was fabricated in 1971, Moore’s law moved inside the chip, and the race for smaller, faster, and more powerful ICs was underway.

The 4004 was built in a 10-µm PMOS process, with 4 bits of processing power, 2300 transistors, and a 740-kHz clock. By 2011, 28-nm devices with 3.9 billion transistors were commercially available, and even those will be outdated soon. But will this growth end someplace?
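
As a quick sanity check on those two data points, here’s a minimal Python sketch. The transistor counts and years are the ones quoted above; nothing else is assumed:

```python
import math

t_1971, t_2011 = 2300, 3.9e9
years = 2011 - 1971

doublings = math.log2(t_2011 / t_1971)   # how many times the count doubled
years_per_doubling = years / doublings

print(f"{doublings:.1f} doublings in {years} years")
print(f"one doubling every {years_per_doubling:.1f} years")
# -> about 20.7 doublings, one every ~1.9 years: remarkably close to
#    "approximately every two years"
```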

Delaying The Inevitable

Semiconductors have hit a number of bumps in the road trying to make things smaller and faster. Many designers have claimed that the end is near. Yet engineers have found some creative ways to avoid the “Big End” to semiconductor scaling.

So, what has changed? And is the end really near? The doomsday predictions of Y2K are ancient history, and the Mayan calendar may run out, but somehow we keep on progressing. Pure physics may win out in some areas, yet there are clever ways to sidestep some of the issues.

Speaking of physics, do our chips need “warp drive” capability yet? Some would argue that this isn’t really relevant, that the speed of light has nothing to do with signal propagation down a wire. But it’s supposedly the Holy Grail of speed limits, right?

For a rough approximation, with a 5-GHz clock a signal transition travels at most 6 cm in one clock period, with everything perfect. Electromagnetic propagation is slower on a wire, but interconnect RC limitations cause problems long before we run into that Holy Grail of speed limits.
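
To see where the 6-cm figure comes from, here’s a short sketch. The dielectric constant of 4 used for the wire case is an illustrative value for a typical board material, not a number from the article:

```python
# How far a signal gets in one clock period, in vacuum and on a wire.
c = 3.0e8             # speed of light, m/s
f_clk = 5.0e9         # 5-GHz clock
period = 1.0 / f_clk  # 200 ps

d_vacuum = c * period              # best case, everything perfect
d_wire = (c / 4 ** 0.5) * period   # v = c / sqrt(K), assumed K = 4

print(f"vacuum: {d_vacuum * 100:.0f} cm per clock period")      # ~6 cm
print(f"wire (K = 4): {d_wire * 100:.0f} cm per clock period")  # ~3 cm
```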

At geometries smaller than 0.25 µm, the RC interconnect delay generally has been the speed bottleneck, not the transistors themselves! Slower aluminum connections evolved to copper for lower resistance. Along with copper came the planarization of metal layers, where the interconnections are flattened out, layer by layer.

With older methods, connection stacking turned into a mess of hills and valleys, which limited layers. Planarized copper eliminated this problem. Many ICs today use 10 metal layers without any problems. The latest buzz is over graphene, which has good potential for low-resistance interconnects (see “Graphene Ready To Conquer The Terahertz Terrain”).
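
To put rough numbers on the aluminum-to-copper move, consider the resistance of a 1-mm on-chip wire. The cross-section below is an illustrative guess and the resistivities are bulk values (thin films run higher), so treat this as a scaling sketch, not a process model:

```python
rho_al = 2.65e-8   # resistivity of aluminum, ohm-m (bulk value)
rho_cu = 1.68e-8   # resistivity of copper, ohm-m (bulk value)

length = 1e-3              # 1-mm wire
area = 100e-9 * 200e-9     # assumed 100-nm x 200-nm cross-section

r_al = rho_al * length / area
r_cu = rho_cu * length / area

print(f"Al: {r_al:.0f} ohms, Cu: {r_cu:.0f} ohms "
      f"({(1 - rho_cu / rho_al) * 100:.0f}% lower resistance)")
# Same geometry, roughly a third less resistance just from the material.
```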

After getting the resistance down, interconnect capacitance is the next question. What about the “low-K dielectrics” we keep hearing about? They sound promising until you examine the numbers. Silicon dioxide has a K of 3.9. Air has a K of 1, so even if you hang the wiring in midair, you aren’t going to get a radical improvement.

Materials that are workable in a foundry get K down to around 2, although the physical issues in making them reliable have been a challenge. With heroic measures, then, you may be able to halve the capacitance. Physics wins this contest. It’s tough to get rid of parasitic capacitance: making things smaller with better lithography methods has helped, but it hasn’t eliminated the problem.
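
Putting resistance and capacitance together shows why halving K helps but doesn’t rescue scaling. This sketch reuses the 1-mm copper wire from above and treats the interconnect as a parallel plate over a 100-nm dielectric; all of the geometry is illustrative:

```python
eps0 = 8.854e-12   # vacuum permittivity, F/m

length, width, spacing = 1e-3, 200e-9, 100e-9    # assumed geometry
r_wire = 1.68e-8 * length / (100e-9 * 200e-9)    # 1-mm copper wire, as above

def wire_rc(k):
    # parallel-plate estimate of the wire's capacitance to its neighbor
    c_wire = k * eps0 * (length * width) / spacing
    return r_wire * c_wire

for k in (3.9, 2.0):
    print(f"K = {k}: RC ~ {wire_rc(k) * 1e12:.0f} ps")
# Halving K roughly halves the delay: real, but not a radical improvement.
```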

The refinements made in lithography and fabrication could fill many books. The light sources used to expose wafers moved to shorter and shorter wavelengths to resolve finer features. Optical techniques like immersion lithography place water, with its higher refractive index than air, between the lens and the wafer to shorten the effective wavelength. Phase-shift masks introduce controlled interference into the exposure light to print sharper features than a conventional mask allows.
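
The standard way to quantify this is the Rayleigh criterion, resolution ≈ k1 × λ / NA. The sketch below uses typical published values for a 193-nm source with dry versus water-immersion optics; none of these numbers come from the article:

```python
def resolution(k1, wavelength_nm, na):
    # Rayleigh criterion: smallest printable feature
    return k1 * wavelength_nm / na

k1, wavelength = 0.3, 193.0   # ArF source; k1 = 0.3 is a typical value
print(f"dry optics (NA = 0.93): {resolution(k1, wavelength, 0.93):.0f} nm")
print(f"immersion (NA = 1.35):  {resolution(k1, wavelength, 1.35):.0f} nm")
# Water (n ~ 1.44) lets the numerical aperture exceed 1, shrinking the
# printable feature from ~62 nm to ~43 nm with the same light source.
```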

X-rays, with their still-shorter wavelengths, have been attempted, although problems creating X-ray-opaque masks presented obstacles for a while. Halo implants, where ion implantation is done at an angle for preferred doping profiles, are now widely used and allow closer feature spacing than perpendicular implants. Lithography refinement has been a repetitive exercise in clever engineers walking around the laws of physics to make everything smaller.

As transistors got smaller, gate oxides got progressively thinner. This led to tunneling current, commonly called “gate leakage,” occurring at lower voltages and damaging the oxide. The solution was to lower the supply voltage as transistors shrank. However, gate oxides start to look a little “lumpy” as you push the limits.

A 32-nm transistor uses a gate oxide about five molecules thick, and the finite size of those molecules starts to bite. Gate oxides one molecule thick exist in research, but practical application, yield consistency, and gate leakage keep them impractical. Using a compound of a smaller atom as the gate insulator, carbon instead of silicon, might be possible, but even that is an evolutionary refinement of the transistor.
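
A simple WKB tunneling estimate shows why each lost molecular layer hurts so much. The barrier height and effective mass below are textbook-style assumptions for the Si/SiO2 system; this is a scaling argument, not a calibrated leakage model:

```python
import math

hbar = 1.055e-34          # reduced Planck constant, J*s
m_eff = 0.4 * 9.11e-31    # assumed effective mass in the oxide, kg
phi = 3.1 * 1.602e-19     # assumed Si/SiO2 barrier height, J

kappa = math.sqrt(2 * m_eff * phi) / hbar   # WKB decay constant, 1/m
ratio = math.exp(2 * kappa * 1e-10)         # leakage growth per 0.1 nm thinner

print(f"tunneling current grows ~{ratio:.1f}x per 0.1 nm of oxide removed")
# About 3x per angstrom, so removing one ~0.3-nm molecular layer from a
# five-layer oxide multiplies leakage by roughly 30x.
```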

Looking For Solutions

A lot of what’s been done has been evolutionary rather than revolutionary. The existing technology is well developed, and squeezing a bit more performance out of known methods has served well for a number of years. But the diameter of a silicon atom is about 0.25 nm, so a 22-nm CMOS channel has only 80 to 90 atoms across it. Conventional CMOS starts to fall apart when we need to count atoms.
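
The atom arithmetic is a one-liner, using the article’s 0.25-nm figure; the smaller nodes are added here only for illustration:

```python
si_atom_nm = 0.25   # approximate silicon atom diameter, from the text

for node_nm in (22, 14, 10):
    print(f"{node_nm}-nm channel: ~{node_nm / si_atom_nm:.0f} atoms across")
# 22 nm -> ~88 atoms, matching the 80-to-90 figure above; a couple of
# generations later, you are counting in the dozens.
```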

Intel has made strong progress with the FinFET, in which the gate wraps around a raised channel fin instead of sitting atop a planar device. The FinFET uses conventional lithography methods and thus extends the silicon process out for another generation.

Because conventional methods have not led to easy speed increases, design efforts have turned to parallel processing and multiple cores working together. With smaller sizes running out of steam (or atoms, in this case), going wider produces some improvement.

Wafers have been stacked vertically to create chip-on-chip structures. This technique requires very thin wafers and the ability to insert direct vertical interconnects between them, but the remaining work is recognized as a set of solvable problems.

A lot has been published recently about through-silicon vias and vertical silicon interconnects (see “Setting A New Standard For Through-Silicon Via Reliability”). Expect these methods to extend classical silicon for a few more generations. The invention of the elevator allowed the building of skyscrapers, and the ability to go vertical looks similarly promising.

Carbon nanotubes have been used as transistor bodies in research. If you can’t solve the interconnect speed issues, though, a transistor faster than its connections doesn’t help much. Using graphene both as an interconnect and as a current-steering switch may be one of the breakthroughs that leads to a new logic device.

Can we design a computing element the size of an atom or smaller? How about selectively moving one electron around? As with any newly emerging field, the path to success is unclear. Papers are being written about quantum computing, molecular scale electronics, and related topics on nanotechnology. What will survive research and see commercial use remains to be seen.

Predicting the future has never been easy. Even as we count silicon atoms in transistors, innovations like vertical stacking and wider parallel processing keep things moving forward while atomic-scale approaches to smaller and faster get worked out. That Intel 4004 seems quaint today, and today’s logic chip with 4 billion transistors will be quaint tomorrow.

About the Author

Jerry Twomey | Founder and Engineering Consultant, Effective Electrons

Jerry Twomey is an engineering consultant with Effective Electrons. He has extensive experience designing electronics, medical devices, electromechanical systems, and transistor-level integrated circuits. He is also the author of Applied Embedded Electronics: Design Essentials for Robust Systems, an in-depth reference that covers the design process from initial concept to final PCB for all embedded-system electronics.

In addition to design, he teaches in both industry and academia and is a Senior Member of the IEEE. He holds an MSEE from Worcester Polytechnic Institute.
