Multicore My Way

Jan. 18, 2007
The trend today isn’t just putting multiple cores on one chip, but also connecting multicore chips to build large systems.

Multicore designs are all the rage, and the reasons are easy to understand. Pushing clock rates higher sends power consumption and heat dissipation through the roof, the big single-core architectural improvements have already been made, and ever-larger caches simply take up die area.

If power scaled linearly with clock rate, the tradeoffs would be more interesting. In practice, power climbs much faster than frequency, so using more processor cores running at a lower clock rate provides more throughput while consuming less power. Sun Microsystems' UltraSPARC T1 chip layout also highlights the issue of chip real estate (see the figure).
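
As a rough, first-order illustration (ignoring leakage and assuming supply voltage scales down with clock rate), dynamic power goes as C·V²·f, so it falls roughly with the cube of the frequency. Two cores running at 60% of the original clock would then burn about 2 × (0.6)³ ≈ 0.43 of the single-core power while offering up to 2 × 0.6 = 1.2 times the aggregate throughput, provided the workload supplies enough parallel threads.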

The move to 90- and 45-nm process technologies significantly raises the number of cores that can fit on a chip. At the same time, the area allocated to things other than the processor cores keeps growing: caches are expanding, and memory controllers have moved on-chip in many architectures.

Multiple-core architectures are quite varied (see the table). Typically, an architecture is optimized for its main target environment. For example, Azul Systems' Vega 2 targets the Java enterprise market where Java applications run in a J2EE (Java 2 Enterprise Edition) environment (see "Lots Of Java").

Instruction sets don't make much of a difference when comparing these multicore chips, since Intel and AMD share a common 32- and 64-bit instruction set architecture (ISA). Well, almost: the overlap is well over 95%. AMD uses HyperTransport links to connect chips together (see "HyperTransport: The Ties That Bind"), while Intel uses a central memory-controller architecture (see "Memory Front And Center").
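
One place that difference shows up in software is memory locality. With a memory controller on every chip, a multichip HyperTransport system is effectively NUMA, so keeping a thread on the socket that owns its memory helps. Here's a minimal sketch, assuming a Linux system, that pins the calling process to a single CPU with sched_setaffinity(); pairing this with node-local memory allocation is left out here.

    /* Minimal sketch: pin this process to one CPU (Linux only). */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);   /* CPU 0 is an arbitrary choice for illustration */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        printf("process pinned to CPU 0\n");
        return 0;
    }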

Benchmarks often highlight architectural differences, but keep in mind the phrase "liars, damn liars, and chip vendors." A more important issue when evaluating multicore chips is threading in application software.

Putting multiple cores on a chip will benefit every system except one that runs a single major application with a single thread. Mileage may vary, but the benefits of a large number of cores are likely to remain greater for servers until application developers adjust to the plethora of available threads.
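
Taking advantage of those cores means splitting application work across threads. Here's a minimal sketch using POSIX threads; the work() routine, the item count, and the four-thread assumption are arbitrary placeholders rather than anything from a real application.

    /* Minimal sketch: spread a loop across worker threads with pthreads. */
    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 4          /* assume four cores for illustration */
    #define ITEMS       1000000

    static double results[ITEMS];

    static void work(int i)        /* hypothetical per-item computation */
    {
        results[i] = (double)i * 0.5;
    }

    static void *worker(void *arg)
    {
        int id = (int)(long)arg;

        /* Each thread handles an interleaved slice of the items. */
        for (int i = id; i < ITEMS; i += NUM_THREADS)
            work(i);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NUM_THREADS];

        for (long t = 0; t < NUM_THREADS; t++)
            pthread_create(&tid[t], NULL, worker, (void *)t);
        for (int t = 0; t < NUM_THREADS; t++)
            pthread_join(tid[t], NULL);

        printf("done: %f\n", results[ITEMS - 1]);
        return 0;
    }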

It's interesting to note that only some multicore chips implement hardware multithreading. The bottom line is that this class of chips should really be rated by the number of threads they can execute simultaneously. Also, parallel programming languages are a hot research topic now that the hardware has finally caught up to their needs. But that's another story.
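
That thread count is also what software should size itself against. Here's a minimal sketch, assuming a POSIX-style system, that asks the operating system how many logical processors (hardware threads, not physical cores) are online via sysconf():

    /* Minimal sketch: query the number of online logical processors. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long hw_threads = sysconf(_SC_NPROCESSORS_ONLN);

        if (hw_threads < 1)
            hw_threads = 1;   /* fall back if the query is unsupported */

        printf("hardware threads available: %ld\n", hw_threads);
        return 0;
    }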
