Dipping in the Hardware Emulation Archives (.PDF Download)

Feb. 15, 2018

Every verification engineer today knows about hardware emulation and its capabilities. Some may know the technology has been employed since the 1980s. However, not everyone knows the story of emulation’s winding path from a tool relegated to dusty backrooms to widespread adoption, or the players involved along the way.

Follow me as we take a stroll through the hardware emulation archives and retrace this remarkable journey.

What Started It All

In the latter half of the 1980s, hardware emulation sprang from the invention of the field-programmable gate array (FPGA). By building an array of interconnected FPGAs configured to “emulate” the behavior of a design before silicon, it became possible to verify the design at speeds unapproachable by any software-based simulation algorithm. That high speed allowed the design-under-test (DUT) to be exercised with real input/output traffic via a physical target system, the environment where the chip would ultimately reside once released by the foundry. This setup, combining the DUT with the target system, is called in-circuit emulation (ICE).

FPGA-based emulators were time-consuming to deploy and rather difficult to use. In fact, the industry coined the expression “time-to-emulation” (TTE) to measure how long it took to bring up the DUT and begin emulating it. Measured in months, the TTE often exceeded the time it took for the foundry to release first silicon.

It was apparent from the very beginning that the FPGA-based emulation architecture was severely deficient. Even more troublesome, it could not be re-engineered to eliminate its inherent drawbacks. Long setup times, exceedingly slow compilation, and poor visibility into the DUT were hallmarks of the off-the-shelf FPGA-based emulator.

By the mid-1990s, a few innovative startups proposed new technologies to overcome these drawbacks. They believed that only custom silicon designed specifically for emulation held the promise of removing the pitfalls of the old class of emulators.

New Design Approaches

From the start, all of the initiatives were based on custom reprogrammable devices deployed in two rather different emulation architectures.

One made use of a custom FPGA designed to provide 100% native internal visibility into the DUT without compiling probes. The unique architecture also offered a shorter setup time and significantly faster compilation.

The other architecture took a radically different approach to achieve the same objectives: 100% native visibility into the DUT, shorter setup time, and very fast compilation. It was called the custom-processor-based emulator.

Twenty years later, these two design approaches remain the architectures underlying today’s hardware emulators, though they are far, far superior to those of the early days.
