Design High-Speed Data Links With Link-Level Simulation
The phrase "high-speed digital" originated in the era of vacuum tubes and early digital computers.1 Ever since, the driving need for better system performance and higher clock speeds has presented ever-greater challenges to designers. Those challenges became even more formidable when integrated circuits entered the picture.
When Jack Kilby and Robert Noyce independently invented the integrated circuit in 1958 and 1959, they introduced a whole new challenge: high functional integration in a compact space that wasn't possible with earlier tube and discrete-transistor designs. As clock frequencies rose into the tens and then hundreds of megahertz, problems like ringing, termination, drive levels, and interference between signals became day-to-day concerns.
Eventually, analyzing and resolving these problems came to be known as signal integrity. This discipline set out to meet the formidable task of getting data between two functional blocks while attaining the error performance required by the overall system.
In recent years, Moore's Law has taken over as a driving force in the computing industry, pushing clock rates into the gigahertz region. Designing traditional parallel data-transfer buses with enough capacity and stability for such high-performance processing became so difficult that a new approach was adopted: serialization.
Even though the data rates associated with a serial approach are much higher (multi-gigabit-per-second rates are common), serial buses offer the ability to control a single-channel signal environment while preventing many of the major problems associated with parallel buses. Numerous standards now exist for such links, including InfiniBand, Serial ATA, USB 2.0, PCI Express, Fibre Channel, RapidIO, and HyperTransport.
Yet high-speed serial data links bring design problems of their own. In particular, a number of discrete designs spanning many engineering specialties must be integrated to make a functional system. Fortunately, a new method is available for high-speed, serial-data-link design and integration.
DESIGN CONSIDERATIONS
Each of the functions and interconnections of a serial data link represents a design activity. Consider the profile of a typical link (Fig. 1).
Signal Processing—Pattern Generator/Encoder/Equalizer/Decoder: DSP designers work with the system's signal-processing functions. On the transmitter (TX) side, the incoming data is formatted and encoded for transport across the physical layer (PHY) according to a specification (either standards-based, such as 8B/10B, or proprietary). On the receiver (RX) side, the incoming data is sampled, equalized, and decoded before being passed on to the next layer.
EDA: Designers of these functions often use language-based tools such as Matlab or C++ for algorithmic design, then proceed to HDL synthesis and functional verification. Subsequently, this HDL code is used as input to the ASIC production process.
Challenge: One of the major obstacles of this design task is to ensure that the real-world effects from the rest of the channel are adequately modeled during the design process.
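To make the encode/decode boundary concrete, the short C++ sketch below uses a simple self-synchronizing scrambler as a stand-in for a real line code such as 8B/10B (whose running-disparity tables would swamp an illustration). The polynomial, frame length, and register seeding are illustrative assumptions only; the point is the structure: the TX block conditions the bit stream for the PHY, and the matching RX block recovers it.

```cpp
// Minimal sketch of the TX-side encode / RX-side decode boundary. A real
// link would use a line code such as 8B/10B; here a simple self-synchronizing
// scrambler (hypothetical polynomial x^7 + x^6 + 1) stands in so the
// structure stays short. Both shift registers start in the same state, so the
// recovered bits match from the first symbol.
#include <cstdio>
#include <deque>
#include <random>
#include <vector>

// Scramble one bit: y[n] = x[n] ^ y[n-6] ^ y[n-7].
static int scramble(int x, std::deque<int>& reg) {
    int y = x ^ reg[5] ^ reg[6];
    reg.push_front(y);
    reg.pop_back();
    return y;
}

// Descramble one bit, refilling the register with the *received* bits, which
// is what makes the scrambler self-synchronizing.
static int descramble(int y, std::deque<int>& reg) {
    int x = y ^ reg[5] ^ reg[6];
    reg.push_front(y);
    reg.pop_back();
    return x;
}

int main() {
    std::deque<int> txReg(7, 0), rxReg(7, 0);
    std::mt19937 gen(1);
    std::bernoulli_distribution bit(0.5);

    std::vector<int> data, recovered;
    for (int n = 0; n < 32; ++n) data.push_back(bit(gen) ? 1 : 0);

    for (int b : data) {
        int line = scramble(b, txReg);              // what crosses the PHY
        recovered.push_back(descramble(line, rxReg));
    }

    int mismatches = 0;
    for (size_t n = 0; n < data.size(); ++n)
        if (data[n] != recovered[n]) ++mismatches;
    std::printf("bit mismatches after decode: %d\n", mismatches);  // expect 0
    return 0;
}
```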
Analog IC—Pre- and De-Emphasis/Driver/Receiver: Designing the line driver and receiver functions, which may include signal emphasis, usually is carried out in a mainstream analog IC design flow. Some form of Spice simulator is the primary circuit-level design tool.
EDA: Designers of leading-edge systems work with either discrete transistors or low-level cells to create the necessary functions. Design information can exist in many forms, depending on the user. The IC fabrication process will use a netlist of the design to manufacture the circuits. End users of an IC also may receive this information if they're "in-house." External customers wishing to use a catalog IC often will receive information either in a proprietary encrypted file format or as an open industry standard, such as IBIS or Verilog-A.
Challenge: The TX and RX designs are generally carried out with typical waveforms and worst-case loads to meet an interface standard or waveform performance, rather than in the context of an overall system.
Packaging—TX/RX Packages: Package design can be carried out by specialist packaging companies or, in larger companies, by in-house designers.
EDA: Design tools are typically EM-based, using either full 3D or 3D planar methods. Although such tools have been memory-limited when modeling larger packages, smart processing of layout information can greatly reduce processing size, and recent advances now provide much greater flexibility.2 Information for manufacture is geometry- and materials-based. EM design tools also can be used to produce working simulation models of packages for IC designers. Options include multiport S-parameters and equivalent-circuit models.
Challenge: Though equivalent models are often preferred because of their inherent compatibility with Spice, equivalent network synthesis will almost always leave the model with limited accuracy. In some cases, the resulting circuit can have thousands of components. S-parameters, though, contain all of the necessary information but can't be directly handled by conventional Spice simulators, especially as the number of ports grows.
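As an illustration of how a link-level (dataflow) simulator can consume S-parameter data directly, the sketch below converts a sampled through-path response into an impulse response by Hermitian extension and an inverse DFT; the resulting taps can then be convolved with the transmit waveform. The S21 values are synthetic (skin-effect-style loss plus a fixed delay), standing in for a measured or EM-simulated Touchstone file, and the grid spacing and coefficients are assumptions.

```cpp
// Minimal sketch: turning a through-path S-parameter response into an
// impulse response that a dataflow link simulator can convolve with the
// transmit waveform. The S21 samples are synthetic stand-ins for a
// Touchstone file; grid and coefficients are assumptions.
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

int main() {
    const int    half  = 256;          // one-sided frequency points
    const int    M     = 2 * half;     // two-sided DFT length
    const double df    = 50e6;         // 50-MHz grid: DC to 12.8 GHz
    const double lossK = 2.0e-5;       // loss coefficient (assumed)
    const double delay = 2.0e-9;       // 2-ns through delay (assumed)
    const double pi    = 3.14159265358979323846;
    const std::complex<double> j(0.0, 1.0);

    // One-sided S21(f), then Hermitian extension so the time response is real.
    std::vector<std::complex<double>> H(M);
    for (int k = 0; k <= half; ++k) {
        double f = k * df;
        H[k] = std::exp(-lossK * std::sqrt(f)) *
               std::exp(-j * 2.0 * pi * f * delay);
    }
    for (int k = half + 1; k < M; ++k) H[k] = std::conj(H[M - k]);

    // Naive inverse DFT; an FFT would be used in practice. The tiny imaginary
    // residue from the (complex) Nyquist bin is simply dropped.
    std::vector<double> h(M);
    for (int n = 0; n < M; ++n) {
        std::complex<double> acc(0.0, 0.0);
        for (int k = 0; k < M; ++k)
            acc += H[k] * std::exp(j * (2.0 * pi * k * n / M));
        h[n] = acc.real() / M;
    }

    // h[] is the sampled impulse response (time step 1/(M*df)), ready for
    // convolution in the link-level simulation.
    int peak = 0;
    for (int n = 1; n < M; ++n)
        if (std::fabs(h[n]) > std::fabs(h[peak])) peak = n;
    std::printf("time step %.3e s, peak at tap %d, value %.4f\n",
                1.0 / (M * df), peak, h[peak]);
    return 0;
}
```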
Board—Daughtercards/Backplane/Motherboard: Board design is perhaps the most challenging aspect of modern links. A board's interconnect design can make or break system performance. Designers typically don't have the luxury of carrying only one signal path (sometimes referred to as a "lane") on a given board.
Multiple signal paths can lead to less than ideal routing through various layers, putting them in close proximity to interfering or aggressor signal paths. A number of isolation techniques could be used, such as via fences and dedicated ground planes, but they're expensive in terms of real estate and can't always be accommodated.
EDA: Design data is geometry-based, and modeling options and issues are much the same as those for package design. However, S-parameter data is usually much simpler—for example, representing three line pairs (12 ports). Various design approaches help with this tricky problem, such as using experience-based design rules, partial modeling of critical paths using closed-form and cross-sectional models, and adopting automatic parasitic calculation tools that are part of trace-routing utilities.
Challenge: These techniques can work well from the hundreds-of-megabits-per-second region through a few gigabits per second. At higher speeds, geometry-related problems like surface- and space-wave propagation become significant. So-called phase-velocity dispersion, where higher-frequency components of a signal travel more slowly on nonsymmetric transmission structures like the microstrip, can become problematic. These issues point to the use of EM-based analytic tools, though until recently, such tools were constrained by memory limitations to solving for only small parts of key signal paths.3
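As an example of the closed-form models mentioned above, the sketch below evaluates the familiar IPC-2141-style approximation for surface-microstrip characteristic impedance. It is useful only within its stated range (roughly 0.1 < w/h < 2 and moderate dielectric constants), which is exactly why EM-based tools take over at higher speeds; the geometry values used here are illustrative.

```cpp
// Minimal sketch of a closed-form trace model: the IPC-2141-style
// approximation for surface-microstrip characteristic impedance. Valid only
// over a limited range; outside it, or when coupling matters, EM-based tools
// are needed.
#include <cmath>
#include <cstdio>

// w = trace width, t = trace thickness, h = dielectric height (same units),
// er = relative permittivity of the dielectric.
static double microstripZ0(double w, double t, double h, double er) {
    return 87.0 / std::sqrt(er + 1.41) * std::log(5.98 * h / (0.8 * w + t));
}

int main() {
    // Example geometry (assumed): 5-mil trace, 0.7-mil copper, 4-mil FR-4.
    double z0 = microstripZ0(5.0, 0.7, 4.0, 4.3);
    std::printf("approximate Z0 = %.1f ohms\n", z0);
    return 0;
}
```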
Connectors And Sockets: Connector design is almost always left to specialist houses. Ensuring good electrical characteristics (isolation, characteristic impedance, and skew performance) while meeting mechanical needs and keeping within cost constraints is extremely challenging.
EDA: Design often is carried out with 3D drafting tools in association with 3D EM analysis. Here again, S-parameters and equivalent-circuit models can represent the electrical data. This is usually made available to customers.
Challenge: Designers must generate accurate broadband models for customer use.
Link-Level Simulation: These various discrete-link design tasks generally occur in the context of the same overall system specification. Yet in most ways, they're isolated from one another by organizational boundaries, by (necessarily) different design methods, and by the available tools.
For example, a Spice user can't directly use a Matlab module, or vice versa. This isolation leaves little room for verifying a link design until the first evaluation or test boards are produced—when correcting errors becomes time-consuming and expensive.
It would be a major advance if each of the above design disciplines, with their differing models and IP, could connect through a common design tool to check and verify the link during the design phase without waiting for hardware. So, how might this be done, and what would a "link-level" simulator look like?
Such a tool, which we'll call an integrating link simulator (ILS), would have to handle data in all its forms and facilitate all of the necessary link tests (e.g., eye measurements and bit-error ratio, or BER). At the same time, it must be able to use the native models and measurement data produced at each design stage. There are three possibilities to consider when working with link models and intellectual property (IP):
- creating a single simulator that can work natively with all link functions
- directly using the original design information to create a native model for the ILS
- allowing the ILS to invoke other simulation tools when needed
The first approach appears to be the most desirable. But such a product doesn't exist yet, and no one seems to be attempting to develop such a simulator. The challenges of combining all possible current model types into one simulator are substantial. Trying to unify all design methods under a new design format would likely be highly disruptive.
The second approach could be effective when link design information comes in a form similar to the ILS's own model language. If ILS models were written in C++ and link functions also were represented in C++, then it should be possible to incorporate this link function code directly by building a new ILS model. As long as this process was reasonably simple and well documented, it would supply the necessary functionality without raising a usability barrier.
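The sketch below illustrates the idea with a hypothetical, greatly simplified block interface (it is not the actual Ptolemy model API): a pre-existing C++ link function, here a two-tap emphasis filter, is wrapped behind a common process-a-frame contract so the ILS scheduler can call it like any built-in model.

```cpp
// Hypothetical sketch of the second approach: existing C++ link-function code
// wrapped behind a simple block interface so an ILS scheduler could call it.
// This is not the actual Ptolemy model API, just an illustration of the idea.
#include <cstdio>
#include <vector>

// The contract every wrapped model would honor: consume a frame of samples,
// produce a frame of samples.
struct IlsBlock {
    virtual ~IlsBlock() = default;
    virtual void process(const std::vector<double>& in,
                         std::vector<double>& out) = 0;
};

// Pre-existing design IP: a two-tap emphasis filter written with no knowledge
// of any simulator.
static void emphasize(const double* x, double* y, int n, double alpha) {
    double prev = 0.0;
    for (int i = 0; i < n; ++i) { y[i] = x[i] - alpha * prev; prev = x[i]; }
}

// Thin adapter: the original function body is untouched.
struct EmphasisBlock : IlsBlock {
    double alpha;
    explicit EmphasisBlock(double a) : alpha(a) {}
    void process(const std::vector<double>& in,
                 std::vector<double>& out) override {
        out.resize(in.size());
        emphasize(in.data(), out.data(), static_cast<int>(in.size()), alpha);
    }
};

int main() {
    EmphasisBlock blk(0.25);
    std::vector<double> in = {1.0, 1.0, -1.0, -1.0, 1.0, -1.0}, out;
    blk.process(in, out);            // the ILS scheduler would make this call
    for (double v : out) std::printf("%.2f ", v);
    std::printf("\n");
    return 0;
}
```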
The third approach, allowing the ILS to invoke another simulation tool, is termed co-simulation. Co-simulation has the benefit of using the "right simulator for the job." For example, HDL code would run in an HDL simulator. Co-simulation requires a bridge between the host ILS and the client engine, which means developing some kind of dedicated interface on the ILS for each client simulator.
At the lowest level, such a bridge requires a numeric representation of a data flow to be passed from the ILS to the client and then back again, together with any necessary variable data and command sequences. For best flexibility, much of the data-flow formatting should be user-customizable without requiring in-depth knowledge of the interface itself.
One final constraint on co-simulation is that it should be dynamic. In other words, it should take place within a single simulation run rather than as a series of disconnected steps. This lets feedback occur across the boundary between simulators.
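A hypothetical sketch of such a bridge at its lowest level appears below: each scheduler iteration, the host hands the client a frame of samples plus any variable data and reads a processed frame back, so feedback can cross the simulator boundary. A real bridge would marshal this over a pipe or socket to a Spice or HDL process; here the client engine is an in-process stand-in, and the frame fields and values are assumptions.

```cpp
// Hypothetical sketch of a co-simulation bridge at its lowest level: every
// iteration the host ILS passes the client engine a frame of samples plus
// variable data and reads back a processed frame. A real bridge would
// marshal this to a separate Spice or HDL process; here the client is a
// stand-in function.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct Frame {
    double timestamp;                    // simulation time of the first sample
    std::vector<double> samples;         // one scheduler iteration's data
    std::map<std::string, double> vars;  // e.g., supply voltage, temperature
};

// Stand-in for the client engine: a crude one-pole response whose bandwidth
// is set by a variable handed across the bridge.
static Frame clientSimulate(const Frame& req) {
    Frame resp = req;
    double a = req.vars.at("pole");      // 0 < a < 1, chosen by the host
    double state = 0.0;
    for (double& s : resp.samples) { state += a * (s - state); s = state; }
    return resp;
}

int main() {
    const double bitPeriod = 4e-10;      // 2.5 Gbits/s (assumed)
    for (int iter = 0; iter < 3; ++iter) {   // dynamic, per-iteration exchange
        Frame req;
        req.timestamp = iter * 8 * bitPeriod;
        req.samples = {1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0};
        req.vars["pole"] = 0.5;
        Frame resp = clientSimulate(req);    // would be an IPC round trip
        std::printf("iteration %d: last sample %.3f\n",
                    iter, resp.samples.back());
    }
    return 0;
}
```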
If we focus on the second and third approaches, it's reasonable to see the ILS as a kind of backplane for the various models and simulators used in link design. Given the amount of standalone capability it needs to facilitate every required function, it's perhaps better described as a motherboard (Fig. 2): it must be functional in its own right, with capabilities that can be expanded by incorporating new models and simulators. It must also be efficient and designed for the task, because advanced capability is useless if the resulting simulation takes forever to complete.
Does a simulator with this capability already exist? There are several examples of co-simulation in the EDA space. However, no public domain simulation environment seems to be able to match the proposed requirements of an ILS. But one simulator comes very close—the Ptolemy simulator developed by the University of California, Berkeley. The references offer a good outline of the challenges associated with working across multiple design disciplines and a good discussion of the available computational models and their relative efficiencies.4
The data-flow approach used in the Ptolemy engine is an efficient way of working with a system-level problem, since it's able to account for many differing levels of model abstraction. The references also offer good descriptions of the underlying Ptolemy principles.5 Ptolemy, in its commercial form from Agilent Technologies, takes a focused subset of the work from UC Berkeley and adds some key functionality to its already rich capabilities, making it suitable for an ILS.
Let's look at the ILS in action, using an example Ptolemy simulation of a link (available on request from the author).6 It doesn't include every possible requirement described earlier, but it does show the general principles involved and should readily be seen as extensible. Next, consider a link with the three major blocks outlined in Figure 3.
CHANNEL ADAPTATION—TX WITH PREEMPHASIS
Figure 4 shows a transmitter with channel adaptation. Its functional blocks are apparent: a random binary source followed by a conversion from binary to NRZ. This is followed by a bit-period preemphasis generator (with options), an up-sampler (necessary to view waveforms and Eye diagrams with resolution greater than one sample per bit, but not necessary for BER), and a block that adds a time stamp to the samples entering the physical channel. This last block is key to setting up the co-simulation of the channel with the circuit-level Spice engine. It's also possible to add line-driver and receiver circuits at the circuit level. These can be represented through IBIS, Verilog-A, or at the transistor level.
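For reference, the sketch below walks through the same chain in plain C++: random bits mapped to NRZ, a one-bit-period pre-emphasis boost, and up-sampling for waveform viewing. The tap weight and oversampling factor are illustrative assumptions, not values taken from the example workspace.

```cpp
// Sketch of the Figure 4 transmit chain: random bits -> NRZ levels ->
// two-tap pre-emphasis -> up-sampling for waveform/eye viewing.
// Tap weight and oversampling factor are illustrative assumptions.
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int    nbits = 16;
    const int    osr   = 8;       // samples per bit, needed only for eye plots
    const double boost = 0.25;    // pre-emphasis tap weight (assumed)

    std::mt19937 gen(1);
    std::bernoulli_distribution bit(0.5);

    // Random binary source mapped to NRZ (+1/-1).
    std::vector<double> nrz(nbits);
    for (int n = 0; n < nbits; ++n) nrz[n] = bit(gen) ? 1.0 : -1.0;

    // One-bit-period pre-emphasis: boost the symbol after each transition.
    std::vector<double> tx(nbits);
    double prev = 0.0;
    for (int n = 0; n < nbits; ++n) {
        tx[n] = nrz[n] + boost * (nrz[n] - prev);
        prev = nrz[n];
    }

    // Up-sample by holding each emphasized bit for osr samples.
    std::vector<double> wave;
    for (double v : tx)
        for (int k = 0; k < osr; ++k) wave.push_back(v);

    // In the full flow, a time stamp would now be attached to each sample
    // before handing the frame to the circuit-level (Spice) co-simulation.
    std::printf("generated %zu samples at %d samples/bit\n", wave.size(), osr);
    return 0;
}
```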
The Physical Channel: Equivalent-circuit models can represent a physical layout. These models come in three basic forms: closed-form analytic, cross-sectional EM models, and full-wave parametrized EM models. The last method represents true EM co-simulation.
Another way to represent the channel is to generate parametrized data models, either from circuit/EM simulation or from measurements. This approach has a major benefit in simulation speed, especially for complex circuit models. Data can be parametrized for geometric or environmental parameters, allowing, for instance, exploration of system behavior across temperature. In this case (not shown), the line consists of three differential pairs about 20 in. long, one of which carries an aggressor signal.
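The sketch below shows what such a parametrized data model can look like in its simplest form: insertion loss built from a skin-effect term (proportional to the square root of frequency) and a dielectric term (proportional to frequency), scaled by trace length and, crudely, by temperature. All coefficients are illustrative assumptions rather than measurements of any real backplane.

```cpp
// Sketch of a parameterized channel data model: insertion loss from
// skin-effect and dielectric terms, scaled by trace length and temperature.
// All coefficients are illustrative assumptions.
#include <cmath>
#include <cstdio>

// Insertion loss in dB at frequency f (Hz) for a trace of given length (inches).
static double lossDb(double f, double lengthIn, double tempC) {
    const double skin = 5.0e-6;    // dB per inch per sqrt(Hz)  (assumed)
    const double diel = 0.10e-9;   // dB per inch per Hz        (assumed)
    double tScale = 1.0 + 0.002 * (tempC - 25.0);   // conductor loss vs. temperature
    return (skin * tScale * std::sqrt(f) + diel * f) * lengthIn;
}

int main() {
    // Sweep the roughly 20-in. path of the example link across temperature.
    const double temps[]    = {25.0, 85.0};
    const double freqsGHz[] = {1.0, 2.5, 5.0};
    for (double t : temps) {
        std::printf("T = %2.0f C:", t);
        for (double fGHz : freqsGHz)
            std::printf("  %5.1f dB @ %.1f GHz", lossDb(fGHz * 1e9, 20.0, t), fGHz);
        std::printf("\n");
    }
    return 0;
}
```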
Signal Recovery—Receiver With Equalizer: The data output from the channel enters a receiver with a decision-feedback equalizer (DFE) (no pre-taps in this case). The DFE is based around a Ptolemy least-mean-squares (LMS) equalizer whose C++ source code, like that of so many other Ptolemy elements, is readily accessible. It's possible to have a C++ representation of this entire equalizer and then incorporate it into Ptolemy, where it would behave in the same way as a built-in model.
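The sketch below implements the textbook version of that algorithm (it is not the Ptolemy element's source): a few feedforward taps on the received samples, a few feedback taps on past decisions, and an LMS update against known training data. Tap counts, step size, and the toy dispersive channel are assumptions.

```cpp
// Sketch of an LMS-adapted decision-feedback equalizer: feedforward taps on
// received samples, feedback taps on past decisions, and a least-mean-squares
// update. Tap counts, step size, and the toy channel are assumptions.
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int    nbits = 5000;
    const double mu    = 0.01;                 // LMS step size (assumed)
    std::vector<double> ff(4, 0.0);            // feedforward taps
    std::vector<double> fb(2, 0.0);            // feedback taps
    ff[0] = 1.0;                               // start near "pass-through"

    std::mt19937 gen(2);
    std::bernoulli_distribution bit(0.5);

    // Toy dispersive channel: current bit plus scaled copies of earlier bits.
    std::vector<double> d(nbits), x(nbits, 0.0);
    for (int n = 0; n < nbits; ++n) d[n] = bit(gen) ? 1.0 : -1.0;
    for (int n = 0; n < nbits; ++n) {
        x[n] = 0.8 * d[n];
        if (n >= 1) x[n] += 0.4 * d[n - 1];
        if (n >= 2) x[n] += 0.2 * d[n - 2];
    }

    int errors = 0;
    std::vector<double> dec(nbits, 0.0);       // past decisions
    for (int n = 0; n < nbits; ++n) {
        // Equalizer output: feedforward on samples, feedback on decisions.
        double y = 0.0;
        for (size_t i = 0; i < ff.size(); ++i)
            if (n >= (int)i) y += ff[i] * x[n - i];
        for (size_t j = 0; j < fb.size(); ++j)
            if (n >= (int)(j + 1)) y -= fb[j] * dec[n - j - 1];

        dec[n] = (y >= 0.0) ? 1.0 : -1.0;      // slicer
        if (n > 500 && dec[n] != d[n]) ++errors;   // count after convergence

        // LMS update against the known (training) symbol.
        double e = d[n] - y;
        for (size_t i = 0; i < ff.size(); ++i)
            if (n >= (int)i) ff[i] += mu * e * x[n - i];
        for (size_t j = 0; j < fb.size(); ++j)
            if (n >= (int)(j + 1)) fb[j] -= mu * e * dec[n - j - 1];
    }
    std::printf("decision errors after training: %d\n", errors);
    return 0;
}
```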
RESULTS
When the above link is run in Ptolemy, it produces two kinds of results: dynamic, via Tk controls and graphing; and static, where data is collected into a data set and then becomes available for post-processing (Fig. 5). These plots will remain active during a simulation, showing dynamic performance changes such as equalizer training. There also are Tk "sliders" that can be used to adjust the system during run time.
Figure 6 shows the Eye diagram for the post-processed data output, along with a jitter histogram. This Eye diagram incorporates all of the effects modeled in the channel, including, in this case, preemphasis and equalization plus a full channel model with reflections and aggressor signals.
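The post-processing behind such plots is straightforward to sketch: fold the oversampled waveform on the unit interval to find the inner eye opening at the sampling instant, and collect the threshold-crossing positions for a jitter estimate. In the sketch below, the received waveform is synthetic (random NRZ through a crude one-pole channel); in practice the link simulation's data set would supply it.

```cpp
// Sketch of the post-processing behind an eye diagram and jitter histogram:
// fold the oversampled waveform on the unit interval (UI), measure the inner
// eye opening at the sampling phase, and collect zero-crossing positions.
// The waveform here is synthetic; a link simulation would supply real data.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int osr   = 16;                 // samples per UI
    const int nbits = 2000;

    // Synthetic received waveform.
    std::mt19937 gen(3);
    std::bernoulli_distribution bit(0.5);
    std::vector<double> wave;
    double state = 0.0;
    for (int n = 0; n < nbits; ++n) {
        double level = bit(gen) ? 1.0 : -1.0;
        for (int k = 0; k < osr; ++k) {
            state += 0.35 * (level - state);   // one-pole "channel"
            wave.push_back(state);
        }
    }

    // Inner eye opening at the mid-UI sampling phase: lowest "high" sample
    // minus highest "low" sample.
    double innerHigh = 1e9, innerLow = -1e9;
    for (size_t i = osr / 2; i < wave.size(); i += osr) {
        if (wave[i] > 0.0 && wave[i] < innerHigh) innerHigh = wave[i];
        if (wave[i] < 0.0 && wave[i] > innerLow)  innerLow  = wave[i];
    }

    // Jitter: fractional UI position of each zero crossing (linear interpolation).
    std::vector<double> crossings;
    for (size_t i = 1; i < wave.size(); ++i) {
        if ((wave[i - 1] < 0.0) != (wave[i] < 0.0)) {
            double frac = wave[i - 1] / (wave[i - 1] - wave[i]);
            double pos  = std::fmod(i - 1 + frac, (double)osr) / osr;
            crossings.push_back(pos);
        }
    }
    double mean = 0.0, var = 0.0;
    for (double c : crossings) mean += c;
    mean /= crossings.size();
    for (double c : crossings) var += (c - mean) * (c - mean);
    var /= crossings.size();

    std::printf("inner eye opening: %.3f\n", innerHigh - innerLow);
    std::printf("%zu crossings, rms jitter %.4f UI\n",
                crossings.size(), std::sqrt(var));
    return 0;
}
```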
An ILS can be used as a virtual motherboard for the many types of data, design IP, and simulation technologies, allowing interactive simulations of an entire high-speed data link. Such a simulator bridges the gap between algorithm and circuit, without causing disruptive changes in design flow. Therefore, it enables the design and verification of multi-gigabit-per-second transmitter and receiver algorithms and architectures. Ptolemy meets most of these requirements and provides many of the desirable benefits of an ILS, advancing the art of high-speed digital design today.
Acknowledgements: The author wishes to acknowledge colleagues in the Agilent EEsof EDA organization, particularly John Ladue, Sanjeev Gupta, and John Olah, for their help and encouragement and for laying the groundwork in high-speed data-link design in previous years.
References:
1. Copeland, B. Jack, "The Modern History of Computing," The Stanford Encyclopedia of Philosophy (Fall 2005 Edition), Edward N. Zalta (ed.), http://plato.stanford.edu/archives/fall2005/entries/computing-history/
2. Agilent Technologies, "A 3D Planar Modeling Simulator with a New 64-Bit Engine," Microwave Journal, Sept. 2005
3. Edwards, T.C., Foundations for Microstrip Design, John Wiley and Sons, ISBN 0 471 27944 2
4. Chang, W.T., Kalavade, A., Lee, E.A., "Effective Heterogeneous Design and Co-Simulation," Department of Electrical Engineering and Computer Sciences, University of California, Berkeley
5. Ditore, Frank, "Agilent Ptolemy Data Flow Primer," Agilent Technologies, http://eesof.tm.agilent.com/pdf/ptolemy_primer_july_2003.pdf
6. Agilent Technologies, Advanced Design System 2005A, Signal Integrity Design Guide, "Pre-Emphasis and Equalization Co-simulation"