Challenges Lie Ahead At The Physical Layer

Nov. 17, 2008
High-speed serial data has all but replaced parallel buses, but it's getting tougher to ramp up the throughput.

The physical layer (PHY) of the Open Systems Interconnection (OSI) model conveys the bit stream, whether as electrical impulses, light, or radio signals, through a network. In the context of the OSI model, the PHY embraces the physical as well as the signaling aspects of the interconnect. Here, we will focus just on the electrical issues.

These days, designers are usually most concerned with serial signaling. This came about as data volumes increased and parallel buses just got too wide, cumbersome, and noisy to deal with. Certainly, serial buses aren’t simple, in terms of serializing, deserializing, and clocking. But they have become the dominant communications modality, even for short hops.

One way engineers are trying to meet the demand for greater speed at the physical layer is in the process technologies used for serializer-deserializer (SERDES) functions. This is of particular interest to designers in the fabless semiconductor community who may be considering using a pure-play analog foundry to develop a custom chip. For more on the latest in RF CMOS and silicon-germanium (SiGe) biCMOS, see “Process Technology Considerations For PHY ICs.”

SCALING CHALLENGES According to Allan Evans of Samplify Systems, manufacturers using those advanced semiconductor technologies may be able to focus solely on extending the speed limits of SERDES interfaces. But recent history amply demonstrates that manufacturers using mainstream CMOS processes face multiple challenges in keeping up with increasing line rates while simultaneously moving to smaller feature sizes to boost gate density and lower costs (see “Bridging The Data Bandwidth Gap”).

Leading manufacturers in the FPGA market have been continually trying to leapfrog each other with the introduction of each new product family. Yet while Moore’s Law applies directly to shrinking lookup tables with each new process step, high-speed serial interfaces still depend on analog circuit design techniques.

Merely replicating the device models for the analog circuit components is challenge enough for each new process node. To extend the speed of the interfaces, these device models must be improved, which requires an iterative process of circuit redesign, shuttle runs, and characterization.

The Altera Stratix GX family illustrates this obstacle of increasing interface line rates while moving to the next process node. Developed on 130-nm CMOS, the Stratix GX was the first FPGA family to support SERDES interfaces operating at 3.125 Gbits/s.

With the introduction of the Stratix II GX, developed on 90 nm, Altera was able to extend the maximum line rate to 6.25 Gbits/s—quite an achievement. With the Stratix III device developed on 65 nm, though, SERDES interfaces were never made available. Instead, Altera focused its engineering resources to quickly move to 45 nm with the Stratix IV and the SERDES-enabled version, the Stratix IV GX.

With gate density doubling at each process node, FPGA capacity increased by a factor of eight from the original Stratix GX. But even at a maximum claimed line speed of 11.3 Gbits/s, the SERDES line rate increased by less than a factor of four over the same period, up from the original 3.125 Gbits/s. Clearly, physical scaling has been challenging. In his sidebar, Evans suggests an alternative approach.
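
A quick back-of-the-envelope check makes that gap concrete. The Python sketch below is illustrative arithmetic only; it simply combines the density-doubling rule of thumb with the line rates quoted above:

# Rough comparison of logic-density scaling vs. SERDES line-rate scaling
# across the Stratix generations cited above (illustrative arithmetic only).
nodes = [130, 90, 65, 45]                    # nm: Stratix GX through Stratix IV GX
density_gain = 2 ** (len(nodes) - 1)         # gate density roughly doubles per node

first_rate = 3.125                           # Gbits/s, original Stratix GX SERDES
latest_rate = 11.3                           # Gbits/s, claimed Stratix IV GX maximum
rate_gain = latest_rate / first_rate

print(f"logic capacity gain over three shrinks: ~{density_gain}x")
print(f"SERDES line-rate gain over same span:   ~{rate_gain:.1f}x")
# ~8x capacity vs. ~3.6x line rate: the digital fabric rides Moore's Law,
# while the analog SERDES lags behind.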

BACK TO BASICS WITH CDR Looking more fundamentally at the engineering essentials of extracting information from a non-return-to-zero (NRZ) serial data stream, clock and data recovery (CDR) is accomplished using phase-locked loop (PLL) and delay-locked loop (DLL) circuits. “Clock Recovery Methods for Jitter Analysis,” a Tech Brief from LeCroy, nicely illustrates the basic CDR concept as well as jitter analysis concepts (Fig. 1).

The sampling clock is derived from the data edges by phase-locking to the data transitions. The PLL generates a clock whose jitter follows that of the data for long-term variations in bit rate, while short-term variations pass through untracked. The low-pass filter in the PLL feedback loop determines which jitter rates appear on the recovered sampling clock.

That way, the receiver is unaffected by relatively large changes in the average bit rate that occur over long time periods. The detector uses the recovered clock to locate the symbol boundaries and decides whether each bit is a one or a zero by sampling the voltage at the nominal center of the symbol.
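
That low-pass behavior is easy to see in a phase-domain model. The sketch below uses a first-order tracking loop with a loop gain and jitter amplitudes chosen purely for illustration; it shows the recovered clock following slow wander in the data phase while leaving fast jitter untracked for the sampler to tolerate:

import numpy as np

# Phase-domain sketch of clock recovery: the loop acts as a low-pass
# filter on the data phase, so slow bit-rate wander is tracked while
# high-frequency jitter is not. Gains and amplitudes are assumptions.
n = 20_000
t = np.arange(n)

slow = 0.30 * np.sin(2 * np.pi * 1e-4 * t)    # low-frequency wander, in UI
fast = 0.05 * np.sin(2 * np.pi * 0.2 * t)     # high-frequency jitter, in UI
data_phase = slow + fast

kp = 0.02                                     # loop gain per bit (assumed)
clk_phase = np.zeros(n)
for k in range(1, n):
    err = data_phase[k - 1] - clk_phase[k - 1]
    clk_phase[k] = clk_phase[k - 1] + kp * err

print("rms deviation of recovered clock from the slow wander (UI):",
      round(float(np.std(clk_phase - slow)), 4))        # small: wander is tracked
print("rms jitter left for the sampler to tolerate (UI):",
      round(float(np.std(data_phase - clk_phase)), 4))  # roughly the fast component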

In more realistic CDR circuits, though, two separate feedback loops that share a common control voltage track the phase of the input data signal (Fig. 2). A high-speed delay-locked loop path uses a voltage-controlled phase shifter to track the high-frequency components of input jitter.

A separate phase-control loop built around the voltage-controlled oscillator (VCO) tracks the low-frequency components of input jitter. The VCO’s initial frequency is set by yet a third loop, which compares the VCO frequency with the input data frequency and sets the coarse-tuning voltage. The jitter-tracking PLL controls the VCO through the fine-tuning control. Together, the delay and phase loops track the phase of the input data signal.

For example, when the clock lags the input data, the phase detector drives the VCO to a higher frequency and increases the delay through the phase shifter. Both of these actions reduce the phase error between the clock and the data. The faster clock picks up phase, whereas the delayed data loses phase. Because the loop filter is an integrator, the static phase error is driven to 0°.
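
A crude behavioral model shows the division of labor in such a dual-loop arrangement. The gains, ranges, and frequency offset below are made-up numbers, not any specific product's loop design: the fast, range-limited phase-shifter path absorbs high-frequency error, while the slow integrating path pulls the VCO frequency in and drives the static phase error toward zero, as the text describes:

import numpy as np

# Conceptual dual-loop CDR model: a fast, range-limited phase-shifter path
# plus a slow integrating VCO fine-tune path. All numbers are assumptions.
rng = np.random.default_rng(1)
n = 50_000
freq_offset = 2e-4                             # UI of phase gained per bit by the data
data_phase = np.cumsum(np.full(n, freq_offset)) + 0.03 * rng.standard_normal(n)

shifter = 0.0          # voltage-controlled phase-shifter output (UI), fast path
vco_phase = 0.0        # phase accumulated by the VCO (UI)
vco_freq = 0.0         # VCO fine-tune frequency (UI/bit), slow path
k_fast, k_slow = 0.1, 1e-4
err_log = np.zeros(n)

for k in range(n):
    err = data_phase[k] - (vco_phase + shifter)
    err_log[k] = err
    shifter = np.clip(shifter + k_fast * err, -0.5, 0.5)  # fast loop, finite range
    vco_freq += k_slow * err                              # integrator removes static error
    vco_phase += vco_freq

print("mean phase error over the last 10,000 bits (UI):",
      round(float(np.mean(err_log[-10_000:])), 4))        # ~0 once the loops settle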


Speaking of the real world, the challenge in CDR comes from jitter and noise (Fig. 3). Both factors cause variations in the placement of signal transitions. Jitter affects horizontal placement, and noise affects vertical placement. Together, they introduce uncertainty in data recovery, which is perceived as a non-zero bit-error rate (BER).

Variation in edge placement is essentially a logic-threshold problem that’s exacerbated by today’s high signaling rates and low operating voltages. At these signaling rates, it is statistically guaranteed that, from bit to bit, some transitions will fall outside the logic’s setup-and-hold window, while the low operating voltages squeeze the logic-high and logic-low thresholds.
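
To see why "statistically guaranteed" is no exaggeration, consider Gaussian random jitter on a 10-Gbit/s link. The jitter sigma and timing margin in the sketch below are assumptions chosen only to illustrate the arithmetic:

from math import erfc, sqrt

# Even a tiny tail probability produces regular violations at multi-gigabit
# rates. The jitter sigma and timing margin here are illustrative assumptions.
bit_rate = 10e9
ui = 1 / bit_rate            # unit interval, seconds
sigma = 0.03 * ui            # assumed rms random jitter (3% of the UI)
margin = 0.20 * ui           # assumed timing margin on each side of the sampling point

def q(x):                    # Gaussian tail probability Q(x)
    return 0.5 * erfc(x / sqrt(2))

p_violation = 2 * q(margin / sigma)     # edge lands beyond the margin, early or late
edges_per_second = 0.5 * bit_rate       # roughly half the bits carry a transition
print(f"P(edge outside margin) = {p_violation:.1e}")
print(f"expected violations per second = {p_violation * edges_per_second:.2f}")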

There are many ways to characterize timing variations, depending on how you want to think about them. In addition to timing and amplitude, jitter and noise can be further broken down into random and deterministic categories (Fig. 4). What’s significant about the random component of both is that it doesn’t correlate to system operation. It must be dealt with by the system design.

“Deterministic” means that those characteristics are repeatable and predictable. Also, their peak-to-peak values are bounded and can usually be observed or predicted with high confidence based on a reasonably low number of observations. Deterministic jitter and noise have further “periodic” and “data-dependent” components. And, jitter has a “duty-cycle-dependent” component.
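
A toy total-jitter model makes the distinction concrete. In the sketch below, the amplitudes and the (very crude) data-dependent term are assumptions; the point is only that the deterministic part is bounded while the random part keeps growing as the record gets longer:

import numpy as np

# Toy TIE model: unbounded Gaussian random jitter (RJ) plus bounded
# deterministic jitter (DJ) built from a periodic term and a crude
# data-dependent term. Amplitudes and the DDJ rule are assumptions.
rng = np.random.default_rng(2)
n = 100_000

rj = 0.01 * rng.standard_normal(n)                        # RJ, sigma = 1% of the UI
pj = 0.02 * np.sin(2 * np.pi * 0.003 * np.arange(n))      # PJ, e.g. supply ripple
bits = rng.integers(0, 2, n)
same_as_prev = np.concatenate(([0.0], (bits[1:] == bits[:-1]).astype(float)))
ddj = 0.01 * same_as_prev                                 # edge lands later after a repeated bit
tie = rj + pj + ddj

print("DJ peak-to-peak (bounded):", round(float(np.ptp(pj + ddj)), 3), "UI")
print("RJ peak-to-peak in this record (keeps growing with more samples):",
      round(float(np.ptp(rj)), 3), "UI")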

Periodic jitter repeats in a cyclic fashion. It’s uncorrelated with any periodically repeating patterns in a data stream. Rather, such jitter is typically caused by external deterministic noise sources like switching power-supply noise, a strong local RF carrier, or an unstable clock-recovery PLL. On the other hand, data-dependent jitter correlates with the bit sequence in a data stream.

Duty-cycle-dependent jitter may be predicted based on whether the associated transition is rising or falling. It can arise because the slew rate differs between rising and falling edges, or because the logic threshold voltage differs for the two cases. The same random and deterministic breakdown applies to noise, except that noise has no duty-cycle-dependent counterpart.
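
A short worked example, with assumed swing, threshold, and edge rates and the edges modeled as simple linear ramps, shows how asymmetric edges translate directly into duty-cycle-dependent jitter:

# Duty-cycle-dependent jitter from asymmetric edges: with different rise
# and fall slew rates and an off-center threshold, rising and falling
# edges cross the threshold at different delays. All values are assumed.
swing = 0.8              # V
v_threshold = 0.42       # V above the low level (slightly off-center)
rise_time = 30e-12       # s, 20-80% rise time
fall_time = 40e-12       # s, slower falling edge

slew_rise = 0.6 * swing / rise_time
slew_fall = 0.6 * swing / fall_time
t_rise_cross = v_threshold / slew_rise             # delay from edge start to threshold
t_fall_cross = (swing - v_threshold) / slew_fall

print(f"rising-edge crossing delay:  {t_rise_cross * 1e12:.1f} ps")
print(f"falling-edge crossing delay: {t_fall_cross * 1e12:.1f} ps")
print(f"duty-cycle-dependent jitter: {abs(t_rise_cross - t_fall_cross) * 1e12:.1f} ps")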

BER derives from time interval error (TIE), the difference between data edges and the corresponding edges of the recovered clock. To characterize it, instrumentation measures the TIE of each edge in a data record and presents a histogram of TIE values versus the number of occurrences of each value, which shows the probability of a data edge occurring at a given time within the bit period.
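
The measurement itself reduces to simple bookkeeping once the edge times are captured. The sketch below builds exactly that histogram, with assumed jitter amplitudes and an ideal clock standing in for the instrument's recovered clock:

import numpy as np

# TIE bookkeeping: subtract each recovered-clock edge from the corresponding
# measured data edge, then histogram the differences. Jitter values are
# assumptions, and an ideal clock stands in for the recovered one.
rng = np.random.default_rng(3)
ui = 100e-12                               # 10-Gbit/s unit interval
n_edges = 50_000

ideal_edges = np.arange(n_edges) * ui
jitter = (3e-12 * rng.standard_normal(n_edges)                      # random jitter
          + 5e-12 * np.sin(2 * np.pi * 0.001 * np.arange(n_edges))) # periodic jitter
measured_edges = ideal_edges + jitter

tie = measured_edges - ideal_edges
counts, bin_edges = np.histogram(tie, bins=100)
print(f"TIE rms: {np.std(tie) * 1e12:.2f} ps, "
      f"peak-to-peak: {np.ptp(tie) * 1e12:.2f} ps, "
      f"histogram peak near {bin_edges[np.argmax(counts)] * 1e12:.2f} ps")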

A TIE bathtub curve integrates those probabilities for all values of offset. Total jitter is the width of the curve, and its sides give the bit-error rate for any given sampling point within a bit interval. The horizontal distance between the two sides at a given vertical displacement, or bit-error rate, gives the eye opening at that BER. As long as the sides of the curve do not touch, there is a sampling point at which the desired bit-error rate can be achieved.
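
The widely used dual-Dirac model gives a compact way to sketch such a bathtub and read off the eye opening at a target BER. The DJ, RJ, and transition-density numbers below are assumptions for illustration, not measurements:

import numpy as np
from math import erfc, sqrt

# Dual-Dirac bathtub: jitter at each crossing is bounded DJ (two impulses
# DJ apart) convolved with Gaussian RJ of sigma. BER is evaluated at every
# candidate sampling position across the unit interval. Values are assumed.
ui = 1.0
sigma = 0.02             # RJ rms, in UI
dj = 0.15                # DJ peak-to-peak, in UI
rho_t = 0.5              # transition density (half the bits toggle)

def q(x):                # Gaussian tail probability
    return 0.5 * erfc(x / sqrt(2))

def bathtub_ber(pos):
    # Left side: edge nominally at 0 arrives later than the sampling point.
    left = rho_t * 0.5 * (q((pos - dj / 2) / sigma) + q((pos + dj / 2) / sigma))
    # Right side: edge nominally at 1 UI arrives earlier than the sampling point.
    right = rho_t * 0.5 * (q((ui - pos - dj / 2) / sigma) + q((ui - pos + dj / 2) / sigma))
    return left + right

t = np.linspace(0.0, ui, 2001)                 # candidate sampling positions
ber = np.array([bathtub_ber(x) for x in t])

target = 1e-12
open_points = t[ber < target]
if open_points.size:
    print(f"eye opening at BER {target:.0e}: {open_points[-1] - open_points[0]:.3f} UI")
else:
    print(f"eye is closed at BER {target:.0e}")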
