Wring Performance From High-Bandwidth Scopes

May 24, 2011
The latest high-bandwidth scopes from Agilent, LeCroy, and Tektronix are extremely powerful instruments. Learn how to get the most out of them for your money in this tutorial-oriented article.

If you’re contemplating upgrading to a top-level scope with a bandwidth of 20 GHz or more, chances are you have top-level debug and validation work on your plate. You may be sorting out a gnarly differential signal issue on a high-speed serial bus, looking for sources of noise and jitter that are making your eye diagram look, well, not so open.

The latest high-end scopes from the Big Three—Agilent Technologies, LeCroy Corp., and Tektronix—all have the bandwidth and low enough noise floor to do the job. But using these scopes properly is not a given, and mistakes will cost you in either time or accuracy, if not both.

The good news is that all three companies have taken measures to build in a lot of smarts that stand ready and waiting to help you capture that elusive glitch. In this article, we’ll look at some best practices for these scopes, centered around the user interface and on maximizing signal-to-noise margins as well as on probing. We’ll also examine some comparative data to help you decide which scope is best for your needs.

Signal Integrity Basics

An initial item of concern is maximizing signal integrity. You must first come to an understanding of just how much bandwidth you really need to make your measurement (Table 1). Let’s say you’re using a 32-GHz scope such as Agilent’s flagship Infiniium DSA-X 93204A.

To achieve the lowest possible jitter in your measurement, you need to have an idea of how much content there is in the signal of interest (SOI). If you’re measuring a 20-GHz signal but running the scope at its full 32-GHz bandwidth, you add 12 GHz of noise bandwidth, and the jitter that comes with it, to your measurement.

In terms of amplitude, the instrument should be set to the proper bandwidth to accurately recreate signals. The oscilloscope’s frequency response should roll off smoothly, with a 3-dB point high enough to faithfully represent the signal of interest.

“Customers have rules of thumb,” says Ken Johnson, senior product marketing manager at LeCroy Corp. “One is a scope bandwidth with a frequency response that’s at least five times the fundamental frequency of my serial data signal.” The fundamental frequency of a 5-Gbit/s serial data signal is 2.5 GHz, so engineers want at least a 12-GHz scope to measure a 5-Gbit/s signal.

The reality, says Johnson, is that no matter what test vendors tell their market about this topic, customers say they want five times the fundamental and won’t take less than four times. One might think that the more bandwidth, the better, but this is not the case.

At bandwidths higher than five times the fundamental, the scope adds thermal noise and you get no benefit in terms of lower jitter measurements. So, it turns out that the rule of thumb regarding five times the fundamental is born of the experience of the collective hive and holds up in practice.
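
As a quick illustration of that rule of thumb, here is a minimal sketch assuming an NRZ signal whose fundamental is half the bit rate; the function name and the four-times floor taken from the comments above are illustrative assumptions, not any vendor’s formula.

```python
# Hypothetical helper illustrating the five-times-fundamental rule of thumb
# discussed above; names and the 4x floor are assumptions, not a vendor API.

def recommended_scope_bandwidth(data_rate_gbps: float) -> dict:
    """Return rough scope-bandwidth guidance for an NRZ serial data signal."""
    fundamental_ghz = data_rate_gbps / 2.0      # NRZ fundamental = half the bit rate
    return {
        "fundamental_ghz": fundamental_ghz,
        "minimum_bw_ghz": 4.0 * fundamental_ghz,    # "won't take less than four times"
        "preferred_bw_ghz": 5.0 * fundamental_ghz,  # five-times rule of thumb
    }

if __name__ == "__main__":
    # 5-Gbit/s example from the text: 2.5-GHz fundamental
    print(recommended_scope_bandwidth(5.0))
```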

High-end scopes from all of the Big Three provide noise-reduction control that dials them down to the proper bandwidth for the SOI. Be aware that all scopes default to the highest bandwidth and require a manual adjustment.

To determine how much bandwidth is needed, use your scope’s fast-Fourier transform (FFT) function to find the point at which the content is down 40 dB or more. Anything below that is buried in scope noise.
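
Here is a minimal sketch of that FFT check, assuming you can export a waveform record and its sample rate from the scope; the synthetic band-limited test signal, the Hann window, and the function name are illustrative assumptions.

```python
# A minimal sketch (not vendor software) of using an FFT to estimate how much
# bandwidth a signal actually needs: find the highest frequency whose spectral
# content is within 40 dB of the peak. The waveform below is synthetic; you
# would replace it with data exported from the scope.
import numpy as np

def bandwidth_at_40db(waveform: np.ndarray, sample_rate_hz: float) -> float:
    spectrum = np.abs(np.fft.rfft(waveform * np.hanning(len(waveform))))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sample_rate_hz)
    db = 20 * np.log10(spectrum / spectrum.max() + 1e-20)
    significant = freqs[db > -40.0]          # content above the -40-dB floor
    return significant.max() if significant.size else 0.0

# Example: a band-limited 2.5-GHz square-ish wave sampled at 80 GS/s
t = np.arange(0, 200e-9, 1 / 80e9)
f0 = 2.5e9
wave = sum((1.0 / n) * np.exp(-(n * f0 / 10e9) ** 2) * np.sin(2 * np.pi * n * f0 * t)
           for n in range(1, 16, 2))
print(f"Highest content within 40 dB of peak: {bandwidth_at_40db(wave, 80e9) / 1e9:.1f} GHz")
```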

Yet another best practice in terms of minimizing jitter is to get the proper amount of signal on the scope’s screen. Again, for any high-end scope, your best bet is to adjust the vertical scale so the signal waveform occupies somewhere between 80% and 90% of the screen. Beyond that, you run the risk of clipping the signal in the analog-to-digital converter (ADC).
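
As a back-of-the-envelope sketch of that adjustment (assuming an eight-division vertical grid, which varies by scope model):

```python
# A small, hypothetical calculation of vertical scale so the waveform fills
# roughly 80-90% of the screen; the 8-division grid is an assumption.
def volts_per_div(signal_pk_pk_volts: float, screen_fill: float = 0.85,
                  vertical_divisions: int = 8) -> float:
    return signal_pk_pk_volts / (screen_fill * vertical_divisions)

# An 800-mV p-p signal at ~85% of an 8-division screen
print(f"{volts_per_div(0.8) * 1000:.0f} mV/div")
```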

“When evaluating scopes, pay attention to what the vendor shows you,” says Brig Asay, Agilent Technologies’ product manager and planner for high-performance digital scopes. “A vendor may show you one scope at 90% of screen with another at 75%. The latter will have better numbers. That’s an unfair comparison. Everything needs to be the same to be fair.”

Critical Scope Specs

When you shop around for an oscilloscope at these rarified levels (or any level, for that matter), what are some of the critical specifications you should examine and ask questions about (Table 2)? One is step response. How faithfully does an instrument reproduce a digital transition? How fast is its rise time? If I input a perfect square step, how close does it look to perfection on the scope’s display?

Generally, a sampling scope is the only way to achieve perfection in this regard, and it is used as a golden reference. Of course, frequency response is crucial, and all of the test vendors do a fine job of properly representing their products. They also provide the bandwidth that they advertise, with more than acceptable rolloff. And, all use DSP techniques to match channels.

Jitter noise floor, which is a measurement of the very best a scope can do, and timebase stability, which represents the scope’s stability over a long signal acquisition, will ultimately determine the instrument’s sensitivity to capturing high-speed glitches. When capturing eye diagrams, this is not quite as critical. Application of the DSP to the measurement filters out the instability. But turning off the DSP can provide insight into what is really happening in an acquisition.

Any high-bandwidth digitizing system, such as the ADCs found in scopes, comprises multiple devices with sample rates within the bandwidth measurement range of the instrument. There will always be spurs on acquired signals, and this is measured in the scope’s spurious-free dynamic range (SFDR). No one likes to see spurs on waveforms that aren’t coming from the waveform. But if those spurs are far enough below the fundamental of the waveform, say 40 dB down or more, you shouldn’t be overly concerned.
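
A minimal sketch of how SFDR is read from a spectrum follows; the tone frequencies, the spur amplitude, and the masking width around the fundamental are made-up illustrations, not measurements from any scope.

```python
# Synthetic example of reading SFDR off a spectrum: the dB gap between the
# fundamental and the largest spur anywhere else in the band.
import numpy as np

fs, n = 80e9, 1 << 14
t = np.arange(n) / fs
sig = np.sin(2 * np.pi * 5e9 * t) + 3e-3 * np.sin(2 * np.pi * 17e9 * t)  # assumed spur
spec = np.abs(np.fft.rfft(sig * np.hanning(n)))
fund_bin = np.argmax(spec)
spur_region = np.delete(spec, np.arange(fund_bin - 3, fund_bin + 4))     # mask fundamental
sfdr_db = 20 * np.log10(spec[fund_bin] / spur_region.max())
print(f"SFDR ≈ {sfdr_db:.1f} dB")
```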

The Four Measurement Processes

Engineers navigate four phases or processes in the course of taking a measurement. The first of these is discovery, which encompasses these signal-integrity concerns. The idea here is to find that signal of interest, make sure it’s displayed properly, and view it on the instrument.

Engineers using a scope with a bandwidth of 20 GHz or more are probably more concerned with validation and characterization than with debugging. Debug work for, say, an embedded system of some sort does not call for a scope with that sort of bandwidth. But characterization is a different story. Here, very precise and repeatable measurement results are extremely important. An example of characterization would be preparing a system-on-a-chip (SoC) for production, when you want to ensure that it delivers the performance that will be specified on the datasheet.

In measuring digital signals, it’s essential to understand the role of sampling rates in the process. The Nyquist theorem tells us to sample at more than twice the highest frequency in the incoming signal; in practice, scope makers recommend a factor of about 2.5.

“That’s a good starting point but the higher the sampling rate, the better,” says Chris Loberg, senior technical marketing manager at Tektronix.

Using a higher sampling rate than 2.5 times the input frequency provides more sampling points on the signal of interest. This is particularly important in a later stage of the measurement process, namely analysis of extremely high-speed signals. Oversampling of input signals has become a critical element of the downstream signal analysis that all high-end scopes perform.
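
To make the oversampling point concrete, here is a back-of-the-envelope sketch; the 80-GS/s and 8-Gbit/s figures are illustrative assumptions, not specifications of any particular scope.

```python
# Samples per unit interval (UI) for a given data rate and sample rate.
def samples_per_ui(sample_rate_gsps: float, data_rate_gbps: float) -> float:
    ui_seconds = 1.0 / (data_rate_gbps * 1e9)          # one bit period
    return sample_rate_gsps * 1e9 * ui_seconds

# An 80-GS/s acquisition of an 8-Gbit/s signal yields 10 points per bit,
# well above the 2.5x Nyquist-based starting point mentioned above.
print(samples_per_ui(80.0, 8.0))
```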

In addition to frequency response and adequate oversampling, a third issue that impacts signal discovery is noise. In any scope, some amount of noise comes along with the acquisition system and the clock. Noise corrupts measurements both by obscuring the true amplitude of the signal and by shifting the apparent timing of the edges being measured. High-end scopes all offer extremely stable timebases for tracking incoming samples.

The scope vendors also specify rise-time performance for their products, a measure of how accurately an instrument can reproduce the rise time of a digital signal. This is a very important scope specification, as designers must know whether their scope is up to the task of measuring the rise time of their signal. For third-generation PCI Express, rise times are in the area of 30 ps.
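
A common first-order rule of thumb, not a vendor formula, relates bandwidth to 10-90% rise time as roughly 0.35 divided by the rise time; modern flat-response scopes often sit closer to 0.4 to 0.45, so the sketch below treats the constant as an assumption.

```python
# Hedged rule-of-thumb check: BW ≈ k / rise_time (10-90%), k ≈ 0.35 for a
# first-order response, often 0.4-0.45 for modern flat-response scopes.
def bandwidth_for_rise_time(rise_time_s: float, k: float = 0.35) -> float:
    return k / rise_time_s

# The ~30-ps rise times cited for third-generation PCI Express
print(f"{bandwidth_for_rise_time(30e-12) / 1e9:.1f} GHz")   # ≈ 11.7 GHz
```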

A final element of scope performance that impacts signal discovery is effective number of bits (ENOB). A scope’s ENOB spec is a great way to gauge the effectiveness of its ADC circuitry. ENOB is defined in an IEEE standard, and many designers use it to evaluate their own ADC designs.

When looking at ENOB specifications for various scopes, remember that no one number represents the scope’s capabilities in this regard. The spec will vary with frequency. Even if a scope vendor specifies ENOB at a given frequency, you’ll still want to see a plot of ENOB versus frequency to get a true picture of performance.

Translated for the instrument market, engineers can evaluate their scope’s ability to accurately represent the real digital performance of an electrical signal using the algorithm provided by the IEEE. (You can find that algorithm in a Tektronix application note at www2.tek.com/cmswpt/tidetails.lotr?ct=TI&cs=Technical+Brief&ci=17752&lc=EN.) The higher the ENOB result, the more accurately the instrument can represent the digital performance of the electrical pattern or clock being measured.
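
For reference, the usual IEEE relation converts a measured signal-to-noise-and-distortion ratio (SINAD) into ENOB; the SINAD values in this sketch are invented to illustrate how ENOB falls with frequency, and are not data for any instrument.

```python
# ENOB from SINAD, per the standard IEEE relation: ENOB = (SINAD - 1.76) / 6.02
def enob_from_sinad(sinad_db: float) -> float:
    return (sinad_db - 1.76) / 6.02

# ENOB typically falls as frequency rises, which is why a plot of ENOB vs.
# frequency says more than a single number. Values below are illustrative.
for freq_ghz, sinad_db in [(1, 38.0), (10, 32.0), (25, 28.0)]:
    print(f"{freq_ghz:>2} GHz: ENOB ≈ {enob_from_sinad(sinad_db):.1f} bits")
```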

The second phase of the measurement process is capturing the signal of interest. This is where triggering systems come into play. When buying a scope, you must ensure that its triggering system is fast enough to keep up with the signals you intend to measure. Often, engineers are overly concerned with scope bandwidth as they contemplate a purchase. It’s worthwhile to consider the trigger bandwidth as well.

The circuitry used to support the high-speed acquisition system of today’s performance oscilloscopes can also be deployed to speed up the triggering performance of an oscilloscope. With fast circuitry, trigger systems can process more quickly through bit patterns to determine proper trigger points based on complicated trigger setups.

Once a signal has been captured, the next phase of the process is search. Somewhere in that waveform is the glitch (or glitches) you need to find. Today’s scopes can store hundreds of millions of sample points, enough to record long stretches of a very fast signal at sample intervals on the order of 10 ps. To properly evaluate a signal, you must capture enough clock cycles (or unit intervals) to ascertain that the signal is stable over time.
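
A rough calculation, using assumed numbers rather than product specs, shows what that memory depth buys you:

```python
# How long a record a deep acquisition memory covers, and how many unit
# intervals of a serial signal that represents. Figures are illustrative.
def record_stats(memory_points: float, sample_rate_sps: float, data_rate_bps: float):
    duration_s = memory_points / sample_rate_sps
    unit_intervals = duration_s * data_rate_bps
    return duration_s, unit_intervals

dur, uis = record_stats(200e6, 80e9, 8e9)   # 200 Mpts at 80 GS/s, 8-Gbit/s signal
print(f"{dur * 1e3:.1f} ms of capture, about {uis / 1e6:.0f} million unit intervals")
```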

Effective searching through a signal record requires the instrument to have utilities tied to the triggering system. Scrolling through a multi-million-point acquisition with the scope’s horizontal scroll knob to find a glitch is inefficient, not to mention tedious. A better way is to use the scope’s search utilities to apply trigger events to the search. Most, if not all, modern scopes allow you to search for a particular pattern or error associated with the signal.

The final stage in the measurement process is analysis, a stage that is critical for users of high-performance scopes. The eye diagram is a fundamental tool for analysis. It overlays all of the rising and falling transitions of the signal on top of one another; the accumulation of these overlaid transitions forms the eye. Engineers are looking for a wide-open “eye” shape.
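
A minimal sketch of that folding operation follows, using synthetic data; the data rate, sample rate, and noise level are assumptions, and a real scope also performs clock recovery before folding.

```python
# Build the raw material of an eye diagram: fold the acquired waveform onto
# the unit interval so every bit transition overlays the others.
import numpy as np

ui = 125e-12                       # unit interval of an 8-Gbit/s signal (assumed)
fs = 80e9                          # assumed sample rate
t = np.arange(0, 2000 * ui, 1 / fs)
bits = np.random.randint(0, 2, size=2000)
wave = bits[np.minimum((t / ui).astype(int), 1999)] * 1.0
wave += 0.05 * np.random.randn(len(t))      # add a little noise

# Fold every sample onto a 0..2-UI window; plotting (time_in_ui, wave) as a
# scatter or 2D histogram yields the familiar eye shape.
time_in_ui = (t % (2 * ui)) / ui
print(time_in_ui[:5], wave[:5])
```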

All communications protocols specify attributes of the eye that a device must meet to pass certification. The scope vendors build software into their instruments that automates many measurements, such as eye diagrams.

“We grabbed developers from protocol teams to do protocols on the scopes,” says Agilent’s Brig Asay. “In all these packages, depending on the protocol, we have all the tests automated. You can pick a jitter test for a given standard and the application already has that measurement done. The measurement is scripted.”

These very powerful tools couldn’t be easier to use. You already have your signal coming into the scope. A single button-push performs quick protocol analyses and tells you exactly where errors are occurring so you can zoom in on them and debug.

You can inform the software that you’re measuring, say, a PCI Express signal. The software automatically sets up the scope to take the measurement, saving users from the drudgery and the associated potential for errors in doing the setup manually.

Get To The Bottom Of Jitter

Similarly, the scopes provide jitter measurement tools to help users get a better handle on one of the most critical timing measurements and potential sources of error in their systems. Designers must determine whether their circuits pass the jitter specs required of them.

It’s important for scopes, then, to incorporate a jitter tool that not only captures and displays those specs, but also looks for why failures occur. The tools should break down random and deterministic jitter, as well as go further by sorting through the components of deterministic jitter to help determine how to control or eliminate sources of jitter.
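
For context, many jitter tools report a total-jitter estimate at a target bit error rate using the dual-Dirac model; the sketch below is a generic illustration of that arithmetic, not a description of any one vendor’s implementation, and the jitter values are invented.

```python
# Dual-Dirac style jitter budget: total jitter at a target BER combines
# bounded deterministic jitter with random jitter scaled by a Q factor.
def total_jitter_ps(dj_pp_ps: float, rj_rms_ps: float, q_ber: float = 14.07) -> float:
    """Tj(BER) ≈ Dj(δδ) + Q * Rj_rms; Q ≈ 14.07 for BER = 1e-12."""
    return dj_pp_ps + q_ber * rj_rms_ps

# Example: 8 ps of deterministic jitter and 1.2 ps rms of random jitter
print(f"Tj @ 1e-12 ≈ {total_jitter_ps(8.0, 1.2):.1f} ps")
```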

Scopes also provide tools for serial-link data analysis, using if-then analysis to help designers understand the performance of their transmitter, receiver, and link as a system. These tools will automatically de-embed the effects of the measurement setup, saving designers the trouble of figuring out how probes and/or fixtures might be affecting the measurement itself.

Tektronix’s SDLA tool uses S parameters to de-embed (or embed) such effects. Another example is Agilent’s recently launched PrecisionProbe software (see “Software Helps Compensate For Probes, Cables”).

The PrecisionProbe software will AC-calibrate your probe to the exact requirements of the measurement you need to take. As a result, it eliminates all probe-to-probe variation by calibrating the cable and probe down to the tip. The AC calibration will take care of pitch adjustment.

A neat capability of the PrecisionProbe software is that it precludes the need to haul out a time-domain reflectometer to derive S-parameter values for simple cables. The software turns the scope itself into a time-domain reflectometer (TDR) and comes up with an S-parameter model for use in de-embedding the cable from measurement results. But keep in mind that the results you achieve in de-embedding are only as good as your S-parameter file, and not all software that purports to extract S parameters is necessarily equal.
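
Conceptually, de-embedding divides the measured spectrum by the fixture’s through response and transforms back. The toy sketch below, with an assumed first-order 12-GHz cable model, only illustrates that idea; real tools such as SDLA and PrecisionProbe handle full S-parameter matrices, impedance mismatch, and noise-amplification limits.

```python
# Simplified frequency-domain de-embedding of a cable from a measured record.
import numpy as np

def deembed(measured: np.ndarray, s21: np.ndarray) -> np.ndarray:
    """measured: time-domain record; s21: complex response at rfft frequencies."""
    spectrum = np.fft.rfft(measured)
    corrected = spectrum / np.where(np.abs(s21) > 1e-3, s21, 1.0)  # avoid noise blow-up
    return np.fft.irfft(corrected, n=len(measured))

# Toy example: a first-order low-pass "cable" applied to a step, then removed
fs, n = 80e9, 4096
f = np.fft.rfftfreq(n, d=1 / fs)
s21 = 1.0 / (1.0 + 1j * f / 12e9)              # assumed 12-GHz cable rolloff
step = np.repeat([0.0, 1.0], n // 2)
degraded = np.fft.irfft(np.fft.rfft(step) * s21, n=n)
restored = deembed(degraded, s21)
print(np.round(restored[n // 2 + 5], 3))        # ≈ 1.0 just after the step
```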

Tools such as Tek’s SDLA also analyze signal equalization techniques. As signal speeds top 5 Gbits/s while still traversing printed-circuit board (PCB) traces suited for much slower signals, the signals take a beating. It becomes nearly impossible to get a proper eye diagram at the receiver. Tools such as SDLA create filters that enable the evaluation of different equalization schemes.

Breaking It Down

Decomposition of jitter with serial-link analysis tools is very accurate if the tools have knowledge of the data pattern. This is especially true with data-dependent jitter. But with tools such as LeCroy’s SDAII serial data analysis software, it’s a one-pushbutton operation to measure total, random, and deterministic jitter.

Deterministic jitter can be broken down further in the LeCroy scheme with the company’s software, which uses not a spectral breakdown of jitter but rather a method based on an NQ-Scale algorithm. According to LeCroy’s Ken Johnson, a spectral breakdown runs the risk of misinterpreting what is actually random jitter as deterministic jitter.

This can occur, for instance, in a multilane system with crosstalk between the lanes. That crosstalk can result in data-dependent jitter that leaks into adjacent lanes. Such jitter is only pseudorandom because the data changes, so it appears as random jitter.

In LeCroy’s view, the NQ-Scale algorithm is a better way to separate out that cross-talked, data-dependent component and treat it as deterministic jitter rather than random jitter. It will return a different result from the spectral method used by other vendors. If crosstalk is not a factor, the two methods won’t return very different results. But as signal speeds and densities rise, the increased radiated energy in smaller systems has tended to make crosstalk a more pervasive problem.

According to Martin Miller, LeCroy’s chief scientist, the spectral method makes a very simple assumption: everything in the spectrum that is a peak is associated with deterministic jitter. It also assumes, says Miller, that the rest of the jitter, or what some mathematicians would call the background, is random in nature. The problem with that assumption is that in many scenarios the background is not a Gaussian process but is in fact a bounded process. Because it looks random, it gets lumped in with truly random jitter when in fact it is not.

An NQ-Scale algorithm handles crosstalk better because it looks at the problem statistically, not spectrally. Crosstalk-related, data-dependent jitter appears random in nature but is in fact purely bounded, says Miller. Because it cannot be predicted, it shows up in the spectrum with no periodicity, looking purely random. The spectral method mistakenly assumes it to be Gaussian and inflates the random-jitter component of the total jitter estimate.

Using the NQ-Scale algorithm is not without its tradeoffs, however. It requires lots of statistics, which means lots of measurements. Thus, it takes longer than the spectral approach. “The spectral method has the advantage of being convergent very quickly, albeit sometimes on the wrong number,” says Miller.

Probing Matters

The interface between scope and signal is the probe, and with extremely high-bandwidth signals, it’s not too hard to create problems in the form of ground loops, poor connections, and other issues. Even when probing is done properly, probes inherently vary from one to another.

As a result, it’s imperative for probes to be carefully matched and for correction to be done properly. Solder-in tips must be carefully attached, with no cold solder joints.

When probing for the purposes of signal discovery, it’s important to consider the probe and cabling as part of the overall scope system. To that end, make sure that the probe bandwidth matches the frequency response of the scope. The path the signal travels from board or chip to scope could be as little as 4 to 5 inches or as long as 7 to 8 inches. Any signal conditioning that occurs along that path has to be considered.

Most browsers offer adjustable pitches, which is a useful feature that lends flexibility to the probe. But every time you adjust the pitch on the tip, the tip’s impedance characteristics change. Bigger adjustments mean more change, so it behooves users to be mindful of this when probing. As more scope users gravitate toward software like Agilent’s PrecisionProbe, this will become less of an issue.

If you can avoid using a probe at all, of course, your measurements will be better for it. Generally speaking, probes are used during debugging work, while cable interfaces are used during validation to obtain a cleaner eye diagram. As a rule of thumb, cables deliver about a 3-dB noise advantage over probes. Thus, a higher-bandwidth probe is nice to have but not necessarily a huge advantage depending on what you’re using your scope for at a given time.

Tektronix offers a 20-GHz probe with tri-mode capability. The probe’s solder-in or browser tip has a positive and a negative contact as well as two grounds, enabling it to be used as a differential probe or as two single-ended probes. LeCroy’s claim to fame in the probe department is its resistor-equipped tips, which it says yield better signal fidelity, bringing what you see on the screen with the probe closer to what you’d see if you were using cables. Agilent’s offerings include amplifier bandwidths up to 30 GHz with 28-GHz tips.

The Hows And Whys Of DSP

All of the high-end scopes on the market use DSP, but some manufacturers use it in different ways than others. In the past, DSP was used to boost bandwidth in many instruments, but it has fallen out of use for such purposes. These days, DSP is used more often to ensure that channels are aligned in terms of rolloff. DSP enables the matching of frequency response between channels to compensate for hardware variations. Once you have matched the channels in terms of frequency response, then you can match them from the point of view of volts/division.

In its instruments, LeCroy takes DSP in a slightly different direction, providing a couple of different ways to adjust characteristics in what it terms “optimization mode.” According to Ken Johnson of LeCroy, two things affect step-response shape: the raw magnitude response, or the rolloff, and the group delay, or phase response.

“Think of phase response as something from the golden age of analog,” says Johnson. “It’s the separation between high and low frequencies, the propagation delay. There was a natural amount of group delay in any old analog scope. That caused a step response with no preshoot and only overshoot. This is because of the group delay. It’s natural and it was just how things were. It’s what people think of as normal.”

A clever technique that LeCroy employs is to measure the group delay in its scopes and apply DSP to flatten out this response. The result is equalized preshoot and overshoot, which is good for eye diagrams and serial data evaluation.

If instead you’re looking for the fastest possible rise time, rolloff is an undesirable characteristic. If, for example, you’re down 3 dB at 16 GHz, you will lose high-frequency bandwidth. To address this scenario, LeCroy offers a flatness mode, which provides the fastest possible rise time but will offer more preshoot and overshoot.
