Two primary concerns dominate medical device and system testing. One is the frequent presence of high voltages, which makes the test process itself a source of potential safety issues for patients and technicians during diagnostic procedures such as X-rays and CT scans. These voltages become even more critical in implantable devices such as pacemakers.

But the real elephant in the room is the recognized standard that governs the process: the International Electrotechnical Commission’s (IEC’s) 60601-1 technical standard for the safety and effectiveness of electrical medical equipment. IEC 60601-1 is a big, broad mandate encompassing a general standard, about 10 collateral standards, and approximately 60 particular standards.

In this report, we’ll look at some of the particular issues surrounding high-voltage test, such as breakdown voltages and leakage currents. We’ll also look at the transition that’s now ongoing from the second edition of the IEC 60601-1 standard to implementation of the third edition and what that means for the process of testing electrical medical equipment.

Beware Of High Voltages

The presence of high voltages in many medical systems presents particular challenges in terms of test and measurement. Devices such as defibrillators and pacemakers are subjected to stringent regulatory review by the U.S. Food and Drug Administration (FDA). A great deal of care must be taken in their design and debugging, or people could be injured or killed.

“Safety has primarily to do with having these devices discharge properly,” says Mark Cejer, director of marketing at Keithley Instruments. “Having a pacemaker fire when it’s not supposed to can be really bad, or leaking the currents from inside that device into the body. We need to ensure that it remains isolated.”

To help ensure that these devices are safely isolated, test engineers perform breakdown voltage testing and leakage current testing. They subject the devices to very high voltages to see if they can withstand them. Then, they measure the leakage current that results.

Prototypes of the end product must be tested in this fashion, and so must their subassemblies and components. Capacitors, semiconductor devices, and other components all carry very stringent test requirements. Their individual leakage currents are tabulated and aggregated to arrive at a total leakage current for the end product.
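That aggregation step is simple bookkeeping, but it is worth seeing concretely. The sketch below sums component-level leakage currents against a system budget; every value here is invented for illustration and does not come from any real device or standard.

```python
# Hypothetical illustration: rolling component-level leakage currents up
# into a total for the end product. All values are invented for the example.
component_leakage_pa = {
    "filter_capacitor": 850.0,    # picoamps
    "output_mosfet": 120.0,
    "isolation_barrier": 45.0,
}

# Assumed system-level budget of 10 nA (10,000 pA) for the end product.
budget_pa = 10_000.0

total_pa = sum(component_leakage_pa.values())
margin_pa = budget_pa - total_pa
print(f"total leakage: {total_pa} pA, margin: {margin_pa} pA")
assert total_pa < budget_pa, "leakage budget exceeded"
```

In practice each entry would come from a measured worst-case value at the component's rated test voltage, not a nominal figure.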

Instruments for these applications must be highly accurate and have enough dynamic range to support high voltages and extremely low currents, which may be in the picoamp range at the component or subassembly level. At the end-product or system level, currents tend to be in nanoamp territory, or perhaps hundreds of picoamps.

About the best option for these measurements is a source-measure unit (SMU). What are SMUs and how do they fit into these kinds of applications? In essence, SMUs are voltage and current sources that respond very quickly to dc voltage and/or current demands from the device under test (DUT). They are capable of four-quadrant operation, acting as a positive or negative dc source or as a sink or load (Fig. 1).

SMUs additionally provide accurate, repeatable measurements, typically with resolution of five and a half or six and a half digits. They can be utilized to perform sweeps of both current and voltage to determine the I-V characteristics of the DUT. In doing so, SMUs operate over a wide range of power and signal levels.
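The I-V sweep an SMU performs can be sketched in a few lines: source a staircase of voltages and measure the current at each step. This sketch is not tied to any instrument's command set; the DUT is simply modeled as a 1-MΩ resistor, where a real test would read back current from the instrument.

```python
# Sketch of an SMU-style I-V sweep: source a staircase of voltages and
# record the measured current at each step. The "measure" callback stands
# in for the instrument readback; here the DUT is modeled as a resistor.
def iv_sweep(v_start, v_stop, points, measure):
    step = (v_stop - v_start) / (points - 1)
    return [(v_start + i * step, measure(v_start + i * step))
            for i in range(points)]

R_DUT = 1e6  # ohms -- an assumed, purely ohmic model of the DUT
curve = iv_sweep(0.0, 10.0, 11, lambda v: v / R_DUT)
for v, i in curve:
    print(f"{v:5.1f} V -> {i * 1e6:6.2f} uA")
```

A real breakdown test would look for the point where measured current departs from this linear I-V relationship as voltage rises.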

Devices like power MOSFETs have very low resistance and handle very large currents when turned on, but they’re also designed to have very high resistance and allow nearly zero current to flow when turned off. In the on state, currents can be in the tens of amps. When turned off, current might be in the nanoamp range or less.

Equipment such as SMUs that can take accurate measurements over this wide range can quickly pay for themselves just by reducing the number of instruments it takes to make such measurements.

A good candidate for these applications is Keithley's Model 2651A high-power System SourceMeter, which fulfills many of the requirements of breakdown-voltage and leakage-current tests for medical devices and equipment (Fig. 2).

The SMU sources or sinks up to 2 kW of pulsed power (±40 V, ±50 A) or 200 W of dc power. It also precisely measures signals as low as 1 pA/100 μV at speeds up to 1 μs/reading. Consequently, the instrument lends itself to the characterization of many different components and subassemblies to a high degree of detail.

Key to the Model 2651A’s functionality is a pair of very high-speed, 18-bit analog-to-digital converters (ADCs), one for current mode and one for voltage. In digitizing measurement mode, they capture up to 1 million readings/s for continuous 1-μs/point sampling. These converters run simultaneously for accurate source readback without affecting test throughput.

There’s also an integrating measurement mode based on 22-bit ADCs. This mode optimizes the SMU’s operation for applications that demand the highest possible resolution and accuracy.

It’s crucial to ensure very tight configuration control and traceability for measurements. This factors into qualification testing to meet FDA requirements. Calibration and stability over long periods of time are critical in medical test, as is a thorough understanding of the National Institute of Standards and Technology’s (NIST’s) traceability standards and things like the reference volt and ohm.

Pulse Characterization

SMUs can be very useful in qualification testing to meet FDA requirements. This kind of testing is highly variable depending on the device involved and its end applications. Typically, these tests involve sourcing the voltage and measuring the currents, and that leads to pulse characterization.

Consider an implantable defibrillator, such as those made by manufacturers like Medtronic (Fig. 3). These devices sense ventricular fibrillation, a potentially lethal condition in which the heart quivers chaotically and pumps little or no blood. The defibrillator addresses arrhythmia of the ventricles by means of a high-energy, dc pulse delivered asynchronously to the cardiac tissue. The defibrillation discharge will often restore the heart’s normal rhythm.

On the level of the components within a device like a defibrillator, there is a movement toward wide-bandgap devices such as silicon carbide, which enable faster switching speeds. So when a defibrillator or pacemaker has to send a pulse, it senses, detects, and fires much more quickly than it could with traditional semiconductor technologies.

From a test and qualification perspective, these devices must be tested by sourcing the equivalent of a human-heartbeat waveform. The manufacturer must ensure that the shape and fidelity of the pulse delivered by the device is correct. In these highly dynamic tests, waveforms must be sourced and measured to an extremely high degree of accuracy.

Looking Into Digital X-Rays

In addition to characterization and qualification testing, high-voltage test is prevalent in other areas of medical electronics. For one, the advent of digital X-ray technology, used primarily by dental practices but also found in place of traditional X-ray systems, has introduced new complications and new domains into the testing sphere.

There always has been a lot of fear about X-rays, but there also has been much innovation. Digital technology makes X-rays more efficient and easier to use. The biggest benefit comes from the safety standpoint, as X-ray technicians achieve better results with less exposure for the patient.

“Digital X-rays have been around for a number of years but surprisingly it’s a somewhat unregulated industry,” says Randy White, technical marketing manager at Tektronix.

A primary measurement for X-rays is detective quantum efficiency (DQE). In 2003, the IEC published IEC 62220-1 to standardize the methods and algorithms required to measure the DQE of digital X-ray imaging systems.

The DQE describes how effectively an X-ray imaging system is able to produce an image with a high signal-to-noise ratio (SNR) relative to an ideal detector. Some see it as an alternate measure of the radiation dose efficiency of a detector, because the required radiation exposure to a patient decreases as the DQE is increased for the same image SNR and exposure conditions.
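DQE is commonly expressed as the ratio of squared output SNR to squared input SNR (its spatial-frequency dependence is omitted here for simplicity). The numbers below are illustrative only, not measurements of any real detector.

```python
# DQE relates a detector's output signal-to-noise ratio to the SNR of the
# incident radiation: DQE = SNR_out**2 / SNR_in**2. An ideal detector has
# DQE = 1; real detectors fall below that.
def dqe(snr_in, snr_out):
    return (snr_out / snr_in) ** 2

# Illustrative case: a detector that preserves 70% of the input SNR.
print(round(dqe(snr_in=100.0, snr_out=70.0), 2))  # -> 0.49
```

Note the quadratic relationship: losing 30% of the SNR costs roughly half the dose efficiency, which is why small SNR improvements translate into meaningful exposure reductions for the patient.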

“DQE tells us, for a given amount of radiation, how efficiently an X-ray system’s sensor is able to detect and reconstruct the image. It’s like the effective number of bits for a scope,” says White.

“DQE is an overall figure of merit for digital X-ray systems. Other key metrics include the system’s noise level and its noise spectrum density. All of them, however, are tied to the DQE,” he explains.

“Part of that measurement is signal and image processing,” says White. “Some is high-voltage measurement. The step generators in these systems are on the order of 50 to 60 kV. That’s with surprisingly low current of less than 1 A.”

Voltages on this scale demand specialized high-voltage probes. “At some point the probing and connectivity almost become more important than the measurements themselves from a safety and reliability standpoint,” says White.

An example of the breed is Tektronix’s P6015A, a single-ended passive probe rated for up to 20 kV dc and up to 40 kV peak for pulses of up to 100 ms (Fig. 4). The probe’s bandwidth is 75 MHz. The 40-kV pulsed peak is enough for many pulsed high-voltage scenarios. According to White, the probe has seen service at even higher voltages in power-transmission settings.

To facilitate measurements at such high voltages, the probes had to meet extra Underwriters Laboratories (UL) certifications. To mitigate the danger of arcing, an environmentally safe silicone compound serves as a dielectric. Other features include a 7- to 49-pF compensation range, a small compensation box that fits on adjacent amplifier inputs, and a readout option for use with most Tektronix digital scopes.

Low-Voltage Challenges

Challenges in medical electronics testing come not only at the high-voltage end of the spectrum, but also in the low-voltage arena. Portable equipment like the gear that is used by paramedics or gets moved around in a hospital once was seen as a backup to stationary equipment. But as portable equipment has become a mainstay, usability is a prime concern, especially battery life.

In medical and consumer applications alike, the goal is always to maximize battery life. Pacemaker batteries must last five to 10 years and still be reliable. The primary cause of battery failure is overcharging.

The good news is that battery management circuitry is quite sophisticated, not only for monitoring charge levels, but also for isolating the battery from its power supply when peak charge is reached. This must be done in a manner that’s appropriate for the battery chemistry in use, which these days is almost certainly lithium ion.

Thus, the overall trend for battery technology in portable medical equipment is toward more onboard electronics. This not only extends battery life through advanced power management, it also helps to ensure an appropriate battery-system design that will enhance reliability.

How are batteries actually tested? One way is through UL, whose certification program includes temperature cycling, discharge cycling, and other mechanical and electrical tests.

Many of the voltages present in medical devices are extremely low. “In a lot of these devices, we’re talking about voltages in the microvolt range,” says Tektronix’s Randy White. “I don’t know of a scope on the market that can be set to less than 1 mV/division.” Thus, measurement of these voltages is itself a challenge. One method is to use averaging to increase resolution (see “Understand The Tradeoffs Of Increasing Resolution By Averaging” at electronicdesign.com).
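The averaging technique mentioned above rests on a standard statistical fact: averaging N independent noisy readings of a dc level shrinks the random-noise standard deviation by roughly the square root of N. The sketch below demonstrates this with simulated readings; the 50-µV signal and 100-µV noise level are assumed values chosen so the noise initially swamps the signal.

```python
# Demonstration of increasing effective resolution by averaging: a dc level
# buried in noise larger than itself becomes recoverable after averaging
# many readings (noise on the mean falls roughly as 1/sqrt(N)).
import random
import statistics

random.seed(0)          # fixed seed so the demonstration is repeatable
TRUE_UV = 50.0          # hypothetical 50-uV dc signal
NOISE_UV = 100.0        # noise std dev -- larger than the signal itself

def reading():
    """One simulated noisy measurement of the dc level."""
    return random.gauss(TRUE_UV, NOISE_UV)

single = reading()
averaged = statistics.mean(reading() for _ in range(10_000))
print(f"single reading: {single:8.1f} uV   10k-average: {averaged:8.1f} uV")
```

With 10,000 averages the noise on the mean drops from 100 µV to about 1 µV, pulling the 50-µV level cleanly out of the noise; the tradeoff, as with any averaging scheme, is measurement time and bandwidth.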

Another approach preamplifies these small signals. Tektronix’s ADA400A active differential preamplifier is fed with a differential signal, which it amplifies 100 times while subtracting out common-mode noise with up to 100 dB of common-mode rejection (Fig. 5). Factoring the 100X gain into a scope’s 1-mV/division minimum setting yields an effective 10 μV/division.

Of course, bandwidth tradeoffs come with this technique. “You’re looking at 2 MHz of bandwidth,” says White. “For a lot of applications that need that sensitivity, bandwidth isn’t an issue at all. For example, hearing aids only need audio range up to 50 kHz. With medical sensors, it could be in the audio range or even slower.”

In addition to handling very low-amplitude voltages, the amplifier facilitates measurements of signals that aren’t ground referenced. The high impedance of both inputs eliminates the need for additional ground points in the DUT, avoiding circulating currents that could disturb the measurement or the circuit.

Regulatory Issues

Perhaps the biggest looming challenge in testing medical electronics is the pending transition from the second edition of the IEC 60601-1 standard to the third edition. Whereas the second edition is a very prescriptive testing standard, specifying many of the test conditions, the third edition represents a major shift in testing philosophy. Once the third edition becomes mandatory, medical electronics OEMs will be required to develop a risk-management strategy for their end products. The result is a somewhat more variable testing process.

Another important new piece to the third edition is the concept of “essential performance,” which is defined as “the performance necessary to achieve freedom from unacceptable risk.” The standard also notes: “Essential performance is most easily understood by considering whether its absence or degradation would result in an unacceptable risk.”

Even though the third edition of IEC 60601-1 now includes essential performance requirements, a given OEM’s essential performance requirements may vary from the standard, depending on the proposed use of the device. For example, a laser device used for removal of tattoos will be held to less stringent essential performance criteria than would a laser device that is being used for eye surgery.

The result of adding risk management and essential performance is a much more variable testing process. For example, when testing a device for spill resistance, the second edition specifies spilling 200 mL of water from a distance of 20 cm for 10 seconds. Afterwards, the standard mandates a dielectric and leakage test to determine whether the water caused any short circuits.

In the third edition, the onus falls more heavily on OEMs to determine the nature of such testing based on their own assessment of the potential risks of the product. “Now, the OEM will have to tell us what liquid to pour, how much of it, from how far, and for how long,” says Todd Konieczny, North American medical technical leader at Intertek, a provider of quality and safety certification testing in many markets.

Another scenario with big changes in the third edition is temperature testing. The second edition limited the temperature of an external pushbutton on a system to 50°C. In the third edition, the measurement is much more subjective and related to how long a person can touch that button.

Other tests that are changed from the second to the third editions include the list of single-fault conditions. As noted earlier, OEMs must submit their own risk-management checklists and supporting documentation to a third-party testing company such as Intertek before submitting the device itself for evaluation. Medical equipment must be designed and built so there are no unacceptable risks after a single fault is applied. The equipment is deemed free of unacceptable risk if those single faults are found not likely to lead to failure.

The list of single faults includes:

  • Interrupted protective grounding
  • Interrupted supply lead
  • Mains voltage on floating applied part
  • Mains voltage on data ports or enclosure
  • Detachment of wire connections, screw terminals, components, etc.
  • Locking of moving parts and rotors
  • Locking of cooling fans or interruption of cooling circulation
  • Blocking of ventilation openings or filters
  • Short circuit of one isolation in a double-isolation design
  • Short or open circuit of semiconductors
  • Failure of thermostats

The new philosophy of the third edition of IEC 60601-1 can be shown in a diagram (Fig. 6). In the second edition, there’s very little interaction between the OEM and the testing laboratory. Ideally, there would be a design review after the risk analysis and initial design are completed, but in actual practice, this often is overlooked.

In contrast, the third edition necessitates much more interaction between the OEM and the test lab. They must collaborate on a document known as the “risk management file,” or RMF, which details and quantifies every conceivable quality and safety risk inherent in the design.

According to Intertek’s Todd Konieczny, the third edition test regime does not add significant time to the actual testing process. Rather, the paperwork has ballooned due to the risk-management requirements.

When should medical equipment/device OEMs start worrying about IEC 60601-1’s third edition? If you’re in Canada or the European Union, you’re already likely to be working within it as it takes effect on June 1, 2012. In the United States, the FDA already accepts designs tested to third edition standards, but it does not become mandatory until June 30, 2013.

Preparing for third edition testing is a matter of design reviews, says Konieczny. “We advise having a test laboratory come in and take a look at the design as well as the early versions of the RMF,” he says. Intertek has retrained its test engineers in reviewing risk-management files and the software portions of the standard. OEMs would do well to study IEC 60601-1 and begin preparing now.

Test houses such as Intertek can provide guidance on risk management and evaluate an RMF in the context of IEC 60601-1 to ensure that it makes sense. The RMF is often complex and can take some time to compile and evaluate.