Use Forward Error Correction To Improve Data Communications
As bandwidth demands increase and the tolerance for errors and latency decreases, designers of data-communication systems are looking for new ways to expand available bandwidth and improve transmission quality. One solution isn't new, but it's proving increasingly useful. Called forward error correction (FEC), this design technique has been used for years to enable efficient, high-quality data communication over noisy channels, such as those found in satellite and digital cellular-communications applications.
Recently, there have been significant advances in FEC technology that allow modern systems to approach the Shannon limit, the theoretical maximum rate at which information can be carried over a given channel. These advances are being used successfully to reduce cost and increase performance in a variety of communication systems, including satellite links, wireless LANs, and fiber-optic networks. In addition, high-speed silicon ASICs for FEC applications have been developed, promising to further revolutionize communication-systems design.
The big attraction of FEC technology is that it adds redundant information to a data stream, enabling a receiver to identify and correct errors without the need for retransmission.
As the capabilities of FEC increase, the number of errors that can be corrected also increases. The advantage is obvious. Noisy channels create a relatively large number of errors. The ability to correct these errors means that the noisy channel can be used reliably. This enhancement can be parlayed into several system improvements, including bandwidth efficiency, extended range, higher data rate, and greater power efficiency, as well as increased data reliability.
FEC requires that data first be encoded. The original user data to be transmitted over the channel is called information bits, while the data after the addition of error-correction information is called coded bits.
For k information bits, the encoding process results in n coded bits, where n > k. All n bits are transmitted. At the receiver, channel measurements are made and estimates of the transmitted n bits are generated. An FEC decoder uses these n bit estimates, along with knowledge of how all n bits were created, to generate estimates of the k information bits. The decoding process effectively detects and corrects errors in the n channel-bit estimates while recovering the original k information bits.
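To make the k-to-n idea concrete, here's a minimal Python sketch using a trivial (n,k) = (3,1) repetition code. It's far weaker than the codes discussed in this article, but it shows the essential encode, corrupt, and correct cycle:

```python
# Minimal illustration of the k -> n FEC idea using a trivial
# (n, k) = (3, 1) repetition code. Real systems use far stronger
# codes; this only shows encode -> corrupt -> decode/correct.

def encode(bits):
    """Each information bit becomes n = 3 identical coded bits."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    """Majority vote over each group of 3 recovers the information
    bit even if any single bit in the group was flipped."""
    out = []
    for i in range(0, len(coded), 3):
        group = coded[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out

info = [1, 0, 1, 1]              # k information bits
coded = encode(info)             # n = 3k coded bits
coded[4] ^= 1                    # the channel flips one bit
assert decode(coded) == info     # the error is corrected
```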
Because the decoder only uses the information received and never requests a retransmission, the flow of data is always moving forward. The process is, therefore, known as forward error correction.
The FEC decoding process doesn't need to generate n-bit estimates as an intermediate step. In a well-designed decoder, quantized channel-measurement data is taken as the decoder input. This raw channel measurement data consists of n metrics where each metric corresponds to the likelihood that a particular bit is a logical 1. Furthermore, the likelihood that a given bit is a logical 0 is related to this number. These metrics are usually represented by 3- or 4-bit integers called soft decision metrics. The decoder output is an estimate of the k information bits. Typically, decoding with soft decision metrics is computationally intensive. Very often, it's performed on a decoder ASIC that's specifically designed for the task.
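As an illustration only, a 4-bit soft-decision metric might be derived from a received channel value as in the sketch below. The linear clip-and-scale mapping is an assumption for clarity; real decoders use mappings matched to their channel and arithmetic:

```python
# Hedged sketch: quantizing a received channel value to a 4-bit
# soft-decision metric (0..15). The exact mapping is decoder-
# specific; this linear clip-and-scale is only illustrative.

def soft_metric(received, z=1.0, bits=4):
    """Map a received antipodal value (+z sent for 1, -z for 0)
    to an integer metric: 15 ~ 'almost surely 1', 0 ~ 'almost
    surely 0', midscale ~ 'no confidence'."""
    levels = 2 ** bits - 1             # 15 for 4-bit metrics
    x = max(-z, min(z, received))      # clip to [-z, +z]
    return round((x + z) / (2 * z) * levels)

print(soft_metric(0.97))    # 15: confident logical 1
print(soft_metric(-0.02))   # 7:  barely leans toward 0
print(soft_metric(-1.30))   # 0:  confident logical 0
```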
A code's performance is strongly dependent on the data-transmission channel. To facilitate comparing one code with another, a standard model is used in which noise is added to antipodal signals. In this model, the noise is additive white Gaussian noise (AWGN): independent noise samples are added to the antipodal channel symbols (Fig. 1). The variance of the noise is determined by the power spectral density of the noise (No). Antipodal signaling is a mapping in which 1s and 0s are transmitted as +Z and -Z, respectively. For example, if Z represents 1 V on a transmission wire, 1s and 0s would be transmitted as +1 V and -1 V. The received energy per transmitted data bit (Eb) is proportional to Z^2. An important parameter in the system is the signal-to-noise ratio, Eb/No. The AWGN model accurately represents many types of real channels, and channels exhibiting other kinds of impairments often have an AWGN-like component as well.
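The sketch below simulates this model, assuming unit-amplitude antipodal symbols (Z = 1) and noise variance No/2 set by the chosen Eb/No:

```python
# Hedged sketch of the AWGN channel model: antipodal symbols
# (+Z for 1, -Z for 0) plus independent Gaussian noise whose
# variance is set by the signal-to-noise ratio Eb/No.
import random

def awgn_channel(bits, ebno_db, z=1.0):
    ebno = 10 ** (ebno_db / 10)           # Eb/No as a linear ratio
    sigma = (z * z / (2 * ebno)) ** 0.5   # std dev for variance No/2
    return [(z if b else -z) + random.gauss(0.0, sigma) for b in bits]

# Uncoded hard-decision BER at Eb/No = 4 dB (expect roughly 1e-2).
bits = [random.randint(0, 1) for _ in range(100_000)]
rx = awgn_channel(bits, ebno_db=4.0)
errors = sum((r > 0) != bool(b) for r, b in zip(rx, bits))
print(f"BER ~ {errors / len(bits):.4f}")
```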
FEC codes come in two primary types, convolutional and block. In a simple convolutional encoder, a sequence of information bits passes through a shift register, and two output bits are generated per information bit (Fig. 2). Then, the two output bits are transmitted. Essentially, the decoder estimates the state of the encoder for each set of two channel symbols it receives. If the decoder accurately knows the encoder's state sequence, then it knows the original information sequence too.
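A minimal sketch of such an encoder appears below. It assumes the widely used (7,5)-octal generator taps for a rate-1/2, K = 3 code; the exact taps of the Figure 2 encoder aren't given here, so treat this as representative rather than a copy of the figure:

```python
# Hedged sketch of a rate-1/2, K = 3 convolutional encoder,
# assuming the common (7, 5)-octal generator taps. Two output
# bits are produced per input bit.

def conv_encode(bits):
    s1 = s2 = 0                     # K - 1 = 2 memory elements
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)     # generator 7 (binary 111)
        out.append(b ^ s2)          # generator 5 (binary 101)
        s1, s2 = b, s1              # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))    # 8 coded bits for 4 info bits
```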
Using Constraint Length
The encoder in Figure 2 has a constraint length of K = 3 and a memory of K - 1, or two. For this code, the optimal decoder, otherwise known as the Viterbi decoder, has 2^(K-1), or four, states. A number of values are computed for each state. As K increases, so does the performance of the code, but at a diminishing rate. The complexity of the decoder, though, increases exponentially with K. Popular convolutional codes in use today employ K = 7 or K = 9.

Block codes are a little more straightforward. A block code takes k information bits and generates one or more "parity" bits, which are appended to the information bits to form a group of n bits, where n > k. The encoded pattern of n bits is referred to as a code word, and this code word is transmitted in its entirety. Figure 3 illustrates a simple (n,k) = (8,4) block-code encoder. The receiver obtains n channel metrics, and the decoder estimates the most likely of the 2^k possible information sequences from them. Block codes are usually rich in algebraic structure that can be exploited to simplify decoding. Perhaps the most popular block codes presently implemented are Reed-Solomon codes, often a shortened version of an n = 2040-bit (255-byte) code. Until very recently, the most powerful codes were built from the concatenation of a convolutional code and a Reed-Solomon code.
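The sketch below shows one standard construction of the (8,4) extended Hamming encoder of Figure 3: three Hamming parity checks plus an overall parity bit. The figure's exact parity equations aren't reproduced here, so the specific check assignments are an assumption:

```python
# Hedged sketch of an (8, 4) extended Hamming encoder: 4 parity
# bits appended to 4 information bits. One standard construction;
# Figure 3's exact parity equations may differ.

def hamming84_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # three Hamming parity checks
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    p4 = d1 ^ d2 ^ d3 ^ d4 ^ p1 ^ p2 ^ p3   # overall (even) parity
    return [d1, d2, d3, d4, p1, p2, p3, p4]

print(hamming84_encode([1, 0, 1, 1]))   # one 8-bit code word
```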
The latest advance in FEC is a class of codes called Turbo Codes. These are very powerful codes built from two or more smaller, simpler constituent codes, which may be either systematic convolutional or block codes. The idea behind Turbo Codes is to encode the data once via encoder 1, scramble the order of these output bits in a manner known to the receiver, and then encode the scrambled bits with a second encoder, encoder 2. The twice-encoded bits are then transmitted.
At the receiver, decoding proceeds in the reverse order of the encoding process. The first decoding is as per encoder 2 and the second decoding is as per encoder 1. Between the two decodings, the scrambling operation is reversed.
What makes a Turbo Code unique is that the decoding process can be performed again. The second constituent decoder addresses errors left from the first. The second pass of the first decoder then addresses errors left from the second decoder. The scrambling or interleaving operation ensures that errors from one decoder are "spread out" as seen by the other decoder. In order to maximize performance, this decoding process is typically iterated several times.
For this iterative process to work optimally, each constituent decoder must take soft decision metrics as its input in addition to generating soft outputs. Every decoder has to generate an output of n soft decision metrics corresponding to the likelihood of each bit in the encoded sequence. These n output metrics take into account the code redundancy as well as the n soft inputs. This decoder property of utilizing soft inputs and generating soft outputs is unique to Turbo Codes and significantly increases the complexity of the constituent decoders.
The fact that one decoder's output feeds the input of the next is somewhat analogous to a turbocharger in a high-performance engine. The turbocharger uses engine exhaust (output) to power an air-intake blower, thus enhancing the input. This is where the "turbo" in Turbo Code comes from: Turbo Codes are iteratively decoded codes.
Turbo Code technology can best be illustrated with an example that uses block codes as the constituent codes. Consider the (8,4) extended Hamming code of Figure 3. This code takes 4 information bits, computes 4 parity bits, and appends these 4 parity bits to the information bits to create an 8-bit code word for transmission:
I I I I P P P P
Here, I corresponds to an information bit and P corresponds to the parity or redundancy bits. A two-dimensional product code constructed from this (8,4) extended Hamming code might be as follows:
I | I | I | I | PH | PH | PH | PH |
I | I | I | I | PH | PH | PH | PH |
I | I | I | I | PH | PH | PH | PH |
I | I | I | I | PH | PH | PH | PH |
PV | PV | PV | PV | PVH | PVH | PVH | PVH |
PV | PV | PV | PV | PVH | PVH | PVH | PVH |
PV | PV | PV | PV | PVH | PVH | PVH | PVH |
PV | PV | PV | PV | PVH | PVH | PVH | PVH |
where I corresponds to information bits, PH are the parity bits of the code words constructed horizontally, PV are the parity bits of the code words constructed vertically, and PVH are the parity bits computed over the parity bits themselves.
This horizontal-vertical structure results in a product code. Built from linear block codes, product codes have the property that all rows as well as all columns form code words. In general, different codes can be used for both the horizontal and the vertical blocks. This allows for a wide variety of code rates and block sizes. Plus, 3D codes are possible.
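Building on the (8,4) encoder sketched earlier, the following sketch constructs the two-dimensional product code shown above by encoding rows first and then columns. The parity equations are the same illustrative ones assumed before:

```python
# Hedged sketch of the 2-D product-code construction: encode each
# 4-bit row with the (8, 4) code, then encode each of the 8
# resulting columns with the same code. Because the code is
# linear, the "checks on checks" (PVH) come out consistent.

def hamming84_encode(d):
    d1, d2, d3, d4 = d
    p1, p2, p3 = d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4
    p4 = d1 ^ d2 ^ d3 ^ d4 ^ p1 ^ p2 ^ p3   # overall (even) parity
    return [d1, d2, d3, d4, p1, p2, p3, p4]

def product_encode(info4x4):
    rows = [hamming84_encode(r) for r in info4x4]         # 4 rows x 8 cols
    cols = [hamming84_encode([rows[i][j] for i in range(4)])
            for j in range(8)]                            # encode every column
    return [[cols[j][i] for j in range(8)] for i in range(8)]

block = product_encode([[1, 0, 1, 1],
                        [0, 1, 0, 0],
                        [1, 1, 1, 0],
                        [0, 0, 1, 1]])
for row in block:
    print(row)   # all 8 rows (and all 8 columns) are (8,4) code words
```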
Decoding is performed one block at a time. All horizontal blocks are decoded, and then all the vertical blocks are decoded, or vice versa. The decoding procedure is iterated several times to maximize the decoder's performance.
The performance of a Turbo Code is best determined by computer simulation. Figure 4 shows the performance of a Turbo Product Code (TPC) built from the (64,57) code used in both the x and y dimensions. This code is called the (64,57)^2 TPC, or the (4096,3249) TPC. Also included is a bit-error-rate (BER) plot of data transmitted without coding on the AWGN channel. The difference in signal-to-noise ratio (Eb/No) between the code's BER performance curve and the uncoded BER performance curve, at some specified BER, is referred to as the coding gain of the code.
Simulation software, as well as a hardware evaluation board for this code and many other TPCs, is available from Efficient Channel Coding Inc. (www.eccincorp.com). In addition, TPC ASICs that support this code and others are available from Advanced Hardware Architectures (www.aha.com).
Typically, the metric used to evaluate the quality of service (QoS) of a communications channel is BER. It isn't always the best metric to use, though. For instance, in packet networks, any number of bit errors within a packet constitutes a packet error. When this happens, the packet is usually discarded and a retransmission is requested. In that case, a more appropriate QoS metric is the packet error rate.
A consequence of using powerful FEC is that when a single error event occurs, several bits are very often actually in error. As such, the packet-error-rate performance is close to the BER performance.
If no coding is used, packet errors come from random, unrelated occurrences of bit errors, and a typical packet-error event has only one bit in error in the entire packet. As a consequence, the packet-error rate is significantly greater than the BER. This can be seen in the uncoded code-block error rate (CER) curve of Figure 4, where the packet size is set to the code-block size of 3249 bits.
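The relationship is easy to quantify. For independent bit errors, a packet survives only if every bit survives, so the packet-error rate over an N-bit packet is 1 - (1 - BER)^N, roughly N x BER for small BER:

```python
# Arithmetic sketch: packet-error rate (PER) for independent,
# uncoded bit errors over an N-bit packet.

def per(ber, n_bits):
    return 1 - (1 - ber) ** n_bits

n = 3249                        # packet size used in Figure 4
for ber in (1e-6, 1e-5, 1e-4):
    print(f"BER = {ber:.0e}  ->  PER = {per(ber, n):.3f}")
# At BER = 1e-4, PER is already ~0.28: far worse than the BER.
```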
The (64,57)^2 TPC simulation results show exceptional performance, coming within 1.2 dB of the theoretical Shannon capacity limit for antipodal modulation at a BER of 10^-6 and a code rate of k/n = 3249/4096, or about 0.8. Larger block sizes can narrow this gap even further.
Benefits Of Using FEC
FEC independently increases the reliability of data at the receiver. Within a system context, FEC becomes an enabling technology that the system designer can use in several ways. The most obvious advantage of using FEC is in power-limited systems. Through the use of higher-order signaling, however, bandwidth limitations also can be addressed.
In many wireless systems, the allowable transmitter power is limited, whether by a standard or by practical considerations. FEC makes it possible to transmit at much higher data rates if additional bandwidth is available.
A common requirement, for example, is to increase the throughput in a power-limited system. Assume that the desired QoS is a BER of 10^-6. Without coding, the receiver requires a signal-to-noise ratio corresponding to an Eb/No of 10.5 dB. If the (64,57)^2 TPC is used, this QoS can be maintained at an Eb/No of 3.2 dB, a 7.3-dB improvement. Therefore, the same signal energy can be "spread out" across 5.3 times as many information bits, with the FEC ensuring the required QoS.

To transmit 5.3 times as much data, however, 5.3 x (4096/3249), or 6.68 times, as much bandwidth is required. If that bandwidth is available, the throughput can be increased by a factor of 5.3 with no increase in transmitter power.
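The arithmetic behind these factors is straightforward, as the short sketch below shows (intermediate values are rounded to one decimal place, as in the text):

```python
# Arithmetic sketch for the throughput example: convert the 7.3-dB
# Eb/No improvement to a linear factor, then include the
# (4096, 3249) code-rate overhead to get the bandwidth expansion.

gain_db = 10.5 - 3.2                   # coding gain at BER = 1e-6
factor = 10 ** (gain_db / 10)          # ~5.37x; the text rounds to 5.3
bandwidth = 5.3 * (4096 / 3249)        # ~6.68x bandwidth for 5.3x data
print(f"throughput x{factor:.2f}, bandwidth x{bandwidth:.2f}")
```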
Most network systems are designed around the transport of data packets. When the channel causes even a single bit error in a packet, the packet must be discarded and retransmitted. For those systems, the code-block error rate, also known as the packet-error rate, is used to compare the performance of a system with FEC against one without it.
For a CER of 10^-5, an uncoded system requires an Eb/No of 12.25 dB, while a system employing the (64,57)^2 TPC achieves a similar CER at an Eb/No of 3.4 dB. This is an 8.85-dB improvement, or a nearly eightfold decrease in required power. This means lower-power amplifiers and smaller antennas can be used, both of which can significantly reduce cost.
Consider a scenario that requires an increase in the battery life of a portable wireless system and, thus, a reduction in the transmit power. Assume that the desired QoS is a BER of 10^-6. Using the (64,57)^2 TPC again ensures this QoS at an Eb/No of 3.2 dB, so the transmit power can be cut back by a factor of 5.3.
To maintain the data rate, a bandwidth expansion of 26% (a factor of 4096/3249) is required. If no additional bandwidth is available, the transmit power can instead be cut back by a factor of 6.8, at the cost of a data rate reduced by 21% (1 - 3249/4096). If battery life isn't a concern, transmit at full power; in this case, the FEC is used to extend range.
If the 21% reduction in data rate is acceptable, the range can be increased by 160%. Without powerful FEC, either a higher-power amplifier or a larger antenna would be required. Not only can these increase the cost of the system, but in many systems they simply might not be feasible.
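The same coding gain can thus be traded several ways. The sketch below reproduces the power, rate, and range figures above, with the range estimate assuming free-space, inverse-square propagation:

```python
# Arithmetic sketch for the power/range trade-offs. The 7.3-dB
# gain can buy battery life or range; range assumes free-space
# (inverse-square) propagation.

power_cut = 10 ** (7.3 / 10)            # ~5.4x (the text rounds to 5.3)
no_bw_cut = power_cut * (4096 / 3249)   # ~6.8x if bandwidth stays fixed
rate_loss = 1 - 3249 / 4096             # ~21% lower data rate in that case
range_gain = no_bw_cut ** 0.5           # ~2.6x range under inverse-square loss
print(f"power cut x{power_cut:.1f} or x{no_bw_cut:.1f}, "
      f"rate -{rate_loss:.0%}, range x{range_gain:.1f} (+{range_gain - 1:.0%})")
```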
Eliminate Some Amplifiers
Long-haul fiber links use a number of optical amplifiers along the path. These amplifiers add to the cost and consume electrical power. By applying FEC at the two ends of the link, the distance between amplifiers can be increased, minimizing the number required. Employing FEC will either increase the bandwidth or reduce the throughput, but high-rate codes (rate = k/n > 0.75) minimize this effect while still yielding good coding gain.

To date, the code of choice for this application has been either no coding or Reed-Solomon codes, which were the only practical, available codes capable of the speeds fiber requires (> 1 Gbit/s). With the commercial availability of TPC decoder chips, the previous state-of-the-art Reed-Solomon technology can be bettered by nearly 4 dB, representing a significant reduction in the number of needed amplifiers.
If bandwidth limitation is the issue, higher-order modulation can be used in conjunction with FEC to achieve higher throughput while keeping the transmit power and bandwidth constant. Consider an uncoded system implementing QPSK modulation with a desired QoS of a BER of 10^-6. The required Eb/No is 10.5 dB. Because QPSK carries 2 bits per channel symbol, the required Es/No (where Es is the energy per channel symbol) is 13.5 dB.
Using a higher-order modulation, such as Gray-coded 16-QAM, while limiting the channel to the same average Es/No requires the same bandwidth and transmitter power. But the closer symbol spacing significantly degrades the QoS, to a BER of roughly 10^-2. A coded system implementing the (64,57)^2 TPC can take the QoS to well below a BER of 10^-15, which, for all practical purposes, is error-free. Each transmitted symbol now carries 3.17 information bits on average, a factor of 1.59 more than the uncoded QPSK scheme. In this situation, the QoS also improves significantly.
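The numbers in this example follow from two simple relations: Es/No equals Eb/No plus 10 log10(bits per symbol), and the information carried per coded 16-QAM symbol is 4 bits scaled by the TPC code rate. A quick sketch:

```python
# Arithmetic sketch for the higher-order-modulation example:
# Es/No = Eb/No + 10*log10(bits per symbol), and the coded
# 16-QAM link carries 4 bits/symbol scaled by the TPC code rate.
from math import log10

ebno_db = 10.5                        # uncoded QPSK at BER = 1e-6
esno_db = ebno_db + 10 * log10(2)     # QPSK: 2 bits/symbol -> ~13.5 dB
info_bits = 4 * (3249 / 4096)         # coded 16-QAM: ~3.17 bits/symbol
print(f"Es/No = {esno_db:.1f} dB, {info_bits:.2f} info bits/symbol, "
      f"x{info_bits / 2:.2f} vs uncoded QPSK")
```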
Code division multiple access (CDMA) spread-spectrum systems benefit greatly from the use of FEC. Different users in the system appear as "noise" to any one user. Use of FEC can greatly increase the number of simultaneous users that the system can support. In fact, maximizing the number of simultaneous users in a CDMA system requires the most powerful code available.
A low-rate FEC code combined with a spreading code that uses fewer chips per bit (reduced processing gain) is preferred to a higher-rate FEC with greater processing gain. Practical considerations, however, limit how low the FEC code rate can usefully go.
With the use of powerful FEC, the channel becomes a relatively noisy place. To achieve the full benefits of FEC, synchronization hardware for both carrier and symbol timing recovery must be able to operate in such an environment. This can be quite challenging, particularly if the modem is the burst type that's popular in packet-data systems.
The older Reed-Solomon/Viterbi (RSV) concatenated codes are essentially block codes, and the blocks can be quite large. The result is significant latency, because the decoder can't generate output information bits until the entire block is received. In RSV systems, the larger block sizes can cause unacceptable latencies.

TPCs, though, can outperform an RSV code at a significantly shorter block size and, therefore, reduce latency. Attaining very powerful performance at low bit-error rates still typically requires a large Turbo Code block, which does increase system latency. But systems that operate at very low bit-error rates are generally high-data-rate systems, so the latency penalty is small.
Research remains active in the quest for practical codes with even greater coding gains. Still, the fact that Turbo Codes achieve gains rather close to the Shannon limit means future improvements are destined to be incremental.
Turbo Convolutional Codes, or TCCs, exhibit an error floor: their BER curves flatten out, limiting performance at very low bit-error rates. Lowering or eliminating this error floor is a very active area of study, because it restricts the use of TCCs in many systems.
TCCs and TPCs have slightly different properties, which make them well suited to different applications. The table in this article can serve as a guide.
The performance that Turbo Codes can achieve guarantees they're here to stay. As more communication-system designers become familiar with the capabilities and design opportunities Turbo Codes offer, they will find their way into more and more systems. Third-generation (3G) wireless systems are just one example of systems slated to use Turbo Codes.
Many new communication systems are being designed with some form of Turbo Code FEC. Where high quality of service, low overhead, and high data rates are required, as in many satellite systems and packet networks, TPCs are an ideal choice. Many older systems are being retrofitted with Turbo Codes where possible. Undoubtedly, future systems will use Turbo Codes, perhaps to the exclusion of all other types of FEC.