Wi-Fi is continuing its march into the center of connectivity for all things Internet and broadband, and service providers are starting to feel the pain points of wireless connectivity and systems. Because Wi-Fi is the last link between the network and the user's device (phone, tablet, laptop, or TV), wireless performance directly shapes the user's perception of Internet performance. Simply put, many users don't differentiate between a wireless problem and an Internet service problem, leading to service calls and truck rolls. That's especially true if the service provider supplied the Wi-Fi access point as part of the service. Service providers are keen to improve Wi-Fi performance and lower support costs, and Wi-Fi testing is central to preventing unnecessary performance problems in deployments.
In all testing, several key factors reign supreme: repeatability, automation, and realism. The tests performed on the device under test need to be well aligned with the deployment scenarios for that device. Testing a Wi-Fi access point with all the stations placed 1 m away, with an ideal antenna orientation, might produce great peak performance results but won't reveal much about the access point's expected field performance. Likewise, it's critical that test results be repeatable, which increases confidence in the measurements. I promise not to dive into an explanation of confidence intervals in measurements, but if we consider both realism and repeatability when developing test methodologies and implementations for Wi-Fi, devices passing the testing should exhibit improved performance in real deployments.
A simple list of common problems in Wi-Fi deployments includes issues with range, throughput, noise mitigation, airtime fairness, and interoperability. We can also add roaming and mesh to this list if we consider that service providers will be turning to whole-home solutions for future offerings. Given this list, the next step is to develop test cases for each category, such as measuring a Wi-Fi access point's performance as the range (distance) between the station and the access point increases. That test can actually be broken into two discrete cases: one where the range is fixed at each test point (that is, the user's device isn't moving), and one where the range increases or changes throughout the test (that is, the user's device is moving). Both cases occur in real deployments and yield different performance metrics.
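As a rough sketch, the two range cases can be expressed as attenuation schedules the test harness would step through: a stepped sweep for the stationary case and a continuous ramp for the moving case. The step sizes, rates, and limits below are illustrative assumptions, not values from any published test plan.

```python
def stepped_schedule(start_db, stop_db, step_db):
    """Fixed-range case: discrete attenuation points, each held while a
    full measurement runs (the device is 'stationary' at each point)."""
    points = []
    atten = start_db
    while atten <= stop_db:
        points.append(atten)
        atten += step_db
    return points

def ramp_schedule(start_db, stop_db, rate_db_per_s, duration_s):
    """Changing-range case: attenuation sampled at 1-s ticks of a
    continuous ramp (the device is 'moving' during the measurement)."""
    return [min(start_db + rate_db_per_s * t, stop_db)
            for t in range(duration_s)]

print(stepped_schedule(10, 40, 10))   # -> [10, 20, 30, 40]
print(ramp_schedule(10, 40, 5, 4))    # -> [10, 15, 20, 25]
```

Keeping both cases in one schedule format makes it easy to run the same traffic and measurement code against either, which helps with the repeatability discussed above.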
With our first two test scenarios defined, we now have to consider the implementation, which is where repeatability becomes critical. In today's world, open-air wireless testing is nearly impossible unless you have a lot of physical space from which you can keep everyone else (and their devices) away, which is a far cry from a typical lab or office environment. Working in wireless isolation chambers solves the issue of interference from existing wireless networks, but it also creates a new set of challenges. Those challenges include reflections inside the chamber, creating a realistic wireless channel between the access point and station, and adding deterministic levels of interference. At the UNH-IOL, we implement all our wireless testing in isolation chambers, which is key to the repeatability of our results.
We’ve worked closely with octoScope to develop test methods that realistically recreate real-world scenarios such as those described above. These methods include use of a channel emulator, multiple near-field antennas, and individually controlled attenuators to create the multipath environment (Figure 1). This path (channel) models the typical home or small-business environment while allowing precise control over the attenuation (distance) between the access point and station. It’s also possible to swap the station for test equipment that can act as multiple stations simultaneously, further increasing the realism of the test while maintaining control over the test environment and operation.
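To give a feel for why attenuation can stand in for distance, the standard free-space path-loss relation maps an attenuator setting in dB to an equivalent open-air separation at a given carrier frequency. This is a simplified sketch (real homes add multipath and wall losses on top of free-space loss, which is what the channel emulator models):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

def equivalent_distance_m(atten_db, freq_hz):
    """Invert FSPL: the open-air distance a given attenuation emulates."""
    return 10 ** (atten_db / 20) * C / (4 * math.pi * freq_hz)

# Example: 1 m at 2.437 GHz (2.4-GHz channel 6) is roughly 40 dB of loss.
print(round(fspl_db(1.0, 2.437e9), 1))  # -> 40.2
```

In practice the programmable attenuators are swept over a range like this to walk the station "away from" the access point without anyone touching the chamber.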
In range/throughput testing, we measure throughput while also examining total latency and the impact on higher-layer protocols. Specifically, TCP and UDP performance will differ depending on physical-layer performance, latency, and jitter. Measuring both protocols helps provide a complete picture of the system’s performance and can uncover potential problems. Ideally, this type of lab testing uncovers problems before devices reach the field, where troubleshooting costs increase. The lab testing can prove a device meets a predetermined set of performance requirements across a wide variety of test conditions simulating the real world. With repeatable testing, it’s possible to quickly spot regression issues or compare new devices to older models. This ensures users get access to the best possible devices and cuts down on the service providers’ support costs. A webcast described in a web-exclusive article1 provides more information.
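Because the sweeps are repeatable, regression checking can be as simple as comparing a new build's throughput-versus-attenuation curve against a stored baseline. The data and the 10% tolerance below are made-up example values for illustration:

```python
def find_regressions(baseline, candidate, tolerance=0.10):
    """Return the attenuation points (dB) where the candidate's throughput
    falls more than `tolerance` (fractional) below the baseline."""
    return [atten for atten, base_mbps in baseline.items()
            if candidate.get(atten, 0.0) < base_mbps * (1 - tolerance)]

# Example curves: attenuation (dB) -> measured TCP throughput (Mb/s).
baseline  = {20: 480.0, 40: 310.0, 60: 95.0}
candidate = {20: 475.0, 40: 250.0, 60: 94.0}
print(find_regressions(baseline, candidate))  # -> [40]
```

Flagging the exact test point that regressed (here, mid-range at 40 dB) is far more actionable than a single pass/fail number, since it points at where in the home the user would actually notice the change.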
Reference
1. Lavoie, Lincoln, “UNH-IOL addresses Broadband Forum’s Wi-Fi test plan,” EE-Evaluation Engineering Online, May 30, 2018.
About the author
Lincoln Lavoie is a senior engineer and acts as an industry lead for the executive steering body at the University of New Hampshire InterOperability Laboratory (UNH-IOL). In this role, he is responsible for the technical management of the broadband access technology group, including NFV, Wi-Fi, DSL, Gfast, GPON, and PoE. In addition to his duties with the UNH-IOL, he participates in many industry organizations, including the Broadband Forum (BBF) and OPNFV.