Harness Robust Debugging Techniques To Improve Embedded Linux Systems

March 19, 2001
You can use Linux and still employ a methodology that includes all of the different phases of the ever-critical debugging process.

A growing number of embedded developers are experimenting with the Linux kernel and system services as a basis for new application development. But those developers embarking on the use of Linux as a target platform are faced with a number of debugging challenges. It's not easy to debug applications and drivers in a new operating-system environment and still meet deployment deadlines. If they establish and follow a suitable debugging methodology, however, embedded developers can make the most efficient use of their time and meet time-to-market requirements.

Developers need a methodology for working at different phases of the debugging process and for providing software and hardware tools that support each phase (Fig. 1). The technique would incorporate native-code debugging, simulation, host/target debugging using the Joint Test Action Group (JTAG) port or an in-circuit emulator (ICE), read-only-memory (ROM) monitor debugging, and debugging in conjunction with the real-time operating system (RTOS). Furthermore, the methodology should be fully supported with debugging tools from within a single software environment to give developers the power and convenience necessary to deliver fully reliable applications (Fig. 2).

A number of variants of Linux are available for embedded-system use, from standard kernels to those modified to support hard real-time applications. Of course, it's possible to perform certain kinds of embedded development by implementing some standard Linux distributions, such as Red Hat and Caldera. Individual developers, though, would have to customize the distribution to get the features and footprint appropriate for their projects, and possibly rewrite drivers or other code for better response.

At the next level of sophistication, several versions of Linux have been optimized for embedded systems, primarily by refining the standard Linux scheduler, tuning Linux device drivers, and slimming down the overall distribution to include only the most valuable features for embedded applications. These small-footprint embedded versions include variants derived from standard Linux, like Hard Hat Linux, ETLinux, LEM, and µClinux.

These distributions are best thought of as soft real-time operating systems. They serve a large part of the market's requirements through their ability to run in resource-limited environments with a measure of real-time response. For many projects, this represents an acceptable Linux solution.

The last method of adapting Linux for embedded systems is inserting a second operating-system kernel into the software architecture. This technique pairs the standard Linux kernel with a small, dedicated kernel that handles time management. The dedicated kernel runs directly on the x86 processor, independently of the Linux kernel. Layered above the real-time kernel, the Linux kernel shares processor time with the real-time tasks. In other words, Linux itself runs in the background, has to satisfy only soft real-time demands, and executes only as long as no real-time task is active. In this model, the real-time kernel takes over the tasks that require hard real-time responses.

This approach offers the potential of getting a hard real-time OS with many of Linux's advantages. Among the offerings available in this hard real-time category are RTLinux and RTAI (the Real-Time Application Interface) for Linux.

Not unexpectedly, the technique has both an advantage and a downside. On the positive side, it permits the integration of Linux features, like the user interface, communications protocols, and other services, into embedded systems while a separate, real-time kernel manages real-time capability. The downside of this technique is that adding a second OS, regardless of its putative real-time characteristics, introduces a second set of RTOS-specific application program interfaces (APIs) and potentially starves the applications running in the "normal" Linux environment.

GNU Tools May Fall Short As for development tools, the GNU (GNU's Not Unix) tools, which are already popular with embedded developers, are "naturals" for use with embedded Linux. These tools could be a good solution for many situations as they make rebuilding kernels and parts of applications easier. But if a developer depends on strong support because the embedded system needs special tool features, things get complicated.

Unfortunately, little documentation is available for GNU tools in such cases, which means finding a solution might be extremely time-consuming. Support contracts with GNU-tool distributors are a potential remedy, but they're not cheap. Obviously, there's no way around a certain financial cost in this area. GNU tools are fine for standard applications on native systems, but they're far from optimal for embedded use.

Because of Linux's desktop heritage, most embedded Linux development takes place using the x86 architecture as the target. But traditional embedded processors, such as Motorola's ColdFire and 68K, are starting to gain popularity as embedded targets. In the vast majority of cases, the host development system is a Windows PC. In a few instances, it may be a Unix system (usually Sun Solaris), or even a Linux system. This relative homogeneity of host systems makes it possible to outfit a robust embedded tool chain for Linux target development.

Most embedded programmers start with the GNU debugger (GDB). It's a standard debugger that enables programmers to start programs (specifying anything that might affect program behavior), cause the program to stop on specified conditions, and examine what has happened, as well as change things in the program.

GDB wasn't designed as an embedded debugger, though, and it has a number of limitations, including a poor user interface and the inability to do effective host/target debugging. To help GNU users work around these issues, commercial vendors like CAD-UL have developed filters that permit developers using the GNU development tools to bypass GDB and use other debuggers instead (Fig. 3). Once the application has initially been written, a robust debugging cycle would consist of the following steps:

  • Native debugging on the host system
  • Simulation on the host system using a target simulation environment
  • Application debugging on the target
  • Debugging on the target with application in ROM
  • Debugging and testing with RTOS and all support software in place on the target

After generating the Linux-based application with the GNU development tools (such as the GNU C/C++ compiler, assembler, and linker), developers typically perform initial debugging on the host, either inside the development environment or directly on the host system itself.

Frequently when embedded development uses a proprietary RTOS, native debugging on the host simply isn't possible. But the use of Linux as the target means many options exist for host debugging. For example, if the development platform is Linux, a version of Unix, or another OS compliant with POSIX (the Portable Operating System Interface), at least parts of the application may be able to run natively. This can be useful as a quick reality test of the application, without respect to its performance and reliability on the target. Even if the host platform is Windows, it may be possible to utilize a multiple-OS solution, such as Lin4Win or VMware, to run a Linux application in the native environment.

Some debugging technologies make it possible to conduct debugging with commercial tools by linking the application with native Linux libraries, preparing a portable executable file, and loading that file into the debugger. This lets the application run using the same calls and libraries as it would implement on the target Linux device—but with the convenience and control of the development platform.

Along these same lines, simulating the application behavior and performance on the host system offers a more detailed level of information on the application and concurrently retains control over the overall environment. If a target simulator exists for the host platform, developers can continue testing and debugging their application while still working on the host development platform.

This adds to native debugging by letting the application perform I/O with hardware that isn't yet available or is difficult to configure. The device simulator permits the development of software drivers when a target system isn't available. It enables the developer to simulate the behavior of real hardware devices, such as a universal asynchronous receiver/transmitter (UART) or a transmission control protocol/Internet protocol (TCP/IP) connection. With commercial device simulators, users can select the appropriate device simulator for their application and choose different environments to run a wide variety of tests.

Simulators let developers write, compile and build, download, and debug code directly on a host system without a real target. Simulation software usually supports the handling of device simulators, including an I/O console. Many simulators permit users to generate data from a virtual device as input for their application. Users also can display values to check if their application is sending the right data to their peripheral device.

Simulation Validates Drivers I/O simulation works together with device simulation to validate an application's drivers. Developers, then, can implement a data generator or a data receiver to represent actual data transfers. By employing this feature, developers can write their own data analysis and statistics programs to examine errors that are normally hard to find on a real target. Additionally, hardware interrupts can be processed to simulate reality, allowing developers to test far more extensively in the simulated environment, prior to actually downloading to the target.

Once the Linux application has been built and tested natively and on a simulator, the next step in the debugging methodology is to download the created image to the target. The loadable, or ready-linked, program comes in the Executable and Linkable Format (ELF), or a.out format. In the Linux world, ELF is considered standard. It includes debug symbol information that serves as an interface between the program counter on the processor and the relevant line on the C/C++ or assembler level. This debug symbol information is typically in the Stabs/Stabs++ format and serves as the basis for debugging the application on the target.

Typically, debugging on the target involves implementing a JTAG port or an ICE. For debuggers that support ICE devices, accessing ICE-specific information, such as ICE status information, overlay memory, or ICE firmware, is possible. The event system at the heart of an in-circuit emulator lets developers make real-time traces of executed code. Many embedded developers don't have access to ICE equipment, however, so they seek alternative ways to debug on the target. Many turn to interacting with the application on the target through the JTAG port.

When no serial interface or TCP/IP connection is available, the JTAG communication port lets developers debug embedded applications on the target side. Many embedded debuggers let developers request and set JTAG interface-specific information, like status information or JTAG interface firmware. Furthermore, features like memory tests can be defined and applied through the debugging tools directly on the target application. Such a debugging solution is the key to bridging a simulation with a full target debugging solution (Fig. 4).

The next step in the process employs monitor software. Monitoring tools normally download small agents to the target system to handle the communication between the debugger and the target application. A ROM monitor tool can connect to the embedded device, typically through a serial link, and provide complete control over the debugging process. It permits selective execution start and stop, and it includes full access to the peripheral control block registers and other processor features.

A ROM monitor on the target side is typically a high-priority interrupt service routine (ISR), which is invoked whenever an interrupt is generated through the serial connection. This happens when the debugger on the host side sends a command to the target system where the ROM monitor is installed. ROM monitors can be booted from either flash memory or electrically erasable programmable read-only memory (EEPROM), and they can be downloaded in conjunction with the application. Alternatively, the ROM monitor and the application can be linked together and loaded as one image from EEPROM or flash. However the monitor is used, it enables developers to obtain more-detailed information about the application executing on the target.

Testing The Entire Build Because the embedded application usually has to interact with other applications and the Linux target OS, the final stage of debugging entails testing the entire build on the target system. This permits developers to observe how the application behaves under actual conditions, using the Linux scheduler and sharing cycles with other processes.

While virtually all of the bugs should be out of the application at this time, both performance and scheduling issues will come to light through robust testing of the entire image. Because many target devices have resource constraints, the developer also will be able to determine how the image behaves within the confines of those constraints. Poor performance or unexpected application failures often result from testing the entire image on the target—even if there's a high level of confidence in the reliability of the individual software components.

At this stage of debugging, developers should use a debugger that can provide information on operating-system and environment-specific behavior and code components, such as task lists, thread lists, semaphores, queues, mailboxes, and OS status information. During this process, the developer should be able to set breakpoints in conjunction with specific task IDs and stop the system at defined times, like when a specific task invokes a special function. This makes it easier to locate inconsistencies in the embedded application.

During the final testing process, one aspect that's frequently overlooked is code-coverage analysis. While many embedded programmers extensively test their code prior to release, there usually isn't a level of confidence as to how much code was tested. Code-coverage tools provide a way to determine which tests execute which code paths and how much of the actual source code has been tested.

In some cases, developers have run a full series of test suites on applications and later discovered through code-coverage analysis that only 50% or fewer of the code lines were exercised. This means that far too much code is released without any testing. Code-coverage analysis provides a mechanism for quantitatively determining how well an application is tested and how stable the code is prior to its release.

While GNU's gcov offers basic coverage measurement, the investment in a commercial tool yields a better understanding of the testing status of the code, especially in team-development environments where managing the testing process is critical. It's less valuable for a single developer working on a small application, but it's becoming necessary in a team environment because managers and developers need to monitor the progress of testing.

Individual parts of this overall methodology are practiced by many embedded developers today. But with Linux as the target embedded-system OS, it's possible (and increasingly necessary) to develop a more robust and in-depth testing and debugging methodology. Employing an open-source operating system offers significant opportunities in flexibility and licensing costs, but it also adds risk to the reliability and increases the complexity of the overall system. It's important to manage these and other risks through a well-defined methodology, supported by a comprehensive tool chain.

Some of these steps might seem less important to a specific application or end use, but there are no shortcuts to delivering reliable software on a new and complex operating-system platform. The advantages of Linux will be realized only by putting the tools and processes in place to see a project through to a successful completion.
