
As AI Matures, Demand Rises for New Edge Capabilities

Oct. 12, 2023
Artificial intelligence belongs on the edge, where it can do things that are beyond the scope of people.

This article is part of the TechXchange: AI on the Edge.

What you’ll learn:

  • Why AI is moving to the edge.
  • Fundamental AI/edge concepts and talking points.
  • Things to consider before deciding to attempt edge AI.

Edge computing has become a crucial part of the IT topology for many organizations. It handles locally the tasks that need split-second, real-time response, and it digests data at its source rather than consuming bandwidth by sending low-value data elsewhere for processing. Now, as AI applications rapidly proliferate, organizations need to consider how to strengthen the edge and equip it for this new opportunity.

Use cases for edge AI range from image classification and object detection to language processing, content analysis, and moderation—even analysis and recommendations for resource extraction processes and mining operations.

Determining the Compute Destination

Edge computing, an overarching term for a wide range of specific types of computing deployments outside the data center, recognizes that neither the cloud nor the data center is the right place to handle computing for many specific applications. Factory automation, for example, often requires near-real-time decisions. And it can produce vast amounts of sensor data that’s costly and inconvenient to move and process centrally.

Therefore, the answer is to provide more compute capability where it’s needed, spoken of generally as the “edge” of the enterprise.

Many of the most promising artificial-intelligence (AI) applications are also centered far from the data center and, due to latency, too remote from the cloud for either to be a viable way to deliver the necessary computing power. An oft-cited example is the autonomous vehicle. Efficiency and safety demand that many AI decisions, such as object avoidance, tolerate essentially no latency. They must be made locally and in real-time, at the edge.

AI at the edge includes examples like that, as well as industrial-automation decisions such as when to best switch production between machines to accommodate needed maintenance. Or, in a retail context, deciding what combination of offers to direct to a consumer’s smartphone, or what on-site display to activate to further engage that consumer.

AI Hardware and Connectivity at the Edge

The support of AI applications at the edge requires many of the same things as any kind of edge computing, but almost always needs to include additional and specialized computing.

For most organizations, this involves harnessing graphics processing units (GPUs), processors that are well adapted to, and widely used for, processing highly parallel streams of data such as graphics. So, GPUs aren’t completely exotic. They already exist in many architectures and environments and work well for many AI requirements.

Whether a GPU or more traditional CPU, AI edge demands appropriate processing power, storage, and connectivity, and in many cases, physical robustness not usually required in more traditional computing environments. That can start with raw sensor data, ranging from sounds and vibration to moisture, or data on heat and light.

To a greater extent than most computing, AI applications depend on huge amounts of data and algorithms that can digest such data. Within the edge, AI applications must efficiently gather data locally with perhaps some data from elsewhere (for example, continual updates about market demand to help an AI make optimal decisions).

What About Software and Memory?

On the software side, Linux, in the form of virtual-machine (VM) instances or Docker containers, is a common deployment option. Such VMs or containers are often provided by the hardware companies that offer GPUs.

However, a CPU typically still runs the show, handling I/O and feeding data to the GPU(s). Unlike some edge applications that may not require the most powerful hardware, AI typically works best with more advanced processors.
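Before dispatching work to a GPU, the CPU-side software has to determine whether one is actually present and fall back gracefully if not. A minimal sketch of that decision, using the presence of the `nvidia-smi` driver utility as a cheap (and admittedly imperfect) proxy for GPU availability:

```python
import shutil

def pick_device() -> str:
    """Choose an inference device: prefer a GPU if the NVIDIA
    driver tooling is on the PATH, otherwise fall back to the CPU."""
    # nvidia-smi ships with the NVIDIA driver; finding it suggests
    # a GPU is installed. Frameworks offer richer checks, but this
    # stdlib-only probe illustrates the CPU-as-orchestrator pattern.
    if shutil.which("nvidia-smi"):
        return "cuda"
    return "cpu"

print(pick_device())
```

In practice, an AI framework’s own device query would replace this probe, but the structure is the same: the CPU inspects the platform, then routes data accordingly.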

On the memory side, much depends on dedicated GPU memory, though to some extent the GPU and CPU can share memory. Dedicated video RAM (VRAM) is especially important for good AI results because it further reduces latency and supports parallelism.

The choice among specific GPUs is more complex, with many devices available from leading chipmakers. Each has a strength, which can play to the needs of a specific edge challenge.

CPU hardware also needs to be robust, as it oversees these more tactical activities and determines which data to summarize and apply in an AI application, or send on to the cloud or a data center. Blade computers or small towers may be perfect in office-like edge environments.

However, ruggedized machines, with NVMe/SSD storage instead of spinning disks and cooling systems that don’t depend on circulating ambient air, will better meet the realities of much of the edge.

Other Considerations

AI is usually data-intensive and, depending on the specific application, may require greater bandwidth to communicate with storage, compute, and potentially sensors. Thus, systems with more USB ports and/or Bluetooth and Wi-Fi are desirable. In industrial settings, legacy communications may need to be accommodated, such as RS-232, RS-485, or RS-422 serial links alongside Ethernet, perhaps using protocols like Modbus or EtherNet/IP.
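Accommodating a legacy protocol like Modbus often means handling its framing directly. For instance, every Modbus RTU frame ends with a CRC-16 checksum. A self-contained sketch of that checksum (initial value 0xFFFF, reflected polynomial 0xA001), applied to a read-holding-registers request; the specific unit and register addresses here are illustrative:

```python
def modbus_crc16(frame: bytes) -> int:
    """CRC-16/MODBUS: init 0xFFFF, reflected polynomial 0xA001."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Example: read one holding register from unit 1 (function 0x03).
request = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x01])
crc = modbus_crc16(request)
# Modbus transmits the CRC low byte first.
wire_frame = request + bytes([crc & 0xFF, crc >> 8])
# Recomputing over payload + CRC yields zero, confirming integrity.
assert modbus_crc16(wire_frame) == 0
```

A production deployment would typically lean on an established Modbus library rather than hand-rolled framing, but the calculation shows the kind of low-level accommodation legacy links require.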

Storage

Some edge applications, such as retail, “live” in a comparatively clean and stable environment and perform well with mainstream disk storage. However, many edge situations will benefit from the ruggedness, and the relative resilience to power interruptions, provided by solid-state persistent storage such as NVMe drives.

Reliability

AI applications that involve any high-risk system or activity should be made as fault-tolerant and robust as possible. This can be accomplished through inclusion of battery backup and a variety of failover schemes.
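One of the simplest failover schemes the article alludes to is retry-then-degrade: attempt the primary action a few times, then fall back to a safe alternative. A minimal sketch, where `primary` and `backup` are hypothetical callables standing in for, say, a local inference service and a conservative rule-based default:

```python
import time

def with_failover(primary, backup, attempts=3, delay_s=0.1):
    """Try the primary action up to `attempts` times, with a brief
    backoff between tries, then fail over to the backup action."""
    for _ in range(attempts):
        try:
            return primary()
        except Exception:
            time.sleep(delay_s)  # brief pause before retrying
    return backup()

# Usage: a primary that always fails falls through to the backup.
def flaky():
    raise ConnectionError("inference service unreachable")

print(with_failover(flaky, lambda: "safe-default", delay_s=0.0))
```

For genuinely high-risk systems this would be one layer among several (battery backup, redundant hardware, watchdogs), but the retry-then-degrade structure recurs at every layer.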

Thinking it Through for Edge AI

Plainly, AI can be successfully implemented at the edge. A robust ecosystem of hardware and software supports such a decision. The bigger question is whether it makes sense for your particular challenge. And, if it is to be implemented, what are the goals, purpose, and limits of the AI application?

As with other IT innovations that have been mistaken for panaceas, AI is powerful and can deliver valuable results. But be sure to look inside “the black box” to define the exact problem you hope to address.

Success, with AI or with more conventional computing, depends on having the right data. Is there sufficient data available to “feed” an AI application? If not, can that data be obtained reliably and consistently?

The available data also needs characteristics that will support training. Incomplete or unrepresentative datasets may lead to unsatisfactory AI performance. As the old IT adage goes, garbage in, garbage out: AI systems are only as good as their training data. AI also needs to be effectively integrated with other systems. Have the means for accomplishing this been considered?

Answer these questions and weigh the costs and benefits carefully, and you will be well on your way to AI edge success.

Read more articles in the TechXchange: AI on the Edge.

About the Author

Alan Earls | Contributing Editor

Alan R. Earls has been reporting on and writing about technology for business and tech periodicals for more than 30 years. He is also a licensed amateur radio operator, KB1RLS.
