How to Choose an Edge AI Device

Aug. 11, 2021
With all the buzz surrounding edge computing these days, perhaps you're thinking it’s time to invest in intelligent edge technologies for your IoT network. What are the key factors in pinpointing the right platform for your system?

This article is part of TechXchange: AI on the Edge

What you’ll learn:

  • What is edge computing?
  • The benefits of adopting edge computing.
  • What to consider when choosing a platform.

Edge computing has become one of the most talked-about trends in technology, and with all the buzz, perhaps you're thinking it’s time to invest in intelligent edge technologies for your IoT network. However, before opening a purchase order for new edge devices, let’s discuss what edge computing actually is and does, and whether your application would even benefit from edge technologies.

Edge computing can add a great deal of flexibility, speed, and intelligence to IoT networks, but it’s important to understand that edge AI devices aren’t a panacea for every challenge facing smart network applications. At the conclusion of this article, after determining if edge technologies are a good fit for your application, we’ll discuss the top features and considerations that buyers should look for when evaluating edge AI devices.

What is Edge Computing?

Edge computing takes IoT to a higher level—at the edge of the cloud where raw data transforms into value in real time. It elevates the importance and governance of connected nodes, endpoints, and other smart devices by redistributing the data-processing effort throughout the network.

Edge computing is almost the exact opposite of cloud computing, where data flows in from distributed networks to be processed in centralized data centers, with the results often being transmitted back to the original distributed network to trigger an action or effect a change. However, there are costs associated with transferring large amounts of data over long distances. Those costs can be measured financially, but also in other critical ways such as in power or time.

This is where edge computing steps in. When power, bandwidth, and latency really matter, edge computing may be the answer. Unlike centralized cloud computing, where data may travel hundreds of miles to be processed, edge computing enables the processing of data at the same network edge location where the data is sensed, created or resides (Fig. 1). This means that processing latency becomes almost negligible, and requirements for power and bandwidth are usually also greatly reduced. 

One of today’s major enablers of edge computing is the way semiconductor manufacturers are increasing processing capability without large increases in power consumption. This means the processors sitting at the edge can do more with the data they acquire without consuming more power, allowing more data to stay at the edge rather than being transferred to the core. As well as lowering total system power, this shortens response times and improves data privacy.

Some of the technologies that benefit from this development include artificial intelligence (AI) and machine learning (ML), but these too depend on lowering the cost of data acquisition while increasing the level of data privacy. Both cost and privacy can be addressed through edge processing.

Emerging technologies such as AI and ML have traditionally required immense resources, far more than would normally be available in an endpoint or a smart device. Now, thanks to advances at both the hardware and software level, it’s possible to embed these enabling technologies into smaller, more resource-limited devices that sit at the edge of the network.

Evaluating Edge AI

Choosing a platform able to perform edge processing—which may include running AI algorithms or ML inference engines—requires careful evaluation. Simple sensors and actuators, even those that are part of the IoT, can be realized using relatively modest integrated devices. Increasing the amount of processing carried out at the edge will require a more powerful platform, probably using highly parallel architectures. Often, that means a GPU, but if the platform is too powerful it will become a burden on the limited resources that exist at the network’s edge.

It's also important to remember that an edge device is fundamentally an interface to the real world, so it will likely need to support some common interface technologies, such as Ethernet, GPIO, CAN, serial, and/or USB (Fig. 2). It may also need to support peripherals, such as cameras, keyboards, and displays.

The edge can also be a very different environment from a comfortable, climate-controlled data center. The edge device may be exposed to extremes in temperature, humidity, vibration or even altitude. This will have an impact on the device selected, as well as how it is packaged or housed.

Another important aspect to consider is regulatory requirements. Any device that uses radio frequency (RF) to communicate will be subject to regulations and may require a license to operate. Some platforms will comply "out of the box," while others may need more effort. Hardware headroom matters, too: once in service, an edge device is unlikely to receive a hardware upgrade, so processing power, memory, and storage must all be carefully determined during the design cycle to leave scope for future increases in performance.

Software is a different matter. Unlike hardware, it's possible to deploy software updates while a device is in the field. These over-the-air (OTA) updates are now commonplace, and any edge device will likely need to be designed to support them.

Choosing the right solution will involve a careful assessment of all these general points, as well as a close look at the application’s specific demands. Does the device need to process video data, or maybe audio? Is it dealing only with temperature, or is it also monitoring other environmental aspects? Does it need to be always-on, or will it sleep for long periods of time? Will it be triggered by an external event? Many of these questions apply to all technologies deployed at the edge, but as the level of processing increases and the expectation on the output rises, it will be necessary to expand the list of requirements.

Benefits of Edge Computing

It's now technically possible to put AI and ML into edge devices and smart nodes, opening up major opportunities. Not only is the processing engine closer to the data source, but that engine can do much more with the data it collects.

There are real benefits to doing this. First, it can increase productivity, or the efficiency with which data is used. Second, it simplifies the network architecture, as less data is being moved around. Third, it makes proximity to the data center less critical. That last point may not seem too important if the data center is in a city and close to the action, but it makes a lot of difference if the edge of the network is a remote location, such as a farm or a water-treatment plant.

There's no denying that data moves quickly on the internet. Many people may be surprised to learn that a search query may travel twice around the globe before the results appear on screen. The total elapsed time may be a fraction of a second, which, to us, is virtually instantaneous. But to the machines and other intelligent devices that make up the internet of connected, smart, and often autonomous sensors and actuators, every part of a second can feel like an hour.

This round-trip latency is a real concern for manufacturers and developers of real-time systems. The time it takes for data to travel to and from a data center is not inconsequential, and it certainly isn’t instantaneous. Reducing this latency is a key objective of edge computing, working in conjunction with faster networks, which is where 5G will play its part. But the rollout of faster networks will not be able to compensate for the accumulated network latency we can expect as more devices come online.

Analysts predict there could be as many as 50 billion connected devices online by 2030. If each of those devices demanded high bandwidth to a data center, the networks would be permanently congested. If many of them operate in a pipeline, waiting for data to arrive from the previous stage, the total delay will soon become very apparent. Edge computing really is the only practical way to relieve congested networks.
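To get a feel for the scale of that congestion argument, here's a back-of-the-envelope sketch; the per-device data rate and the edge-filtering ratio are assumptions chosen purely for illustration:

```python
# Rough illustration of why tens of billions of devices can't all stream
# raw data to the cloud. Both rates below are assumed figures, not measurements.
DEVICES = 50e9                  # projected connected devices by 2030
RAW_KBPS_PER_DEVICE = 100       # assume a modest 100 kb/s of raw sensor data each
EDGE_REDUCTION = 0.99           # assume edge processing discards 99% of raw data

raw_total_tbps = DEVICES * RAW_KBPS_PER_DEVICE * 1e3 / 1e12   # kb/s -> b/s -> Tb/s
edge_total_tbps = raw_total_tbps * (1 - EDGE_REDUCTION)

print(f"raw upstream demand:  {raw_total_tbps:,.0f} Tb/s")
print(f"after edge filtering: {edge_total_tbps:,.0f} Tb/s")
```

With these assumptions, raw demand works out to 5,000 Tb/s, dropping to 50 Tb/s after filtering; the exact figures matter less than the two orders of magnitude between them.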

However, while there is a definite need for edge computing in general, the specific benefits will still be very much dependent on the application (Fig. 3). This is where the laws of edge computing apply. These laws will help engineering teams decide whether edge computing is the right option for their specific application.

The 4 Laws of Edge Computing

The first law is the law of physics; this one is immutable. RF energy travels at the speed of light, and light in a fiber-optic network propagates at roughly two-thirds of that speed. That’s the good news. The bad news is that neither can travel any faster. So, if the round-trip time still isn’t fast enough, edge computing may be the right choice.
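That physical floor is easy to quantify. Here is a minimal sketch, assuming light in optical fiber propagates at roughly two-thirds of its vacuum speed, of the best-case round-trip time to a distant data center:

```python
# Lower bound on round-trip latency imposed by physics alone.
# Assumption: light in optical fiber travels at roughly 2/3 of c in vacuum.
C_VACUUM_KM_S = 299_792                   # speed of light in vacuum, km/s
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3      # assumed propagation speed in fiber

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time over fiber, ignoring all processing delays."""
    return 2 * distance_km / C_FIBER_KM_S * 1000

# A data center 1,500 km away can't respond in under ~15 ms,
# no matter how fast its servers are.
print(f"{min_round_trip_ms(1500):.1f} ms")
```

No amount of server horsepower gets under this bound; only moving the computation closer does.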

A ping test provides a simple way of measuring how long it takes a packet of data to travel between two endpoints on a network. Online games are often hosted on more than one server, and gamers will ping the servers until they find the one with the lowest latency, that is, the shortest round trip the data can make. That is how crucial even fractions of a second can be for time-sensitive data.
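That server-picking behavior can be sketched in a few lines. Since a true ICMP ping usually needs raw-socket privileges, this illustration times a TCP handshake instead, a common practical stand-in; the hosts and port are placeholders for whatever your application talks to:

```python
import socket
import time

def tcp_latency_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Approximate round-trip latency by timing a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection is closed immediately; only the timing matters
    return (time.perf_counter() - start) * 1000

def lowest_latency_server(hosts: list[str], port: int = 443) -> str:
    """Pick the host with the shortest measured round trip, as a gamer would."""
    return min(hosts, key=lambda h: tcp_latency_ms(h, port))
```

A single handshake is noisy; real latency probes usually take the median of several samples.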

Latency doesn’t depend entirely on the transport mechanism, either. There are encoders and decoders at each end, and physical layers that must convert electrons into whatever energy form is being used and back again. All of this takes time; even with processors running at gigahertz speeds, that time is finite and depends on the amount of data being moved around.

The second law is the law of economics. This one is perhaps a little more flexible, but with demand for processing and storage resources soaring, it is also less predictable. Margins are always slim, but if the cost of processing data in the cloud suddenly goes up, it could prove the difference between making a profit or a loss.

The cost of cloud services starts with the cost of buying or renting the server, rack, or blade. This can vary based on the number of CPU cores, the amount of RAM or permanent storage needed, and the service level. Guaranteed uptime costs more than a service level with no guarantee. Basic network bandwidth may seem free, but if you need a guaranteed minimum level of bandwidth, expect to pay for it, which needs to be taken into account when evaluating cost.

In contrast, processing data at the edge isn’t subject to this kind of variable cost. Once the initial cost of the device has been incurred, the additional cost of processing any amount of data at the edge is virtually zero.
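That fixed-versus-variable distinction lends itself to a simple break-even sketch; every price below is a made-up assumption for illustration only:

```python
# Break-even point for a one-off edge device vs. ongoing cloud charges.
# Every figure here is an assumed value, not a real price.
edge_device_cost = 400.0     # one-off hardware cost per node, USD
cloud_cost_per_gb = 0.05     # assumed cloud transfer + compute cost, USD/GB
gb_per_month = 200.0         # data each node would otherwise send to the cloud

monthly_cloud_cost = cloud_cost_per_gb * gb_per_month      # USD per month
break_even_months = edge_device_cost / monthly_cloud_cost

print(f"edge hardware pays for itself in about {break_even_months:.0f} months")
```

Beyond the break-even point, every additional gigabyte processed locally costs effectively nothing.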

Data has value because it means or represents something. This highlights the third law, which is the law of the land. Anyone who captures information may now be subject to the data privacy laws that exist in the region where that data was captured. This means that even if you are the legal owner of the device capturing the data, you may not be allowed to move that data across geographical borders.

Examples include the EU Data Protection Directive and its successor, the General Data Protection Regulation (GDPR), as well as the Asia-Pacific Economic Cooperation Privacy Framework. Canada’s Personal Information Protection and Electronic Documents Act has been deemed adequate under EU data-protection law, while the U.S. Safe Harbour arrangement (since superseded) served a similar role.

Edge processing can overcome this. By processing the data at the edge, it never needs to leave the device. Data privacy is increasingly important in portable consumer devices; facial recognition on mobile phones uses local AI to process the camera image, so the data never leaves the device. The same can be true for CCTV and other security surveillance systems. Using cameras to monitor public spaces would typically mean the images are transferred to and processed by cloud-based data servers, which comes with data-privacy concerns. Processing the data in the camera is both faster and more secure, potentially simplifying or even eliminating the need for additional data-privacy measures.

Lastly, we need to consider Murphy’s Law, which states that if something can go wrong, it will go wrong. There is always something that can go wrong, even in the most carefully designed systems. Edge processing removes many of the potential points of failure associated with moving data over a network, storing it in the cloud, and relying on data centers for processing power.

Asking the Right Questions About Edge Computing

If an application can technically benefit from edge processing, there are still questions to be asked. Here's a short list of the most relevant:

  1. What processor architecture does your application run on? Porting software to a different instruction set could be costly and introduce delay, so moving up shouldn’t mean moving out.
  2. What kind of I/O do you need? This could be any number of wired and/or wireless interfaces. Adding them later would be inefficient, so this needs to be addressed early.
  3. What is the operating environment? Is it extremely hot, extremely cold, or both? Mars rover missions are a good, if extreme, example of “edge processing” in a hugely variable operating environment!
  4. Does your hardware need to comply with regulations, or be certified? The answer is almost certainly “yes,” so choosing a pre-certified platform can save time and money.
  5. How much power will it need? System power is expensive, in terms of unit cost and installation, so knowing how much is “enough” can be very beneficial.
  6. Is the edge device constrained to a form factor? This is more important in edge processing than many other deployments, so it needs to be considered early in the design cycle.
  7. What is the time-in-service? Is this going into an industrial application where it may need to operate for many years, or is the lifecycle measured in months?
  8. What are the system performance requirements, in terms of processing power, perhaps frames per second? What are the memory requirements? What is the language of the application?
  9. Is there a cost consideration? This is a tricky question, as the answer is always “yes,” but knowing what that cost limit is will help in the selection process.
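Question 8 can be made concrete with a little arithmetic. Here is a minimal sketch, with an assumed inference time, of turning a frame-rate target into a per-frame compute budget:

```python
# Turning a frame-rate requirement into a per-frame compute budget.
# The inference time below is an assumed figure, not a measured one.
target_fps = 30
budget_ms_per_frame = 1000 / target_fps          # ~33.3 ms to capture, infer, and act
inference_ms = 24.0                              # assumed ML inference time per frame
headroom_ms = budget_ms_per_frame - inference_ms # left for capture and pre/post-processing

print(f"per-frame budget: {budget_ms_per_frame:.1f} ms, headroom: {headroom_ms:.1f} ms")
```

If the headroom goes negative, either the platform needs more processing power or the frame-rate requirement needs to be relaxed.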

Conclusion

Edge processing is enabled by the IoT, but it’s much more than that. It’s driven by higher expectations than earlier generations of connected devices. At a low level, there are commonalities: the device will probably need to be low power and low cost, but it may now also need to provide a higher level of intelligent operation without compromising those power and cost targets.

Choosing the right platform is made easier by choosing the right technology partner. ADLINK has a broad portfolio of edge processing solutions and partners with a large number of companies offering complementary technologies. Entering into an ecosystem that has been developed around edge computing can give you the best chance of choosing the right edge computing platform for your AI-enabled application.

Read more articles on this topic at the TechXchange: AI on the Edge
