
The Growing Potential for AI in the Cloud and IoT

May 5, 2021
With continued advances in technology, as well as the increase in embedded sensors, artificial intelligence (AI) is becoming standard in engineering applications and projects.

The Cloud and IoT have transformed how we treat devices and the applications they support. Among the many pressures on the development of IoT devices and their supporting structures and functionality is the relentless demand for processing power at the Edge. This need exists both in user devices and in “Fog” Edge-level servers and infrastructures.

Implementing AI and/or machine-learning systems in computing solutions is one way to maximize processing power, especially in applications where image recognition and other pattern-based solutions are required. However, implementing AI brings an additional level of complexity that can challenge a development team. In addition, the industry is still grappling with related issues, ranging from what exactly AI is to where it truly adds value.

Designing with AI

To address the added complexity in AI-enabled systems, engineers need to create an approach that takes into account the entire design process. Practical and successful AI implementation requires focus on data preparation, modeling, simulation, and test, along with deployment as one complete workflow.
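As a rough illustration of treating those stages as one workflow, the sketch below chains data preparation, modeling, test, and deployment so that each stage gates the next. It is plain Python with hypothetical function names and a toy threshold "model," not any specific MathWorks toolchain:

```python
# Illustrative end-to-end AI workflow: data prep -> modeling -> test -> deploy.
# All names and the toy model are hypothetical.

def prepare_data(raw):
    """Clean the raw samples: drop records with missing values."""
    return [(x, y) for x, y in raw if x is not None and y is not None]

def train_model(data):
    """'Train' a trivial threshold classifier on labeled (value, label) pairs."""
    positives = [x for x, y in data if y == 1]
    threshold = sum(positives) / len(positives)  # crude decision boundary
    return lambda x: 1 if x >= threshold else 0

def simulate_and_test(model, test_data, min_accuracy=0.5):
    """Exercise the model on held-out data before it is allowed to ship."""
    correct = sum(1 for x, y in test_data if model(x) == y)
    accuracy = correct / len(test_data)
    return accuracy >= min_accuracy, accuracy

def deploy(model, approved):
    """Only deploy a model that passed simulation and test."""
    return model if approved else None

raw = [(0.9, 1), (0.8, 1), (0.2, 0), (None, 1), (0.1, 0)]
data = prepare_data(raw)              # data preparation
model = train_model(data)             # modeling
approved, acc = simulate_and_test(model, data)  # simulation and test
deployed = deploy(model, approved)    # deployment, gated on the test result
print(f"accuracy={acc:.2f}, deployed={deployed is not None}")
```

The point of the gating is the one Pingel makes below: testing is part of the workflow, not a step bolted on after deployment.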

One company addressing the AI integration issue is MathWorks, which offers engineers tools that supplement their own knowledge while incorporating AI into the design process. We recently spoke with Johanna Pingel, product marketing manager, about why engineers should focus on the complete AI workflow, the importance of each step in that workflow, and the tools that can support them in their AI process.

EE:  Johanna, people have a concept of AI, and it's kind of like the blind men and the elephant. It's based upon what aspect they're touching and what their perception of that tiny piece of the big picture is. What are your thoughts on that?

Johanna Pingel: I like that. I like the idea that AI can mean different things to different people. At MathWorks, we focus more on the engineer and what AI means to the engineer, but it can definitely vary depending on your experience with AI and your level of comfort with it as well.

EE: Well, to some, AI is a software issue. To others, AI is a hardware issue. To still others, it's not as much about the AI as it is about enhancing the edge-computing aspect, which is both a hardware and an RF issue, involving bandwidth, latency, and such.

Pingel: I think it's definitely a combination of everything. What we tend to do is really cater our conversations around AI based on the person that we're talking to at the moment. So if you want to talk about software, we'll talk about algorithms and we'll talk about models and we'll talk about model management. And if you want to talk about hardware, we'll talk about GPUs, CPUs, FPGAs, wherever you want the software to land. I think the key for engineers to keep in mind is that the AI normally lives in a bigger system. So while a lot of people focus on the model starting out, it turns into a larger system very quickly and you have to really have those considerations, the bigger picture in your mind from the start.

I think that testing and requirements are key to the whole story. My colleague Heather Gorr actually wrote about streaming specifically, and all of the considerations you have to have just based on streaming, and all of the nuances you have to take care of in those particular situations as well, where timing is everything. And then machine learning and deep learning are almost secondary to making sure that the streaming comes in and that the machine works as you expect.

However, the testing of that is the most important aspect, because you have to know that your machine is probably going to mess up. It's going to make mistakes. There are going to be issues with latency. There are going to be issues with data coming in. Your data's going to be messy. And you have to consider all of those things before deploying your system. So at MathWorks, we really talk about testing as one of the key components of the AI workflow. Rather than just focusing on modeling, or on deployment and speed and all of those other considerations, it's really about testing continuously throughout the process so that you don't have to go back and start from scratch.

EE: Well, you're preaching to the choir when you talk about tests with the Evaluation Engineering audience. So then let's look at the aspects. Because when we think about creating a workflow, there's a workflow both in front of and after deployment, are you talking about one side or both?

Pingel: Oh boy, I think it's definitely both, but when we talk about testing, we really want to emphasize to engineers, especially on the software side, that they need to test prior to deployment, that everything needs to be figured out so that you're not introducing any errors after deployment. Of course, once the system is deployed, then again you should be continuously monitoring and updating that software and making sure that your software can last for years and decades to come.

However, the testing and the simulation really happen prior to deployment, making sure that you understand all of the components that come together and that all of those components work exactly as you expect in every scenario and situation. And that's where simulation really comes into play. So at MathWorks, Simulink is a core component of our software, and we pride ourselves on the fact that Simulink will simulate all of the conditions you need in order to make the entire system work as you expect. So you can really have confidence, before you deploy, that everything is going to work as you expect.

EE: We see the way test and measurement and evaluation is migrating, because once upon a time, test and measurement was a thing you did as a step in the process. Now, as you point out, if I'm designing in a simulation environment and then I'm breadboarding with basically modules that I've already had predesigned and ordered from the various distributors and the like because I went through my simulation phase, then I'm doing a constant test during automated manufacturing for Six Sigma. So now in the case of the design philosophy of any product, be it a software product or a hardware product, I'm going to be simulating it as I'm thinking of it, I'm going to be testing it as I'm manufacturing it, and then for the higher end manufacturers in the field, I'm going to be doing over-the-air software updates and monitoring that product until it runs out of warranty and I can't sell them another one.

Pingel: So testing throughout the process is really important and just keeping in mind the requirements from the beginning. So even when you're collecting data and even when you're modeling and just trying out some prototypes with models, keep in mind the end result throughout and make sure that you're documenting all of those cases. So the other thing to keep in mind when you're talking about documentation and testing is the fact that you're going to go through a lot of iterations of models, and you're not going to get it right the first time at all. So it really comes down to model management as well as data management, and making sure that you have all of those components accounted for throughout the entire process so that you don't have to go back and say, "Okay, which system worked? Which model worked?"

So at MathWorks, we created a point-and-click tool that helps you with model management. It's called Experiment Manager, and it really keeps track of all of the data, all of the parameters, and all of the models you created, so you can go back and say, "Okay, which one is going to be my final product? Which one gave me the best results? This is the one I should move forward with."
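The bookkeeping idea behind a tool like that can be sketched in a few lines of plain Python (this is not Experiment Manager's actual API, just an illustration of recording every run's parameters and metric and then selecting the best; the numbers are made up):

```python
# Minimal experiment-tracking sketch (hypothetical; illustrates the idea
# behind experiment-manager tools, not any real product's API).

experiments = []

def log_experiment(params, metric):
    """Record one training run: its hyperparameters and resulting score."""
    experiments.append({"params": params, "metric": metric})

# Simulate a sweep over learning rates; the scores here are invented.
for lr, score in [(0.1, 0.82), (0.01, 0.91), (0.001, 0.87)]:
    log_experiment({"learning_rate": lr}, score)

# Pick the run with the best metric to move forward with.
best = max(experiments, key=lambda e: e["metric"])
print(best)  # {'params': {'learning_rate': 0.01}, 'metric': 0.91}
```

Because every run is logged, the question "which model worked?" becomes a lookup rather than a reconstruction effort.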

For example, let's talk about Caterpillar. What's interesting about Caterpillar is that when we talk about the AI workflow, we start with data preparation and data pre-processing, and this, in my opinion, is probably the key component of AI: making sure that the data you're going to put into your model is clean and works as you expect. So what Caterpillar did is they had all of this visual data that they wanted to be able to label and crop. The problem, of course, is that it takes a really long time to do this and get good results.

But it needs to be done. That's the thing you have to keep in mind: in AI, you have to clean and prepare your data, because if you don't, you're going to spend so much more time and eventually have to go back and do it anyway. So what Caterpillar wanted was to help automate the labeling of data, what we call ground-truth data, which reduces the need for people to do that by hand. Without the help of AI, an operator has to do that visually. So with the help of MathWorks, they were able to automate that process and create clean, labeled data that they can then use for AI modeling.
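The auto-labeling idea can be sketched as follows: a rough pretrained model proposes a label for each sample, and only low-confidence cases are routed back to a human annotator. Everything here (the stand-in model, the threshold, the samples) is hypothetical, not Caterpillar's actual pipeline:

```python
# Sketch of AI-assisted ground-truth labeling (all names hypothetical).
# A rough model pre-labels data; humans review only the uncertain cases.

def model_predict(sample):
    """Stand-in for a pretrained model: returns (label, confidence)."""
    return ("defect" if sample > 0.5 else "ok", abs(sample - 0.5) * 2)

def auto_label(samples, confidence_threshold=0.6):
    accepted, needs_review = [], []
    for s in samples:
        label, conf = model_predict(s)
        if conf >= confidence_threshold:
            accepted.append((s, label))   # trusted automatic label
        else:
            needs_review.append(s)        # route to a human annotator
    return accepted, needs_review

accepted, review = auto_label([0.95, 0.55, 0.10, 0.48])
print(len(accepted), len(review))  # 2 2
```

Even in this toy version, half the samples never need a human's eyes, which is where the time savings come from.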

Another example can be found at Voyage, which is developing a self-driving taxi, a completely different application than Caterpillar's. What I think is important here is that our customers are really using our tools for a variety of different things. You can use them for the entire workflow, you can use them for a piece of the workflow, and you can really mix and match however best suits your needs. In the case of Voyage, they're using our tools more for rapid design, iteration, and testing.

They were able to deploy a level-three autonomous vehicle in less than three months using Simulink, and the way they were able to do this is through simulation. They were able to take their AI model, put it into their complete system, and simulate that before deploying it onto the hardware that would eventually run on their autonomous vehicle. So that's just a really interesting story about compressing the time of testing and of moving from prototype to production. The other really interesting thing about that story is that they started with an example in the Automated Driving Toolbox as the starting point for their prototype. They were able to begin with an out-of-the-box example, add in their data and their scenarios, and use that as a starting point rather than starting from scratch.

EE: That's beautiful, because one of the aspects of intelligence in systems is intelligence in design, and that's finally becoming apparent to people. And there's an aspect that I would like to talk about. I think those two, since they're such nice, separate examples are enough as far as a case. I don't want to talk too long because I don't want the piece to get too long, but I would like to talk a little bit about the aspect of collaborative development.

The industry has been selling collaborative development technology for years, but it was mostly for large multinationals or remote professionals. It wasn't really a solution that everybody thought was necessary. And then COVID came, and it was the perfect storm for collaborative software environments; just think of all of the solutions that might still be fighting for an audience if circumstances hadn't forced us to use them. So how has collaborative development changed, in your view, in, say, the last year and a half?

Pingel: I think it's interesting. More and more, we're seeing that engineers are working alongside IT, machine-learning experts, and data scientists, and they all have to work together collaboratively. So I think that's a growing trend. Chances are, if you're designing deep-learning models, you're not doing it by yourself in a vacuum by any means. So you do need tools that allow you to collaborate with other aspects of the process. Let me give you an example of that, too.

Just recently, Lockheed came out with a user story where they're using AI and deep learning across their organization, in many different machine-learning and deep-learning applications. Of course, they have so many different groups within the organization that they really wanted to standardize and make sure they had governance over all of the AI and deep learning that was happening.

So Lockheed made the decision to pair MATLAB with Domino, which is a data-management/model-management company, among other things. Through that standardization, they found that they were able to improve collaboration, model management, and data management so that people across the entire workflow can work together. It is a larger company, of course, but I think it's just really exciting that people are moving in that direction and understanding the need for tools that help you collaborate not just across your team, but across the organization as well.

EE: So then, Johanna, do you have any final thoughts on AI development that you'd like to share with the audience?

Pingel: Absolutely. I've got two that I would like to make sure people are aware of. The first is that there's a lot of talk about open source and about the different platforms on which people are developing and executing models. The one thing I want to make sure everyone is aware of is the concept of interoperability across platforms. MathWorks takes this very seriously, and we want to make sure that we're developing a collaborative platform, not just within our organization, but across multiple platforms.

That means the ability to work with TensorFlow models and other models created on other platforms. One example we're really seeing here is that models may be developed in open source, but then, once again bringing this back to simulation and test, they need a way to actually be simulated across an entire system. So what our customers want to do is bring in open-source models and then run them within Simulink. That's a very popular use case we're seeing now, and we have the tools that allow the flexibility to move back and forth between platforms.
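As a loose illustration of the interoperability idea (not MathWorks' actual import path, which in practice typically goes through dedicated converters or exchange formats such as ONNX), one tool can serialize a model's parameters in a neutral format and another can reconstruct and run it:

```python
import json

# Sketch of cross-platform model exchange via a neutral format
# (the JSON layout here is invented for illustration).

# "Framework A" exports a tiny linear model's parameters as JSON ...
model_a = {"weights": [0.5, -1.25, 3.0], "bias": 0.25}
exported = json.dumps(model_a)

# ... and "Framework B" imports the same parameters from that text.
model_b = json.loads(exported)

def predict(model, inputs):
    """Dot product plus bias: the same math regardless of the tool."""
    return sum(w * x for w, x in zip(model["weights"], inputs)) + model["bias"]

print(predict(model_b, [1.0, 1.0, 1.0]))  # 2.5
```

The exchange format decouples where a model is trained from where it is simulated and tested, which is exactly the workflow Pingel describes.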

The final thing I want to talk about is simply getting started with deep learning and AI for the first time. Sometimes that can feel daunting, simply because there are so many resources out there. So MathWorks has created free online trainings, especially for people working from home, and it's important to us to help you get your feet wet. They're completely free, and they help you understand the basics and where to move forward from there. So I think it's really important to know that engineers, and anyone who wants to learn deep learning, can learn deep learning and be successful with AI.


About the Author

Alix Paultre | Editor-at-Large, Electronic Design

An Army veteran, Alix Paultre was a signals intelligence soldier on the East/West German border in the early ‘80s, and eventually wound up helping launch and run a publication on consumer electronics for the US military stationed in Europe. Alix first began in this industry in 1998 at Electronic Products magazine, and since then has worked for a variety of publications in the embedded electronic engineering space. Alix currently lives in Wiesbaden, Germany.

Also check out his YouTube watch-collecting channel, Talking Timepieces

