
Lions, Chatbots, and Generative AI, Oh My

June 5, 2023
The population at large is enamored—and worried—about generative AI. Are you?

This article is part of the TechXchange: Generating AI.

What you’ll learn

  • Where are we in the AI spectrum?
  • What do we really have to worry about when it comes to chatbots?
  • Should you be taking advantage of generative AI?

The challenge with artificial intelligence (AI) since its inception has been one of hyperbole versus actuality. AI in literature abounds with things like Asimov’s three laws of robotics and positronic brains. Androids and intelligent machines were once the stuff of science fiction, but these days we at least appear to have this capability.

For instance, we have Working Dog[Bot]s that can operate autonomously. The Ghost Robotics Vision 60 quadruped robot can climb stairs and even slide across ice. It’s able to recognize objects and knows how to find the charger when its battery runs low.

There are humanoid robots that can converse in multiple languages while maintaining eye contact. We have smart speakers that can answer questions and pull from a massive cache of data to let you know what time it is in Marrakesh.

The problem is that we’re still light-years away from anything close to an android like Data from Star Trek: The Next Generation or C-3PO from Star Wars. This isn’t to say we couldn’t build something that closely resembles those androids, but the results are rather disappointing once you challenge them with even simple tasks.

Worrying About Generative AI

Machine learning (ML) is a subset of AI, although many use the terms interchangeably. I do, often to the chagrin of readers. ML tools cover a lot of ground, and different machine-learning models and methodologies have been created to address a host of different tasks. Models that work well to identify things in images don’t necessarily work well when it comes to understanding the spoken word.

Utilization of focused ML models has changed the way we look at visual problems like advanced driver-assistance systems (ADAS), self-driving cars, and autonomous robots. Recognizing objects in real time and providing this type of information is relatively simple these days. Acting on that information is more of a challenge, though it’s being addressed, albeit with different models and approaches.

Generative AI is one of these approaches. Generative pre-trained transformer (GPT) models are a prominent class of large language models (LLMs). Applications built on them are collectively known as chatbots, the best known of which is ChatGPT.

The approach can be used for many applications, but those of note tend to be based on very large data sets, often scraped from the internet, and they can interact with queries posted by users. They provide an interactive interface that harkens back to ELIZA, the computer-therapist program created by Joseph Weizenbaum at MIT back in the 1960s. ELIZA didn’t use a GPT model, but rather a ruleset.
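
For flavor, here’s a minimal sketch of the ruleset approach: a fixed list of pattern-and-template rules with no learned model. The rules below are invented for illustration; Weizenbaum’s original script was far larger and more nuanced.

```python
import re

# A few ELIZA-style rules: (pattern, response template) pairs.
# These rules are made up for illustration only.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {}?"),
    (re.compile(r"\bi feel (.*)", re.I), "How long have you felt {}?"),
    (re.compile(r"\bbecause (.*)", re.I), "Is that the real reason?"),
]

def eliza_reply(text: str) -> str:
    """Return the response for the first matching rule, else a stock line."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please tell me more."

print(eliza_reply("I am worried about generative AI."))
# -> Why do you say you are worried about generative AI?
```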

ELIZA garnered lots of interest due to its interactive nature, and it could be convincing, to a point. Chatbots today are more effective and have a better understanding of some of the material they’re trained with, but only within the limits of the model and training. The results from interacting with programs like ChatGPT range from insightful to silly.

Keep in mind that chatbot applications these days are doing quite a bit, from performing natural language processing of input, to analyzing a request, to finding and forming a response and presenting it in a natural language format. It would be trivial to make this work with audio inputs and outputs, which would wow some users. From a technical standpoint, though, it’s just a design exercise.
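
To make those stages concrete, here’s a toy sketch of the flow. The stage boundaries match the description above, but the one-line implementations are stand-ins; a real chatbot backs each stage with an LLM or dedicated NLP models.

```python
# Toy chatbot pipeline: each stage is a placeholder for what would be
# an LLM or a dedicated NLP model in a real system.

def parse_input(raw_text: str) -> list[str]:
    """Natural-language processing: normalize and tokenize the request."""
    return raw_text.lower().strip("?!. ").split()

def analyze_request(tokens: list[str]) -> str:
    """Request analysis: a keyword rule stands in for a trained classifier."""
    return "time_query" if "time" in tokens else "general_query"

def form_response(intent: str) -> str:
    """Find an answer and present it in natural-language form."""
    canned = {
        "time_query": "It's currently evening in Marrakesh.",
        "general_query": "Can you tell me more about what you're looking for?",
    }
    return canned[intent]

print(form_response(analyze_request(parse_input("What time is it in Marrakesh?"))))
```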

Jobs at Risk?

The more focused the application, training, training data, and models, the better the results. One worry shared by many is that a suitably trained model could replace them, whether they’re a programmer, a tech support specialist, or a bond trader.

It’s not just generative AI that’s at issue when considering whether to worry about AI taking over the world. Many “AI applications” are actually collections of different models, tools, and regular programming that perform a job. This might involve creating a new music score or software program, or handling an order for burgers and fries. The thing is, a person developed that application, using AI-related tools to create it or incorporating AI support to make it work.

Those creating the application tend to have an understanding of AI that ranges from minimal to expert level, but at the least, they understand the limits and capabilities of the system. Users and managers often have a very different view of these limits and capabilities and their potential impact.

I tend to be much more worried about the human side of the equation when it comes to utilization and deployment of AI tools and models, given the lack of understanding among those simply using a tool or application. It’s akin to the meme “It’s on the internet, so it must be true.”

Unfortunately, most AI is not analogous to a calculator that will always provide a correct answer to a basic problem like adding numbers. AI is usually grounded in statistics, and we know how well most people understand statistics.

Fact or Fiction?

The challenge is that many will wind up using these tools for less-than-noble reasons. The ability to generate a paper, image, or video from specified “facts” can produce results that are completely false or skewed far from the truth. ML models are often used for optimization, and optimizing for a lie works just as well as optimizing for the truth.

There seems to be a great deal of worry surrounding the misuse of AI technology, along with calls to somehow slow down research in this area. I contend that the problem lies with the people using the technology rather than with the technology itself.

Creating videos of events that didn’t occur is what most movies are all about. Writing fiction is the norm. Doing these things was possible in the past, but it was typically time-consuming and often required a level of expertise beyond most people. Generative AI tools have simply lowered the bar of entry while drastically improving the quality of the results.

Not only are these tools being made available on a regular basis, but they’re also showing up on platforms like smartphones. Editing someone out of a picture, or adding someone else in, is now a tap or two away.

Taking Advantage of Generative AI

I regularly receive press releases about new products touting the fact that they utilize ML tech, especially generative AI or chatbots. This isn’t surprising given the hype. However, it’s also about real capability, since these tools aren’t just smoke and mirrors. Generative AI can be very useful, augmenting how users work with an application or improving how the application itself works.

From a developer’s perspective, the how and why become important. For example, can an interactive chatbot improve areas of a particular application? General help support comes to mind, but a host of other features could readily emerge, from reducing the number of options presented to the user to finding possible solutions within a large list of possibilities (e.g., finding the right transistor or chip for a particular task).
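
As a sketch of that last idea, the snippet below shows how an application might hand a plain-language requirement and a parts list to an LLM. The ask_llm() stub and the part entries are hypothetical placeholders, not a specific vendor’s API or catalog.

```python
# Hypothetical sketch: narrowing a component list with an LLM.
# ask_llm() is a placeholder for whatever chat-completion API you use,
# and the parts below are illustrative entries, not a real catalog.

PARTS = [
    "2N7002   N-channel MOSFET, 60 V, SOT-23",
    "BSS138   N-channel MOSFET, 50 V, SOT-23",
    "IRLZ44N  N-channel MOSFET, 55 V, 47 A, TO-220",
]

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with your LLM provider's API call")

def find_candidates(requirement: str) -> str:
    """Build a prompt that asks the model to filter the list with reasons."""
    prompt = (
        "From the parts list below, return only the entries that meet the "
        "requirement, each with a one-line justification.\n\n"
        f"Requirement: {requirement}\n\nParts:\n" + "\n".join(PARTS)
    )
    return ask_llm(prompt)

# Example: find_candidates("switch a 2-A load from a 3.3-V logic pin")
```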

Keep in mind that one needn’t utilize generative AI tools or application support if they’re not warranted. However, taking a new look at your development process or application with a better understanding of what’s possible may be very worthwhile.

There are multiple areas in which a developer may find ML useful. These include application development, design, implementation, delivery, tech and user support, as well as long-term maintenance. Much of this tends to be application-specific, although development and design are often more standardized and utilize third-party tools.

Incorporating ML into an application or its support requires a much better understanding of the ML support that’s available. GPT may not be a good fit; other ML models and tools might be better options.

Other significant costs are involved as well, from startup to training developers to training the models that come into play. These can be justified if they improve the quality or functionality of the development process or application. If not, they may be a gold ring that’s not worth wearing.

Developers need to keep in mind not only the possibilities that tools like GPT can provide, but also the potential limitations. For example, what will the false-positive and false-negative rates be? Will they be significant? Could a customer be harmed by subpar results? Is there sufficient training data? Will training be an ongoing process? How will using ML and GPT affect support and security? How will the system be tested and verified?
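
The first two questions come down to simple arithmetic once test results are in hand. Here’s a quick sketch; the counts are made-up numbers for illustration.

```python
# False-positive/negative rates from a confusion matrix tallied on
# held-out test data. The counts are made up for illustration.
tp, fp, tn, fn = 180, 12, 790, 18

false_positive_rate = fp / (fp + tn)  # flagged when it shouldn't have been
false_negative_rate = fn / (fn + tp)  # missed when it should have flagged

print(f"FPR = {false_positive_rate:.1%}")  # FPR = 1.5%
print(f"FNR = {false_negative_rate:.1%}")  # FNR = 9.1%
```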

The question of testing and verification brings into play details like explainability: the ability of an ML model or system to describe its results, how it arrived at them, and how they’re justified.

P.S. I still haven’t figured out how to get a lion into the article so that I could use it in the title. Guess I should have asked a chatbot.

Read more articles in the TechXchange: Generating AI.

About the Author

William G. Wong | Senior Content Director - Electronic Design and Microwaves & RF

I am Editor of Electronic Design focusing on embedded, software, and systems. As Senior Content Director, I also manage Microwaves & RF and I work with a great team of editors to provide engineers, programmers, developers and technical managers with interesting and useful articles and videos on a regular basis. Check out our free newsletters to see the latest content.

You can send press releases for new products for possible coverage on the website. I’m also interested in receiving contributed articles for publishing on our website. Use our template and send it to me along with a signed release form.

Check out my blog, AltEmbedded, on Electronic Design, as well as my latest articles on this site that are listed below.


I earned a Bachelor of Electrical Engineering at the Georgia Institute of Technology and a Master’s in Computer Science from Rutgers University. I still do a bit of programming using everything from C and C++ to Rust and Ada/SPARK. I also do a bit of PHP programming for Drupal websites and have posted a few Drupal modules.

I still get hands-on with software and electronic hardware. Some of this can be found in our Kit Close-Up video series. You can also see me in many of our TechXchange Talk videos. I’m interested in a range of projects, from robotics to artificial intelligence.
