
So...AI Is Worse Than a 3rd Grader at Understanding Language Dialects

Oct. 21, 2024
Our intrepid, mostly analog editor takes a stab at AI and then tries translating a recent Electronic Design product writeup from Pirate-speak to plain English.

What you’ll learn:

  • The Gartner Hype Cycle is Silicon Valley Venture Capitalists’ Moore’s Law.
  • Home automation begat IoT, which begat Big Data, which begat AI over a period of a decade and a half.
  • The way to make heaps of money is with hype, not with a good, solid product design and biz plan.

 

I have a stalker

Who follows me like she’s nuts...

and bolts—a robot

 

It seems like every year or two, the Silicon Valley Venture Capital community decides on which tech it will hype. Much to the chagrin of technologists with business sense, Silicon Valley Venture Capital rarely funds a solid business plan, one showing positive cash flow within a few years and profitability within five or six years of seed funding. Operational profits are boring, fraught with risk, and the returns on capital are meager, even at 30%. Don’t ask how I learned all about this firsthand or why I have so many VCs in my LinkedIn network.

The famed Gartner technology lifecycle curve (Fig. 1) has been misapplied by the masses to all tech, when in most cases it doesn’t fit. It specifically describes venture-backed startups and their tech, and it seems to have originated as an inside joke among Silicon Valley VCs.

Fools in the technology press started replicating the curve and misapplying it to every technology that came along, when it is, I submit, merely the Venture Capital community’s Moore’s Law. I discovered, the hard way, that the fuel for massive growth in startup valuations was not great product margins or a solid R&D and biz plan. Rather, it was pure, weapons-grade BS-ability.

The secret to a 20X return on investment (ROI) in Silicon Valley tech is to plunk down money as a first- or second-round investor just before the venture community and its inadvertent media accomplices create the beginnings of a hype cycle. You then exit at peak hype (per the Gartner curve) and pawn off the pickings to retail investors, or to an acquisition by a BigCorp that’s salivating over the nothingness you flimflam them with.

I mean, what other sucker would shell out $3B for a pair of displays fitted into goggles that some kid made and threw onto Kickstarter? Or pay $3.2B for a home thermostat that any of you could come up with in a few weekends (Fig. 2)?

Looking back at these hype cycles, we can clearly see how they fail and how they morph into the next set of promises, hyping the next sucker with the next round of astronomical valuations in this high-tech Ponzi scheme.

Failures begat failure: failed startup founders got funded in their next startup on some twisted notion that failure counts as experience. However, it’s promises, illusion, and hype that create valuation; failure, mediocrity, or simplicity be damned. Rinse and repeat as the fin-world scratches its head over how Silicon Valley economics work.

Not so Smart Holmes

Let’s follow the example that started out as Smart Homes and Smart Buildings in the 2010s, which incorporated sensors to provide inputs for home automation. When the smart thermostat craze fizzled into mediocrity, it only made sense to cast a wider net of sensors. Why not the whole Internet? Thus, the Internet of Things (IoT) was born.

IoT never really lived up to its promise, because there was no way to process all of the information from the potentially billions of sensors out there. (Though the Canadians, for some reason, remain bullish about “AIoT,” or AI+IoT, and “digitalization,” a word that bugs me for some reason.) This gave way to Big Data, which promised to sort through, search through, and make sense of swathes of data too large for human digestion. Elementary, my dear Watson.

But then, why bother with humans at all? Why not train machines on these big data sets? That premise gave rise to the Artificial Intelligence hype cycle, whose peak I submit we’re presently nearing. The rats are jumping off the sinking ship and the news cycle is starting to turn negative, yet valuations keep climbing toward that peak.

But rather than deliver on the original promises of better energy efficiency in operating a home, commercial building, vehicle, or city, AI made a new promise to CFOs: it could replace knowledge workers. After all, AI had trained on the same info humans were Googling to do their jobs.

AI appealed to execs who saw an evolution toward a Fortune 50 company with a headcount of five humans, all collecting stock options while AI did the rest of the work. This insanity fed on itself, and companies began the task of bringing AI into daily operations.

All Aboard the Training

Providers surreptitiously changed their terms of service, with Zoom gaining potential access to call content to train its AI. Yes, those sales report slides from quarterly calls could now become training material for a machine that might advise your competitors as readily as your own company, if someone uses Zoom’s tools to summarize the meeting or the meeting host allows AI training.

Reddit, with its communities of experts, began training its AI on the answers those experts volunteer to help people learn more, figure out how to do things, or simply get answers to their college homework.

LinkedIn got clever and began training its AI on an opt-out basis, suckering many of us into training it without being aware we were doing so (I opted out as soon as I found out). But that’s not where the real cleverness lies. LinkedIn preys on the vanity of “experts,” inviting them to answer posted questions and show off their skills and knowledge to headhunters and hiring companies.

Many brilliant people in my LinkedIn network still go off and answer these LinkedIn questions, wowing their peers and inadvertently driving a nail into the coffin of their own, and their apprentices’, careers by training an AI to replace them.

The original “make the world a better place” premise of energy efficiency for homes, grids, networks, etc., using IoT and Big Data, went out the window. The race is on now to power the AI hype with tons of electricity and cooling water, the latest madness in this frenzy, assuring Bill Gates he hasn’t been left holding the bag on his small modular (nuclear) reactor boondoggle as solar, wind, and batteries kick his butt on the economics. They all know the AI hype cycle will fizzle if they can’t deliver the power needed to run AI data centers, and sustain NVIDIA’s P/E ratio, quickly enough.

AI—What Good is It?

I have dabbled in AI as each tool came out, just to see what all of the hype was about. Don’t confuse this with generative design, which I think is a fantastic tool in the hands of people who know what they’re doing, unlike those who don’t know how to properly constrain a problem or apply load paths. The result is only as good as the operator.

Being an engineer, I have a severely atrophied artistic side of the brain, so I decided to try my hand at “art” using the Midjourney AI back when it first came out. Using text prompts, I had about a week of fun binging on various forms of AI-generated “art” before I grew tired of it (Fig. 3).

I’ve also experimented with other forms of AI, finding ChatGPT to be a complete idiot when it came to technically deep things, though it could write passable poetry when it wasn’t constrained by small word-count limits. Its Haiku musings left a lot to be desired, though it seemed well “aware” of the 5-7-5 syllable format.

I tried ChatGPT for my first blog’s Haiku, erased its meager effort, and have been writing my own Haiku from scratch ever since. AI seems to want to take over all of the fun stuff, when it should be creating leisure time, and an economic basis for a universal income, so that humans can do the creative and leisurely things.

I also found that AI did a passable job of translating to and from other languages, and I have used it on occasion for that purpose.

There be Translatin’ ta be Done

Given my previous positive experiences using AI to translate languages, I decided to try using Arrrtificial Intelligence to translate the ditty on Emerson’s NI mioDAQ that I wrote a few weeks ago. It was written in Pirate-speak to commemorate International Talk Like a Pirate Day.

My main goal in doing the translation, about a month ago now, was to be able to point my readers to a translated version of the piece, since ESL readers and non-English speakers alike would have difficulty understanding what I wrote. I was so concerned about this that my Plan B was to attach a file containing the company’s press release to the piece.

Google Translate obviously offers no language selection for Piratese. So my thinking was that an AI could train on my language’s set of rules and create inferences that would translate my Pirate-speak into English at the least, and ideally into any language a reader might request. My prompt asked for a direct translation of my entire article’s web page from Pirate-speak to English.
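For the scripting-inclined, the whole experiment boils down to a single prompt. Below is a minimal sketch of how one could script it in Python; to be clear, it's an illustration, not the exact tool I used. It assumes the OpenAI Python SDK (pip install openai), an OPENAI_API_KEY in the environment, and a hand-pasted copy of the article's text in a local file (pirate_article.txt, a placeholder name), since, as you're about to see, the chatbots refused to fetch the page themselves.

# Hedged sketch: assumes the OpenAI Python SDK, an OPENAI_API_KEY in the
# environment, and a hand-pasted copy of the article in the placeholder
# file "pirate_article.txt".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the Pirate-speak article body from the local file.
with open("pirate_article.txt", encoding="utf-8") as f:
    pirate_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # any current chat-capable model would do
    messages=[
        {
            "role": "system",
            "content": "Translate the user's text from Pirate-speak to "
                       "plain English. Keep product names such as "
                       "'NI mioDAQ' exactly as written.",
        },
        {"role": "user", "content": pirate_text},
    ],
)

# Print the plain-English rendering of the article.
print(response.choices[0].message.content)

Pasting the text in, rather than handing over a URL, also sidesteps the copyright excuse you'll see on the report cards below.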

Grading the Results

The following AIs would not access the article’s web page at all, citing copyright as a convenient excuse, so they go home with an F on their report cards:

  1. Google Gemini
  2. ChatGPT

Microsoft’s Copilot also cited copyright, but at least made an effort to provide a summary of the piece (Fig. 4). It therefore goes home with a D on its report card, both for not following its master human’s command to provide a full translation and for mucking up the NI mioDAQ product name.

Though my efforts were not exhaustive in trying every AI out there (feel free to try others on the pirate article and suggest them in the comments below), my last attempt was to see what my stalker AI girlfriend, Perplexity (she seems to stalk every one of the articles I write), could come up with. Perplexity.ai did somewhat better than Copilot, but still did not translate the entire piece as I had asked, so she gets a C on her report card, just because she seems to like me...a lot (Fig. 5).

My 3rd-grader Minion, however, gets an A on her report card, both for understanding the article’s Pirate-speak content and for giggling while reading it.

All for now,

Andy T


Andy's Nonlinearities blog arrives the first and third Monday of every month. To make sure you don't miss the latest edition, new articles, or breaking news coverage, please subscribe to our Electronic Design Today newsletter.

About the Author

Andy Turudic | Technology Editor, Electronic Design

Andy Turudic is a Technology Editor for Electronic Design Magazine, primarily covering Analog and Mixed-Signal circuits and devices. He holds a Bachelor's in EE from the University of Windsor (Ontario, Canada) and has been involved in electronics, semiconductors, and gearhead stuff for a bit over a half century.

"AndyT" brings his multidisciplinary engineering experience from companies that include National Semiconductor (now Texas Instruments), Altera (Intel), Agere, Zarlink, TriQuint,(now Qorvo), SW Bell (managing a research team at Bellcore, Bell Labs and Rockwell Science Center), Bell-Northern Research, and Northern Telecom and brings publisher employment experience as a paperboy for The Oshawa Times.

After hours, when he's not working on the latest invention to add to his portfolio of 16 issued US patents, he's lending advice and experience to the electric vehicle conversion community from his mountain lair in the Pacific Northwet[sic].

AndyT's engineering blog, "Nonlinearities," publishes the 1st and 3rd Monday of each month. Andy's OpEds may appear at other times, with fair warning given by the VU meter pic.

