
Hallucinations, Blind Spots, and Unintended Behaviors: Why Today's AIs Still Aren't Very Intelligent

May 9, 2024
Dr. Aliana Maren, founder of the Themesis Institute, suggests that AI's recurrent issues stem from deep architectural flaws, not mere bugs. In this exclusive interview, she explains her solution.

What you’ll learn:

  • Why the so-called generative AI systems in use today fail miserably at so many seemingly simple tasks.
  • Why some of the problems experienced by the latest generation of AIs may be due to fundamental flaws in their architectures and shortcomings in the models they’re based on.
  • Recent research into the limitations of large language models and new techniques for tuning neural networks.


Google, OpenAI, and other tech giants continue to reassure us that the head-scratchingly weird instances where the technology completely fails to "understand" a simple, often trivial-seeming aspect of human cognition can be overcome by fixing a few "bugs" and improving the quality of the datasets used for training.

But what if those embarrassing digital gaffes aren't programming errors? What if they're actually signs of fundamental flaws in the architectures that the present generation of AIs is built on?

That's precisely the question Dr. Aliana Maren, Founder and Director of Themesis Inc., and I kicked around during her talk "How to Build an AGI In Your Spare Time, Using Tools and Parts from Around the House," given on March 16 at the 2024 Trenton Computer Festival (see video below).

What Are the Main Causes of AI's Erratic Behavior?

According to Maren, the large language models (LLMs) and other tools currently used to build AI applications have inherent structural problems. These flaws cause applications to "hallucinate," make them susceptible to "poor judgement," and, in Dr. Maren's opinion, limit their usefulness in applications that require a full-fledged Artificial General Intelligence (AGI).

During her talk, Dr. Maren explored a few of the most significant structural deficiencies in the form of a brief history of AI from the 1940s to the present day. In the process, she showed, among other things, how today's neural nets are being trained to focus too narrowly, ignoring important results generated from different parts of the network. In addition, Dr. Maren explained the 50-year disconnect between the symbolic knowledge graphs used to structure the information within an AI system and the neural nets that do the actual processing. 

A New Approach to AI Design: CORTECON

To close this gap, Maren developed a new fundamental AI component called a CORTECON (COntent-Retentive, TEmporally-CONnected network), which integrates three neurophysiology-based elements into an AGI architecture:

  • Advanced neural models 
  • Advanced free energy formulation (Kikuchi cluster variation method); see the sketch just after this list
  • Feedback control loops that can influence transitions between metastable states
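
To give a flavor of what a cluster variation method (CVM) free energy looks like, here's a minimal sketch at the pair (Bethe) level of approximation for a simple two-state system. The coordination number q, temperature T, and interaction energies eps below are illustrative assumptions; they are not taken from Maren's CORTECON formulation, which uses richer cluster terms.

```python
# Minimal sketch (not Maren's CORTECON code): pair-level Kikuchi cluster
# variation method (CVM) free energy for a two-state system, minimized by
# a brute-force scan. All parameter values are illustrative assumptions.
import numpy as np

q = 4          # assumed lattice coordination number
T = 1.0        # temperature, with k_B = 1
eps = {"AA": 0.0, "AB": 1.0, "BB": 0.0}   # assumed pair interaction energies

def free_energy(xA, yAA):
    """Pair-approximation CVM free energy per site.

    xA  = fraction of units in state A
    yAA = probability of an ordered pair (A, A)
    """
    yAB = xA - yAA                 # ordered-pair probability P(A,B) = P(B,A)
    yBB = 1.0 - 2.0 * xA + yAA
    xB = 1.0 - xA
    probs = [yAA, yAB, yAB, yBB]
    if min(probs) <= 0 or xA <= 0 or xB <= 0:
        return np.inf              # outside the physically allowed region
    # Energy: (q/2) bonds per site, each weighted by its pair probability
    energy = 0.5 * q * (yAA * eps["AA"] + 2 * yAB * eps["AB"] + yBB * eps["BB"])
    pair_entropy = sum(p * np.log(p) for p in probs)
    site_entropy = xA * np.log(xA) + xB * np.log(xB)
    # Pair-level CVM: -S = (q/2) * sum(y ln y) - (q - 1) * sum(x ln x)
    neg_entropy = 0.5 * q * pair_entropy - (q - 1) * site_entropy
    return energy + T * neg_entropy

# Scan the two independent configuration variables for the equilibrium point.
grid = np.linspace(0.01, 0.99, 99)
best = min((free_energy(x, y), x, y) for x in grid for y in grid)
print(f"minimum free energy {best[0]:.4f} at xA = {best[1]:.2f}, yAA = {best[2]:.2f}")
```

Higher CVM levels add larger clusters (triplets, squares) with their own entropy corrections; the pair level shown here is just the simplest case, included to make the free-energy idea concrete.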

Together, these capabilities make it possible for an AI to manifest a range of behaviors that are substantially more nuanced than those of today's systems. Although some of the concepts can be difficult for someone not deeply familiar with AI to grasp, Dr. Maren does a great job of using plain English to explain as much as possible.

So, whether you're an AI-savvy power user or a newcomer fascinated by the technology, it's a good bet that watching our talk will be informative and possibly even entertaining (if you don't mind a few corny jokes).

Check out more articles/videos in the TechXchange: Generating AI.
