Saturday, July 13, 2024

AI is not about Intelligence

I have been working on AI topics for several years now. As you all know, the current enthusiasm is both mind-blowing and terrifying. Laymen often tell me what they think AI is all about. In most cases, they assume AI algorithms, in particular LLMs, are smart in a human sense.

No, they are not smart like humans; their behavior only makes us think they are. I can understand why people are surprised whenever LLMs provide elaborate and sophisticated answers. In reality, all of their replies are based on statistics and giant sets of training data.

It is the same for artificial neural networks (ANNs). An ANN is trained with large datasets in a process called supervised learning. The outcome of each inference is a probability distribution. If you teach a convolutional neural network (CNN) what a cat or a dog looks like, it will find commonalities, i.e., patterns, for each class (such as dog or cat). Given a picture it has not seen before, it merely estimates how closely the subject in the picture resembles a cat, a dog, or anything else. When you feed it a picture of a cat, it will only be able to respond that this could be a cat with a probability of, say, 91.65%.
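
To illustrate the point, here is a minimal sketch in plain Python/NumPy. The raw scores (logits) are made up and no real network is involved; it only shows how a classifier's output becomes a probability estimate rather than a definite answer:

```python
import numpy as np

# Hypothetical raw scores (logits) a trained image classifier might
# produce for one input picture -- these numbers are invented.
classes = ["cat", "dog", "other"]
logits = np.array([4.2, 1.7, 0.3])

# Softmax turns the raw scores into a probability distribution.
probabilities = np.exp(logits) / np.sum(np.exp(logits))

for name, p in zip(classes, probabilities):
    print(f"{name}: {p:.2%}")

# The classifier never says "this is a cat"; it only reports
# something like "cat with ~91% probability".
```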

The same holds for the transformer models (encoders, decoders) in LLMs. They are trained on a huge number of documents to create embeddings, which are just vectors that describe in which contexts a specific fragment is typically used. To create an answer, an LLM implementation needs to understand the meaning of the prompt and then builds a reply, word by word (more precisely, token by token), where each succeeding word is determined by a probability distribution. The actual process is much more complex and sophisticated, but the principle remains the same.
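
A toy sketch of that last step, with an invented vocabulary and hand-made probabilities (again, no real model involved), looks like this:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy vocabulary and a hand-made probability distribution that a language
# model might assign to the next word after "The cat sat on the".
# All numbers are invented for illustration.
vocabulary  = ["mat", "roof", "keyboard", "moon"]
next_word_p = [0.62,  0.21,   0.12,       0.05]

# Generation simply samples the next word from this distribution, then
# repeats the process with the extended text as the new context.
next_word = rng.choice(vocabulary, p=next_word_p)
print("The cat sat on the", next_word)
```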

What is missing for AI to be called smart in a human sense?

  • Lack of proactivity: AI algorithms only react to input. They are not capable of proactive behavior.
  • No consciousness: They have no consciousness and cannot reflect on themselves. 
  • Lack of free will: This is a consequence of AI lacking proactivity and consciousness. An AI provides answers but makes no decisions.
  • No emotions: AIs can recognize the emotions of humans, for example, by performing sentiment analysis or by observing gestures. However, they cannot experience emotions of their own, such as empathy.
  • Learning from failure: AI is not able to learn from its own errors. And obviously, there is no way to interactively teach an AI about its mistakes so that it can dynamically adapt. Errors or biases can only be eliminated by changing the training data or the algorithms, which, at the end of the day, results in a new AI.
  • Constraints: An AI is constrained by the input it receives. It is not able to observe its environment outside of its cage.
  • Fear of death: An AI does not care whether it lives or not. This might sound rather philosophical, but it is a valid aspect, given the way intelligent life behaves.

Unfortunately, the Turing test is not able to decide whether an AI is intelligent. It can only figure out whether an AI seems to be intelligent.

What do you think? What could an appropriate test look like?