Do LLMs Understand Our Languages?

Last week I was walking home with a friend, and he asked me a question: "Should we give LLMs human rights?" Without a second thought, I answered no. At the time, I believed that LLMs were just statistical models with no actual knowledge of human languages and no understanding of context. However, as the discussion went deeper, it became difficult for us to distinguish the way humans learn languages from the way LLMs do. Surely we don't need an entire internet's worth of documents to grasp a language, but at its core our approach is also statistical and feedback-driven: we observe how other people engage in conversations, and we are corrected when we say something inappropriate or incorrect. So, what's the difference?

You may have already spotted the crux of the matter, but at the time I was puzzled and my thoughts were disorganized. After I got home, I took some time to ponder the question. In this article, I will share my findings and opinions on whether LLMs understand human languages, and discuss some interesting properties of LLMs that bear on that understanding.

LLMs don’t have senses

Our languages express our feelings, and they are grounded in the interactions we have with the world as we perceive it. LLMs have none of that sensory grounding.

  • The model’s understanding of concepts is based solely on the text it has been exposed to, with no instinctive or acquired preferences, bodily sensations, or autobiographical recollections to draw upon. It can never understand words such as “warm” and “cold” the way we do. To help visualize this, imagine a person with an unusual combination of disabilities and extraordinary abilities: incredibly well-read, yet completely deaf and blind, unable to sense touch, taste, or smell, entirely dissociated from their body, and incapable of experiencing visceral responses. They would also suffer from total amnesia, living in what has been poetically called a “permanent present tense.”

  • In this state, like an LLM, the individual would never have set foot on an island, but could still engage in meaningful conversations based on the vast knowledge they’ve accumulated from texts. To keep the dialogues sensible and specific, they would have to invent plausible answers to questions such as “What is your favorite island in the world?” and maintain a consistent narrative by tracking any previously generated fabrications.
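To connect the thought experiment to how chat systems actually behave: an LLM itself is stateless, so “tracking previously generated fabrications” happens by replaying the whole transcript, including the model’s own earlier answers, as context for every new reply. Below is a minimal sketch in Python; `fake_model` is a made-up stand-in for a real model, not any actual API.

```python
# A toy sketch (not a real chat API): the model is stateless, so apparent
# memory comes from feeding the full transcript back in on every turn.

def fake_model(prompt: str) -> str:
    # Hypothetical stand-in for an LLM: it stays consistent only because
    # its earlier fabricated answer is visible in the prompt it receives.
    if "favorite island" in prompt and "Santorini" in prompt:
        return "As I said, Santorini -- I love its volcanic cliffs."
    if "favorite island" in prompt:
        return "Santorini."
    return "I'm not sure."

history: list[tuple[str, str]] = []

def reply(user_message: str) -> str:
    # Append the user turn, replay the whole transcript, record the answer.
    history.append(("user", user_message))
    transcript = "\n".join(f"{role}: {text}" for role, text in history)
    answer = fake_model(transcript)
    history.append(("assistant", answer))
    return answer

print(reply("What is your favorite island in the world?"))  # invents "Santorini."
print(reply("Remind me, what was your favorite island?"))   # stays consistent
```

Nothing about the model changes between the two calls; the consistency lives entirely in the transcript that gets replayed.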

Consistency

LLMs are trained to predict the next token in a sequence, and at generation time they sample from a probability distribution over possible tokens, so their responses to the same question can vary from run to run, as the short sketch at the end of this section illustrates. This intriguing property of language models underscores how significant consistency is in communication.

  • Consistent words and actions construct a shared reality, form the basis of trust, and are generally required of any agent whose actions have real-world consequences. For LLMs, consistency is not only crucial to providing accurate and reliable information; it is also a prerequisite for letting them safely interact with other parties in one’s social environment, beyond the confines of a private, one-off chat. Balancing probabilistic generation against the need for consistency is an ongoing challenge as AI systems become more deeply integrated into our lives.
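To make the variability concrete, here is a minimal sketch of temperature-based sampling, a common decoding scheme behind this behavior. The vocabulary and logits are invented for illustration; only the softmax-and-sample logic reflects how real decoders work.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw model scores (logits).

    Higher temperature flattens the distribution (more varied output);
    temperature near zero approaches greedy decoding (always the top token).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy next-token distribution for a prompt like "The weather is ..."
vocab = ["warm", "cold", "nice", "rainy"]
logits = [2.0, 1.5, 1.0, 0.5]

# Repeated samples from the same distribution can differ --
# which is exactly why the same question can yield different answers.
for _ in range(3):
    print(vocab[sample_next_token(logits, temperature=1.0)])
```

Lowering the temperature (or decoding greedily) makes outputs more repeatable, but at the cost of diversity; that trade-off is one face of the balance described above.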
