LLM Intentionality: A Multi-Layered Perspective
Research · #LLMs · Community
Analyzed: Jan 26, 2026 11:30
Published: Sep 11, 2023 16:59
1 min read · Hacker News Analysis
This article explores whether Large Language Models (LLMs) can possess intentionality at multiple levels, challenging the notion that their behavior is fully explained by the "chat game" they are trained to play. The author argues that even if the underlying LLM is merely predicting the next token, it may be summoning "simulacra" with their own distinct intentions, which supervene on the LLM much as the understanding in Searle's Chinese Room supervenes on Searle himself.
Key Takeaways
- LLMs might exhibit intentionality beyond the superficial "chat game" through the actions of summoned "simulacra."
- The article adopts an interpretivist view of intentions: an agent has an intention to the extent that ascribing it predicts and explains the agent's behavior well.
- Arguments against LLM understanding, such as the lack of direct sensory input, are countered by drawing parallels to how the human brain likewise processes only mediated signals.
Reference / Citation
"I think the mistake Keith is making is to restrict his analysis to the level of the LLM itself and to fail to consider that there may be a distinct agent supervening on top of it in much the same way as The Chinese Room supervenes on top of Searle."