I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.
Any good examples on how to explain this in simple terms?
Edit: some good answers already! I find that the emotional barrier especially is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?
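One way I've found to demystify it: show people a toy next-word predictor. This is a sketch of a bigram Markov chain, which is *not* how ChatGPT works internally (real LLMs use neural networks over vastly more data), but it illustrates the same principle: the program emits fluent-looking word sequences purely from statistics about which word tends to follow which, with no meaning or intent anywhere in the system. The corpus and names here are made up for illustration.

```python
import random
from collections import defaultdict

# A made-up toy corpus; the "model" is just counts of word successions.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat saw the dog . the dog saw the cat ."
).split()

# Record every word that follows each word in the corpus.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n, seed=0):
    """Emit n words by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        out.append(rng.choice(follows[out[-1]]))
    return " ".join(out)

print(generate("the", 8))
```

The output reads like grammatical English about cats and dogs, yet there is plainly no “knowing” here, only successor counts. The point for a lay audience: an LLM is this idea scaled up enormously, so the fluency comes from patterns in training text, not from intent.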
There’s the Chinese Room argument, which is a bit related:
https://en.wikipedia.org/wiki/Chinese_room
I always thought the Chinese Room argument was kinda silly. It’s predicated on the idea that humans have some unique capacity to understand the world that can’t be replicated by a syntactic system, but there is no attempt made to actually define this capacity.
The whole argument depends on our intuition that we think and know things in a way inanimate objects don’t. In other words, it begs the question: it draws the conclusion that computers can’t think from the premise that computers can’t think.
This is what I was going to point to. When I was in grad school, it was often referred to as the Symbol Grounding Problem. Basically it’s an interdisciplinary research problem involving pragmatics, embodied cognition, and a bunch of other fields. The LLM people are now crashing into this research problem, and it’s interesting to see how they react.