No.
In an LLM, words are correlated with other words, but nothing else. For example, the word "apple" is not correlated with actual apples in an LLM, because that would require a model of reality, and LLMs have no concept of reality. We know the meaning of the word "apple" not because we know it correlates with the word "pie" but because we know it correlates with actual apples. We have a model of reality in our minds, and meaning comes from the correlation of words to that reality, not merely to other words.
You don't have an apple in your head. You only have concepts, like the concept of an apple. Yes, you also know how an apple looks, tastes, smells and feels. But that's just more relations between concepts; it's nothing fundamentally different.
A model of reality is the whole set of concepts linked by relations. So of course an LLM does have a model of reality.
Think about concepts you haven't seen, heard, or touched, like imaginary numbers. There are no "actual" imaginary numbers. Yet people do understand them, pretty much the same way they understand apples. And so do LLMs. So that's not really the difference.
LLMs do have real limits: a limited scope of attention, no sense of how well they know something, and so far only a vague concept of humor, which they can't tell apart from facts (but then, in many cases, people can't either).
There is also one thing I noticed recently, but haven't heard anyone talking about anywhere. I call it "semantic resolution". LLMs often can't tell two concepts apart if they are similar. Like dog and cat: animals, pets, similar relations. ChatGPT 4 is large enough and has seen enough about cats and dogs that it won't make this particular mistake.
But ask about that one movie with Samuel L. Jackson where Neo fights the robots, and it will tell you it's The Matrix and list other movies where Laurence Fishburne acted. It simply has trouble telling apart actors from 2000s movies. And once you spot it, it's everywhere.
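To make the "semantic resolution" idea concrete, here is a toy sketch. The vectors are made up purely for illustration (they are not taken from any real model); the point is just that two concepts sharing most of their relations end up as nearby directions in latent space, which cosine similarity makes visible:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up 8-dimensional "concept" vectors, for illustration only.
# "dog" and "cat" share most features (animal, pet, furry, four legs...),
# so their vectors point in almost the same direction.
dog   = np.array([0.9, 0.8, 0.7, 0.9, 0.1, 0.2, 0.6, 0.1])
cat   = np.array([0.9, 0.8, 0.8, 0.9, 0.1, 0.1, 0.5, 0.2])
apple = np.array([0.1, 0.0, 0.1, 0.0, 0.9, 0.8, 0.1, 0.7])

print(cosine(dog, cat))    # close to 1.0 -> easy to confuse
print(cosine(dog, apple))  # much lower   -> easy to keep apart
```

A real latent space has hundreds of dimensions instead of eight, but the geometry of the confusion is the same: the closer two concepts sit, the less "resolution" the model has to separate them.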
The latent space usually has several hundred dimensions, typically stored as 32-bit floats, sometimes as little as a single byte each for faster evaluation. That's a lot. But the number of human language concepts is also a lot.
There's a limit to how little two concepts can differ. Also, a concept may simply not be placed in the latent space very precisely if there are only a few examples in the training data. And then confusion happens. And in typical LLM style, the confusion goes unnoticed: the model will insist on it, spin further responses off the mistake, and end up with complete nonsense.
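A rough sketch of the precision point, again with made-up numbers: two concept vectors that are clearly distinct in 32-bit floats can become almost indistinguishable once each coordinate is squeezed into a single byte, because their difference falls below one quantization step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two made-up 512-dimensional "concept" vectors that differ only slightly.
a = rng.normal(size=512).astype(np.float32)
b = a + rng.normal(scale=1e-3, size=512).astype(np.float32)

def quantize_to_int8(v):
    # Naive per-vector linear quantization: 256 levels, one byte per dimension.
    scale = np.abs(v).max() / 127.0
    return np.round(v / scale).astype(np.int8)

diff_f32  = np.count_nonzero(a != b)
diff_int8 = np.count_nonzero(quantize_to_int8(a) != quantize_to_int8(b))

# In float32 every coordinate differs; after quantization to one byte each,
# most coordinates collapse onto the same value.
print(diff_f32, diff_int8)
```

The exact numbers don't matter; the point is that below some distance two concepts occupy essentially the same point, and the model can no longer tell them apart.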