
Merged Artificial Intelligence

For example, ask it if it takes one hour to dry one towel on a clothes line, how long does it take to dry three hours on a clothes line, it may answer three hours.

Three towels not three hours, right?

Of course, there are certain assumptions involved, like that the towels are equally wet.

I did test it worded that way just to see:

You: if it takes one hour to dry one towel on a clothes line, how long does it take to dry three towels on a clothes line?

Copilot: If it takes one hour to dry one towel, it will take one hour to dry three towels as well. The drying time remains the same for each towel. 🌞👕👕👕

I know what you mean though. It will give incorrect answers to some simple arithmetic problems where a simple calculator would give the correct answer every time, although its answer is usually pretty close to the right one. It isn't designed for doing arithmetic.
 
Three towels not three hours, right?

Of course, there are certain assumptions involved, like that the towels are equally wet.

I think the big assumption is about the size of the clothes line. If it takes three hours to dry a towel on a clothes line that's large enough to hang 10 towels then it will take three hours to dry three towels because the clothes line has excess capacity and they can be dried in parallel. But it won't take three hours to dry 20 towels, presumably it will take six hours.
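That batching logic is easy to make explicit. Here's a minimal sketch in Python using the numbers from the example above (a line that holds 10 towels, three hours per batch); the function name and structure are purely illustrative:

```python
import math

def drying_time(num_towels: int, line_capacity: int, hours_per_batch: int) -> int:
    """Towels dry in parallel up to the line's capacity; any beyond that wait for the next batch."""
    batches = math.ceil(num_towels / line_capacity)
    return batches * hours_per_batch

# Numbers from the example above: a line that holds 10 towels, 3 hours per batch.
print(drying_time(3, 10, 3))   # 3 -> one batch, all towels dry in parallel
print(drying_time(20, 10, 3))  # 6 -> two batches, so six hours
```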
 
I think the big assumption is about the size of the clothes line.

Humans make assumptions to answer the question. ChatGPT does not. Even when it states assumptions in its answer, ChatGPT isn't actually making any assumptions. It doesn't know what an assumption is, or what any of the assumptions one might make actually mean. It knows that the word "assumption" is correlated with other words people give as answers to such problems, but that's it. It doesn't go any deeper than that even when it looks like it does.
 
Humans make assumptions to answer the question. ChatGPT does not. Even when it states assumptions in its answer, ChatGPT isn't actually making any assumptions. It doesn't know what an assumption is, or what any of the assumptions one might make actually mean. It knows that the word "assumption" is correlated with other words people give as answers to such problems, but that's it. It doesn't go any deeper than that even when it looks like it does.

Knowing what the word is correlated with means it knows what it means .. that's the meaning of "meaning".
 
Maybe not so much an assumption but rather a conclusion.

If the question is "how long to dry X number of towels on a clothes line", the conclusion (assumption? implication?) would be that the towels would fit on the clothesline.
 
Knowing what the word is correlated with means it knows what it means .. that's the meaning of "meaning".

No.

In an LLM, words are correlated with other words, but nothing else. For example, the word "apple" is not correlated with actual apples in an LLM, because that would require a model of reality, and LLMs have no concept of reality. We know the meaning of the word "apple" not because we know it correlates with the word "pie" but because we know it correlates with actual apples. We have a model of reality in our minds, and meaning comes from the correlation of words to that reality, not merely to other words.
 
No.

In an LLM, words are correlated with other words, but nothing else. For example, the word "apple" is not correlated with actual apples in an LLM, because that would require a model of reality, and LLMs have no concept of reality. We know the meaning of the word "apple" not because we know it correlates with the word "pie" but because we know it correlates with actual apples. We have a model of reality in our minds, and meaning comes from the correlation of words to that reality, not merely to other words.

Your breakthroughs in neurology are astonishing, one will look forward to reading your groundbreaking research into human cognition!
 
Your breakthroughs in neurology are astonishing, one will look forward to reading your groundbreaking research into human cognition!

Nothing I said constitutes a breakthrough of any kind, and nothing I said about humans requires any particular insight into neurology or anything else. All of it is obvious to anyone who spends any time at all thinking about their own thoughts. The only thing which may not be obvious to an intelligent reader is a bit of basic knowledge about how LLMs work, since not everyone is familiar with how they work.

If you think I'm wrong, offer something more substantive than snark as a counter-argument.
 
No.

In an LLM, words are correlated with other words, but nothing else. For example, the word "apple" is not correlated with actual apples in an LLM, because that would require a model of reality, and LLMs have no concept of reality. We know the meaning of the word "apple" not because we know it correlates with the word "pie" but because we know it correlates with actual apples. We have a model of reality in our minds, and meaning comes from the correlation of words to that reality, not merely to other words.

You don't have an apple in your head. You only have concepts. A concept of an apple. Yes, you also know how an apple looks, tastes, smells and feels. But that's just more relations between concepts; it's nothing fundamentally different.
A model of reality is the whole set of concepts linked by relations. So of course an LLM does have a model of reality.
Think about concepts you haven't seen, haven't heard, haven't touched .. like imaginary numbers. There are no "actual" imaginary numbers. Yet people do understand them, pretty much the same as apples. And so do LLMs. That's not really the difference.
LLMs have a limited scope of attention, they can't tell how well they know something, and they so far have a very vague concept of humor and can't tell it from facts (but then, in many cases, people can't either).

There is also one thing I noticed recently, but haven't heard talked about anywhere. I call it "semantic resolution". LLMs often can't tell two concepts apart if they are similar. Like dog and cat. Animals, pets, similar relations. ChatGPT 4 is well sized and well experienced with cats and dogs, so it won't make that particular mistake.
But ask about that one movie with Samuel L. Jackson where Neo fights the robots .. and it will tell you it's The Matrix, and list other movies where Laurence Fishburne acted. It simply has trouble telling actors from 2000s movies apart. And once you spot it, it's everywhere.
The latent space usually has several hundred dimensions, typically using 32-bit float numbers, sometimes as little as single bytes for faster evaluation. That's a lot. But the number of human-language concepts is also a lot.
There's a limit to how little two concepts can differ. Concepts can also end up placed imprecisely in the latent space if there are only a few examples in the training data. And then confusion happens. And in typical LLM style, the confusion happens unnoticed. It will insist on it, spin other responses off the mistake, and end up with complete nonsense.
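To make the "semantic resolution" idea concrete: in embedding terms, two concepts are hard to tell apart when their vectors point in nearly the same direction. Here's a toy illustration with made-up four-dimensional vectors (real latent spaces have hundreds of dimensions; the numbers below are purely illustrative):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors: 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up 4-dimensional embeddings; real latent spaces have hundreds of dimensions.
dog    = np.array([0.81, 0.10, 0.42, 0.33])
cat    = np.array([0.80, 0.12, 0.40, 0.35])   # deliberately close to "dog"
teacup = np.array([0.05, 0.90, 0.10, 0.02])

print(cosine_similarity(dog, cat))     # ~0.999: nearly indistinguishable points
print(cosine_similarity(dog, teacup))  # ~0.2: easy to tell apart
```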
 
You don't have an apple in your head. You only have concepts. A concept of an apple. Yes, you also know how an apple looks, tastes, smells and feels. But that's just more relations between concepts; it's nothing fundamentally different.

Yes, actually it is fundamentally different, because sensory input isn't just concepts; it connects to reality (yes, even when our senses deceive us). We are not brains in jars, we are physical beings. Our ideas and concepts tie to our physical reality on a pretty basic level, and our brains have evolved to deal first and foremost with our physical existence. Even abstracted ideas like imaginary numbers still connect back to our sensory experiences, even if we don't think about those connections very much, and even if those connections get more convoluted as the ideas get more abstracted. But they aren't even very convoluted in the case of imaginary numbers.
 
Humans make assumptions to answer the question. ChatGPT does not. Even when it states assumptions in its answer, ChatGPT isn't actually making any assumptions. It doesn't know what an assumption is, or what any of the assumptions one might make actually mean. It knows that the word "assumption" is correlated with other words people give as answers to such problems, but that's it. It doesn't go any deeper than that even when it looks like it does.

No.

In an LLM, words are correlated with other words, but nothing else. For example, the word "apple" is not correlated with actual apples in an LLM, because that would require a model of reality, and LLMs have no concept of reality. We know the meaning of the word "apple" not because we know it correlates with the word "pie" but because we know it correlates with actual apples. We have a model of reality in our minds, and meaning comes from the correlation of words to that reality, not merely to other words.

Nothing I said constitutes a breakthrough of any kind, and nothing I said about humans requires any particular insight into neurology or anything else. All of it is obvious to anyone who spends any time at all thinking about their own thoughts. The only thing which may not be obvious to an intelligent reader is a bit of basic knowledge about how LLMs work, since not everyone is familiar with how they work.

If you think I'm wrong, offer something more substantive than snark as a counter-argument.

Credit where credit is due. We rarely agree, but this is all spot-on. Like, totally.
 
I mean, if Dr.Sid wants to believe he's a mindless large language model, hallucinating his way through life via rote autocomplete pattern-matching, without any thought or awareness behind it, who are we to gainsay him? Who are we to tell a p-zombie that they're not a p-zombie?
 
What we've actually proven is that intelligence is easier to fake than we thought. And LLMs only started to look like intelligence once they became good enough at faking it. Before that point no one even entertained the idea, because they were absolutely horrible at the faking part.
 
I mean, if Dr.Sid wants to believe he's a mindless large language model, hallucinating his way through life via rote autocomplete pattern-matching, without any thought or awareness behind it, who are we to gainsay him? Who are we to tell a p-zombie that they're not a p-zombie?

Hey, I just want to stay on good terms with AIs .. :D

But indeed LLMs work vastly differently from brains. LLMs (most architectures) have a fixed structure; they have linear and non-linear parts, but they are still just large, complex continuous functions, which is important for training. Concepts are points in a multi-dimensional space, and relations are weights in the attention layer (a rough sketch of that attention step follows below).

Brain structure is fluid. It changes with training. Connections are created or removed. Not to mention that training and decision making aren't separated. AFAIK we don't have the slightest idea how concepts or relations are encoded.

Yet IMHO the semantic logic of understanding is the same. People think too much of themselves.
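For what it's worth, here is a minimal sketch of the scaled dot-product attention step that most transformer LLMs use, with random toy matrices standing in for learned weights (real models stack many such layers). It's meant only to illustrate the "just a large continuous function" point, not any particular model:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """One attention step: each token mixes in the value vectors of the tokens it attends to.
    It's all matrix multiplies plus a softmax, i.e. one big differentiable function."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax: the attention weights
    return weights @ V                                   # weighted mix of value vectors

# Toy example: 3 tokens, 4-dimensional embeddings, random "learned" projections.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))                         # points in a tiny latent space
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
out = scaled_dot_product_attention(tokens @ Wq, tokens @ Wk, tokens @ Wv)
print(out.shape)  # (3, 4): same shape in, same shape out, fixed structure throughout
```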
 
Also an LLM doesn't have any inner life or thought process outside of the process it goes through when responding to prompts. Therefore, inasmuch as it does think, or performs a process that mimics thinking, it only thinks about things that it has been prompted to think about. It doesn't contemplate anything that it isn't asked to contemplate, and once it has output its response, it isn't thinking about anything until it receives the next prompt.

(By the way, this is a key difference between LLM chatbots and the AI in the movie Her. The latter definitely spent time thinking about things on its own initiative, without being asked to by a human.)
 
Also an LLM doesn't have any inner life or thought process outside of the process it goes through when responding to prompts. Therefore, inasmuch as it does think, or performs a process that mimics thinking, it only thinks about things that it has been prompted to think about. It doesn't contemplate anything that it isn't asked to contemplate, and once it has output its response, it isn't thinking about anything until it receives the next prompt.

(By the way, this is a key difference between LLM chatbots and the AI in the movie Her. The latter definitely spent time thinking about things on its own initiative, without being asked to by a human.)

Not entirely. An LLM does take its previous responses into account. Also, the response only stops when a stop token is emitted, which depends on model configuration and training.
Modern models have context lengths of hundreds of thousands of tokens. And they can be configured to give really long responses. And a long monologue is very similar to thinking. The only difference is you can see what it thinks.
Free services have a hard cap on response length, as they have to service a lot of requests, but with paid services or offline models the limits are more lenient.
You can, for example, prompt it to contemplate god, and with the maximum response length it might spew a few pages at once. It will usually stop anyway, but then you can just say "tell me more", and it will continue to expand on its previous response.
It can also follow several lines of reasoning at once; it can, for example, generate a dialogue or argument between two fictional characters. It can also distinguish between what a fictional character thinks, says, and does. But that's another level, a simulation inside a simulation.
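A rough sketch of the loop being described, with a toy random "model" standing in for a real one (the function names and the 512-token cap are illustrative assumptions, not any real API):

```python
import random

STOP_TOKEN = "<|endoftext|>"
MAX_RESPONSE_TOKENS = 512  # free services cap this hard; paid or offline setups can go much higher

def generate_next_token(context: list[str]) -> str:
    """Toy stand-in for a real model call: picks a word, occasionally 'decides' to stop."""
    vocabulary = ["the", "universe", "is", "vast", "and", "mysterious", STOP_TOKEN]
    return random.choice(vocabulary)

def generate_response(context: list[str]) -> list[str]:
    """Sample tokens until the model emits its stop token or the length cap is hit."""
    response = []
    for _ in range(MAX_RESPONSE_TOKENS):
        token = generate_next_token(context + response)
        if token == STOP_TOKEN:
            break  # the model itself decided it was done
        response.append(token)
    return response

# The model only "thinks" while this loop runs. Saying "tell me more" just feeds
# its own previous output back in as part of the context and runs the loop again.
context = "User: contemplate the nature of god".split()
first = generate_response(context)
context += first + "User: tell me more".split()
more = generate_response(context)
print(" ".join(first))
print(" ".join(more))
```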
 
I don't know if this counts as AI or just another crazy deepfake tool, but some scary stuff is possible:



You can change your face, change your voice. I am aware that some of this stuff has been available for a few years now. It's just that the level of verisimilitude keeps improving.
 
