Merged Artificial Intelligence

It's not "if it looks the same, it's the same".

It's: "can you Instrument tell the two apart?"

If we can't articulate why we think a parrot is intelligent in a way that a LLM is not, then the problem is not the LLM, it is our concept of intelligence and the way we meassure it.
 
It's not "if it looks the same, it's the same".

It's: "can you Instrument tell the two apart?"

If we can't articulate why we think a parrot is intelligent in a way that a LLM is not, then the problem is not the LLM, it is our concept of intelligence and the way we meassure it.

I don't care much about our concept of intelligence. I don't need to claim an LLM (or a parrot) is intelligent or not. I like to look for differences and similarities, and you can experiment a lot more with LLMs than with parrots.

We certainly do project many human qualities onto LLMs that are not there. And it will only get worse once they are more personified, like when they use voice for both input and output. Voice goes a long way.
 
How do you know they're not? :p
I think a lot of what we assumed was required for sentience and intelligence is being shown not to be required. That doesn't mean we do things the same way, but it should raise doubts for those people so self-assured ;) that we aren't p-zombies.
 
It's not "if it looks the same, it's the same".

It's: "can you Instrument tell the two apart?"

If we can't articulate why we think a parrot is intelligent in a way that a LLM is not, then the problem is not the LLM, it is our concept of intelligence and the way we meassure it.
Yep, we should be reevaluating what we consider sentience and how ours might arise.
 
No .. I don't claim humans are p-zombies.
With the arguments you're making about consciousness, you should be claiming that.

I claim a lot of human consciousness is just information processing. Not all, but a lot. And I'm not saying LLMs are the same, I'm saying there are similarities. Often surprisingly deep ones.
I think you're imagining deep similarities where none exist.

As for the reaction to previous stimuli, I meant its current function is affected by what happened in the past. I can ask what is the capital of England, and get London. I can then talk about cats, make it angry .. and maybe not get the answer next time.
This must be a language barrier thing. Nothing I know about computers, LLMs, or emotions leads me to believe you can make an LLM "angry" by changing the subject, or goad it into choosing nonresponsiveness in a fit of pique.

It's true that an LLM doesn't have any underlying happy/unhappy mechanism. But the state of the prompt can be happy or unhappy. Yes, it is just words which we humans, knowing the language, interpret as happy or unhappy .. but the representation is there, the LLM is affected by it, it can express how it is affected. IMHO that's fascinating.
I don't think this is true at all.
 
But again, you don't seem to be willing to accept the definitional premise that acting annoyed isn't the same as being genuinely annoyed. This is the fake-it-and-thereby-make-it attitude of AI proponents that I mentioned earlier. You might argue that if what the program outputs is the only possible way to know what's going on "inside" it, then there's no difference between acting and being, and then decide it's valid to assume being by default.

But that argument is separated from reality. As a human, I know that acting without being is possible, because I can do it. I can speak and physically act as if I am in severe pain, without actually being in any pain at all. The difference is that when someone (whose input I care about) tells me to stop acting like I'm in pain, I can do so immediately. Snap of a finger, and suddenly my speech and behavior changes completely. That's not something I could do if I were actually in severe pain, even if I tried very hard.

But since chatbots can't be in pain (or annoyed, or what have you), they're invariably acting. You can "train" a chatbot to speak as if it's annoyed, but the instant someone with the authority tells the chatbot "from now on you will not act as if you're annoyed anymore", it will obey - it cannot do otherwise. And that goes for any emotion that you prompt the chatbot to mimic.

How is this different from forms of psychopathy? There is a group of people amongst us who maintain apparently intimate and intense relationships with other people, yet we claim they aren't experiencing those emotions, that they have learned to mimic the behaviour associated with those emotions. Isn't their mimicry the same - albeit across more than just text?

Again, because people will keep missing this - I'm not saying we learn the same way or compute answers the same way as LLMs do, but I am pointing out that some of the apparent deficiencies cited against LLM sentience can also be applied to fellow humans. Are those people less sentient?

If you ask me to describe an apple on a tree, I can describe a juicy, succulent red apple suspended by its stalk from a brown branch with a rough, dusty texture; the apple is crimson to the deepest red, and on its skin are glistening dew drops sparkling in the early morning sunlight that streaks through the gaps made by branches and leaves. How is that different from asking an LLM?

(I asked copilot - its description is much better than mine: "Sure! Imagine a vibrant apple tree standing tall in an orchard. Among its lush green leaves, you spot a beautiful apple. The apple is round and plump, with a smooth, shiny skin that glistens in the sunlight. Its color is a rich, deep red, with hints of green near the stem. The apple hangs from a sturdy branch, swaying gently in the breeze. The tree itself is full of life, with branches spreading out wide, providing a perfect canopy of shade. The scene is serene and picturesque, capturing the essence of nature's bounty." It certainly Englishes better than me.... ;) )
 
If you ask me to describe an apple on a tree, I can describe a juicy, succulent red apple suspended by its stalk from a brown branch with a rough, dusty texture; the apple is crimson to the deepest red, and on its skin are glistening dew drops sparkling in the early morning sunlight that streaks through the gaps made by branches and leaves. How is that different from asking an LLM?
I find it impressive that you can describe it this vividly despite not having a mental picture of it.
 
I find it impressive that you can describe it this vividly despite not having a mental picture of it.

And by a coincidence that must mean we are in a simulation…


https://www.theguardian.com/commentisfree/article/2024/aug/27/sensory-memory-inner-life

Have you ever had the experience where a smell or a taste pulls you into a world of memory? One bite of a cookie of a similar kind to those in your old school cafeteria, and suddenly you can practically see the linoleum floors and hear the squeak of plastic chairs. Most people can have these sudden reveries – I can’t.

When I have come across descriptions of this phenomenon – Proust’s madeleine scene, for instance, or the memory bubbles in the movie Inside Out – I’ve always assumed that it was some kind of metaphorical device. I had no idea that most people actually re-experience moments from their pasts in some sensory detail, even if it’s a bit shaky or faint.

I have come to understand that my version of reminiscing is not nearly as richly textured. Just now, I heard a song I once played in my high-school orchestra, and it reminded me of the time when a (lesser) violinist named Barbara almost punched me after I corrected her bowing.

But I don’t remember what she looked like, how the band room smelled, or the fear I must have felt when I noticed her little fists balling up. All I remember is the story – a tale I must have recounted immediately afterwards, and then told and retold until it wore a groove into my brain.

Sensory memories that you can replay are called episodic memories, while remembered facts and stories are known as semantic memories. This may seem like a subtle difference, but these two types of memory rely on different brain networks.


…snip…
 
I find it impressive that you can describe it this vividly despite not having a mental picture of it.

No shade on Darat, but it's a trick so simple even an LLM chatbot can do it. Human children can do it pretty much from the moment they develop language. Darat's been practicing his entire life. It would be impressive, in a sense, if he couldn't.
 
No shade on Darat, but it's a trick so simple even an LLM chatbot can do it. Human children can do it pretty much from the moment they develop language. Darat's been practicing his entire life. It would be impressive, in a sense, if he couldn't.


It is interesting how anything an LLM can do is inherently simple, so if a human can do it, it must also be simple.

As I see it, humans make such descriptions from experience with other such descriptions. Only the most talented could do it without ever having seen a similar description (not necessarily of apples). An LLM does exactly the same thing.

When a human learns to do something, it is usually done by observing how others do it, and through experience with similar tasks. LLMs learn in exactly the same way, but because they don’t have eyes or limbs, their experience is exclusively gained by collecting texts of others doing this or similar tasks.

LLMs work by statistically finding the most likely answers given the input. It is certainly not sentience, but we are forced to conclude that neither is human problem solving.

Humans can get annoyed, angry, happy etc. because of biological processes going on in the brain or body. Obviously, LLMs can't have these emotions, even if they can tell you every detail of how such emotions feel. But then, emotions are not a sign of sentience. You can have humans who feel few emotions because of psychological disorders, but we still deem them sentient.

I believe that self-awareness is necessary for sentience, and I do not really think that LLMs display self-awareness, even if they can act so. But then, self-awareness is itself not well-defined, as we have seen here on ISF in the JREF days when there was one user who pointed out that even simple measurement instruments can be self-aware.

So we are left with the old saying that we know sentience when we see it, and this is obviously dependent on our bias.

So everything humans do is complicated and a sign of sentience, whereas everything LLMs do is simple, and obviously not a sign of any degree of sentience.
 
I just went to the State Fair art show, and while photography used to be minimally represented, now it's about half the entries. I suspected that many of them were AI generated because they're just too "perfect". (I have done some extensive photography myself.) Oddly, I wouldn't object to them creating the image in AI, then painting it on paper. At least that takes some effort.
 
How is this different from forms of psychopathy?

I would say the key difference is that sociopaths don't act the way they do because someone told them to act that way, and they won't stop behaving the way they do when someone tells them to stop.
 
It is interesting how anything an LLM can do is inherently simple, so if a human can do it, it must also be simple.

Rote recital of remembered information, without needing any insight or understanding, is inherently simple. Unlike, say, differential calculus, it's a task that can be taught to small children.

Not everything a computer can do is inherently simple. What LLMs do, though, is inherently simple. Darat doesn't need to visualize an apple to describe an apple. Neither does an LLM.
 
I would say the key difference is that sociopaths don't act the way they do because someone told them to act that way, and they won't stop behaving the way they do when someone tells them to stop.

That they are closed to inputs at the end of the prompt - both internal and external - is a programming decision, not a constraint of the type/class.
 
Rote recital of remembered information, without needing any insight or understanding, is inherently simple. Unlike, say, differential calculus, it's a task that can be taught to small children.
The reason people are swooning over the capabilities of LLMs is that they are writing theses, poetry, jokes etc., which is not immediately recognisable as "rote recital", even if it is not always brilliant. And it is certainly not something you can teach small children.

The demand for “understanding” is strange, because it is normally just assumed that people who write a well-written text “understand” their subject, and nobody has given a good definition of “understanding” anyway.
 
I could probably teach a child the copy/paste function.

Oh, but an LLM isn't copy/paste.

I know, that's why my child's plagiarism will be much better written.
 
The demand for “understanding” is strange, because it is normally just assumed that people who write a well-written text “understand” their subject, and nobody has given a good definition of “understanding” anyway.

But it can't be assumed that ChatGPT "understands" the subject it's outputting, because we know for a fact the output is determined by the statistical likelihood of certain word combinations given the presence of other word combinations, and not by comprehension of facts and concepts or logical associations, because that's how the program was designed to operate.

Put less purple-y: ChatGPT will return 4 as the answer to 2 + 2 because in its training data the character set "2 + 2 =" was most commonly followed by the character 4. A five-year-old child will give 4 as the answer to 2 + 2 because one time in school she placed two little blocks on her desk, and then placed two more little blocks on her desk next to the first two, and then counted the blocks all together and discovered there were four blocks in total.
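
To make that concrete, here is a deliberately toy sketch in Python - purely illustrative and hypothetical, not a claim about ChatGPT's actual architecture or training data - of "answering" 2 + 2 by doing nothing more than counting which character most often followed the string "2 + 2 = " in some made-up training text:

```python
from collections import Counter

# Hypothetical, made-up "training text"; illustrative only, not a real corpus.
training_text = [
    "2 + 2 = 4",
    "2 + 2 = 4",
    "2 + 2 = 4",
    "2 + 2 = 5",  # noisy data turns up too
]

prompt = "2 + 2 = "

# Count which single character follows the prompt in the "corpus".
followers = Counter(
    line[len(prompt)]
    for line in training_text
    if line.startswith(prompt) and len(line) > len(prompt)
)

# "Answer" by emitting the statistically most common continuation.
answer, count = followers.most_common(1)[0]
print(answer)  # prints "4" - not from adding anything up, but because "4" most often followed the prompt
```

The child's blocks-on-the-desk route and this frequency-lookup route arrive at the same output by very different means, which is the point being made above.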
 
You could argue that a 3-year-old will put 2 and another 2 blocks together to make 4 because someone told the kid that that makes 4 - it might take a while for them to realize why it does.
An LLM will never get there on its own, but there might be other A.I. that could extrapolate rules out of the patterns an LLM identifies.
 
But it can't be assumed that ChatGPT "understands" the subject it's outputting, because we know for a fact the output is determined by the statistical likelihood of certain word combinations given the presence of other word combinations, and not by comprehension of facts and concepts or logical associations, because that's how the program was designed to operate.

Put less purple-y: ChatGPT will return 4 as the answer to 2 + 2 because in its training data the character set "2 + 2 =" was most commonly followed by the character 4. A five-year-old child will give 4 as the answer to 2 + 2 because one time in school she placed two little blocks on her desk, and then placed two more little blocks on her desk next to the first two, and then counted the blocks all together and discovered there were four blocks in total.

Simple math questions were a failure point for older-generation LLMs, and I think that did show a lack of "understanding". The classic (and in this area that means it lasted a generation) was to ask one which is larger, 9.8 or 9.11; they would come back with 9.11 being larger than 9.8. The reasoning they would give is that 11 is larger than 8. Now, as I said, I think that did show a lack of what we would call human understanding. However, at a charity quiz night about a month ago I was chatting with two ex school teachers - one was a maths teacher who taught 11-16 and the other had taught 6 to 11 year olds - and I learned something surprising. That is a common mistake for kids to make when learning about maths and decimals, and some will still not understand it at the age of 16! So again, LLMs are closely mimicking human behaviour, even as to what they don't initially "understand" and have to be taught specifically.
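
For what it's worth, the reasoning error described above is easy to write down explicitly. A small illustrative sketch (hypothetical code, showing only the mistaken rule, not anything about how an LLM is implemented): compare the digits after the decimal point as whole numbers, the way the kids and the older models did, versus comparing the actual values.

```python
def naive_compare(a: str, b: str) -> str:
    """Return the 'larger' number the way the mistaken reasoning does."""
    a_whole, a_frac = a.split(".")
    b_whole, b_frac = b.split(".")
    if a_whole != b_whole:
        return a if int(a_whole) > int(b_whole) else b
    # The error: treat the fractional digits as integers, so 11 beats 8.
    return a if int(a_frac) > int(b_frac) else b

print(naive_compare("9.8", "9.11"))                        # "9.11" - the mistaken answer
print("9.8" if float("9.8") > float("9.11") else "9.11")   # "9.8"  - the correct comparison
```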
 