
Merged Artificial Intelligence

"“Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”"

Are they doing the blurring or is the blurring the reality?

For me the reality is the blurring: the LLMs are not as sentient as a slime mould (at least not yet), but what they are doing is showing us ways that aspects of our sentience can arise from computation. (ETA) Aspects that we've long thought were miraculous abilities unique to humans.

Looks like you got a little ways into the article... do you consider yourself a stochastic parrot?
 
"Why can't we" - you can't just declare it like that as if it is an axiom. (Well, you can ;) but you aren't saying why we can't.)

Our "understanding" arose by "stumbling"; why can't understanding again happen by stumbling?

For the same reason that I can't create a cat by throwing stones into a lake. Just because one thing was the result of pseudo-random processes, doesn't mean that you can recreate that thing with a set of completely different pseudo-random processes.

Also do you want to share what you mean by "understanding" so we are on the same page?

I suspect we have rather different definitions. To me one aspect of human understanding is the internal narrative we create to explain aspects of our behaviour; in other words it's illusory in the sense "free will" is: we are deterministic, if not predictable, doughnuts of mainly water ambulating along in the environment. I have no more access to where my "ideas", my solutions to problems, come "from" than a slime-mould does. I'm pretty certain there is some form of computation that is inaccessible to "me" that is carried out by the hardware to come up with responses to inputs from my internal and external environment.

Yes, and without an internal observer, it's all just consistent or random responses to stimuli. Even if the observer is just a retroactive illusion, there is no possible understanding without it, because the concept doesn't make sense otherwise. It wouldn't just be the magic trick of AIs that shows understanding, but every single thing in the universe that does anything. LLMs could have been trained on absolute gibberish in exactly the same way, and no one would ever mistake them for being understanding creatures. Because it's all just a trick -- the understanding doesn't happen in the LLM, but in the person interacting with it.
 
"“Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”"

Are they doing the blurring or is the blurring the reality?

For me the reality is the blurring: the LLMs are not as sentient as a slime mould (at least not yet), but what they are doing is showing us ways that aspects of our sentience can arise from computation. (ETA) Aspects that we've long thought were miraculous abilities unique to humans.

I never thought that learned response to patterns of stimuli without abstract comprehension of the patterns was an aspect of human sentience, let alone "miraculous" or "unique to humans". If it were, we wouldn't be able to train dogs to do tasks, or reliably employ animals for work in any way.

And I would argue that LLMs don't show us ways that aspects of our sentience - or rather, of our stimulus-response behavior - can arise from computation. These aspects arose from the sentience of the human programmers. They didn't just throw a bunch of logic gates in a vat full of saline solution, run a charge through it, and come back nine months later saying "it's alive!"

No, they used their own sentience and abstract reasoning to precision-engineer a bunch of logical rules, precisely because they wanted to try to emulate an aspect of our stimulus-response behavior.

They basically dumbed down an aspect of our sentience - abstract reasoning in natural language - to a rote stimulus-response behavior free of any abstract reasoning or comprehension. The result didn't arise from computation. It was reduced down to computation. And this reduction was achieved by getting rid of all the parts that actually have to do with sentience.

No plausible mechanism for sentience arising from this kind of reductive approach has been demonstrated. These LLMs aren't showing us anything we didn't already know machines and insects can do, and we still have no clear idea about how or why sentience actually arose or could arise in one particular species of animal.

Might as well say we've been shown how some aspects of our sentience can arise from a cube farm full of zombies, or a sufficiently-large Turing machine implemented in Conway's Game of Life implemented in a bunch of rocks.
 
I have issues with the word "sentience" .. it means exactly nothing to me. We don't even have an exact equivalent in Czech. We translate it as feeling or perceiving, both of which are well defined and different.
IMHO it's better to focus on simpler, better-defined functions .. like self-awareness, emotions, the ability to analyze etc.
Maybe I should ask an AI .. it's good at solving poorly defined problems, after all. Some even joke that it's AI as long as the problem is poorly defined.
 
Which humans? Many humans seem to have to go with the flow, be a member of the tribe, follow the doctrine and so on.

That some don't or choose not to is irrelevant; what matters is that it's a mental capacity that humans have, but LLMs do not.
 
Are they doing the blurring or is the blurring the reality?

They are doing the blurring. As with all hype-based tech, AI investors have a financial stake in promoting their product to the point of deliberately and routinely exaggerating and misrepresenting its capabilities.
 
Alex Jones "interviewed" ChatGPT on his show, proving conclusively that it is more intelligent than Alex Jones.
 
Looks like you got a little ways into the article... do you consider yourself a stochastic parrot?

No, I consider myself a human with human sentience. Which means I'm a doughnut-shaped bag of mostly water that ambulates through its environment responding to changes in that environment. Nothing more or less special than that. I am certain that a lot of my responses to my environment are computational and to a certain extent deterministic, and use past data to predict what my next reaction should be to maintain the integrity of the doughnut, without any level of sentience involved.
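For anyone unfamiliar with the term, here's a minimal sketch (in Python, with a made-up toy corpus) of what "stochastic parrot" means mechanically: a toy model that only counts which word tends to follow which, then generates text by sampling from those counts. Real LLMs are vastly more sophisticated, but the basic loop - predict the next token from past data - is the same idea.

import random
from collections import defaultdict

def train_bigrams(text):
    # Count which words have followed which in the training text.
    words = text.split()
    follows = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def parrot(follows, start, length=10):
    # Generate by repeatedly sampling a word that has followed the current one.
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept on the mat"
print(parrot(train_bigrams(corpus), "the"))  # e.g. "the cat slept on the mat and the cat sat"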
 
They are doing the blurring. As with all hype-based tech, AI investors have a financial stake in promoting their product to the point of deliberately and routinely exaggerating and misrepresenting its capabilities.

On the financial side I agree - I'd say many have gone beyond blurring to out-and-out lying! It's the more philosophical side, if you like, that I was referring to with "blurring".
 
I have issues with the word "sentience" .. it means exactly nothing to me. We don't even have an exact equivalent in Czech. We translate it as feeling or perceiving, both of which are well defined and different.
IMHO it's better to focus on simpler, better-defined functions .. like self-awareness, emotions, the ability to analyze etc.
Maybe I should ask an AI .. it's good at solving poorly defined problems, after all. Some even joke that it's AI as long as the problem is poorly defined.

All I mean by that is the internal narrator we have, and the black box that is the source of all this special "understanding" only we humans have. Which I note folk are still not defining! :(
 
Darat said:
Looks like you got a little ways into the article... do you consider yourself a stochastic parrot?

No, I consider myself a human with human sentience. Which means I'm a doughnut-shaped bag of mostly water that ambulates through its environment responding to changes in that environment. Nothing more or less special than that. I am certain that a lot of my responses to my environment are computational and to a certain extent deterministic, and use past data to predict what my next reaction should be to maintain the integrity of the doughnut, without any level of sentience involved.


Asked that in reference to this, in the article you earlier quoted...

“On the Dangers of Stochastic Parrots” is not a write-up of original research. It’s a synthesis of LLM critiques that Bender and others have made: of the biases encoded in the models; the near impossibility of studying what’s in the training data, given the fact they can contain billions of words; the costs to the climate; the problems with building technology that freezes language in time and thus locks in the problems of the past. Google initially approved the paper, a requirement for publications by staff. Then it rescinded approval and told the Google co-authors to take their names off it. Several did, but Google AI ethicist Timnit Gebru refused. Her colleague (and Bender’s former student) Margaret Mitchell changed her name on the paper to Shmargaret Shmitchell, a move intended, she said, to “index an event and a group of authors who got erased.” Gebru lost her job in December 2020, Mitchell in February 2021. Both women believe this was retaliation and brought their stories to the press. The stochastic-parrot paper went viral, at least by academic standards. The phrase stochastic parrot entered the tech lexicon.
Edited by Agatha: Snipped for rule 4
 
And as I said - my answer is no but I don't think the options are only yes or no.

I have no idea "where" or "how" this response is generated; it simply appears to my narrator as I type the words. I'm not a very fast typist, so often I find I am thinking - or being aware of - a few words ahead of what I am typing. My other half is a very fast typist, and they say that when they are typing there is no "read ahead buffer": it goes straight from wherever and however it is generated to their fingers doing the typing. And I do understand that, as when I am speaking there is no "read ahead" - words simply come out of my mouth.

Sometimes I will "mull" something over before I say something, but again the thing I am mulling over appears without any conscious thought. Another example: I can re-read this post and "think" of edits and other changes, i.e. have spontaneous edits pushed into my "conscious" mind.

Since I learned about aphantasia I am very aware that people may experience very, very different "internal" worlds, so I don't assume everyone is like me. But the fact that there is at least one of us like me means that, unless you want to claim I behave differently to all other humans, at least one person has some of the same "lacks" as LLMs.
 
OMG I take back everything bad I said about AI. Ran across a mention of "AI generated music" in a CNN story this morning and researched it. It's a thing. OMG it's a thing. And it's awesome. I found a free one (but you'd have to pay to download/save the results) where you give it a text prompt and it makes a song. No idea of whether it's truly original music or not but holy crap it's hilarious. "Sad violin song about dinosaur butler", "EDM about Queen Victoria being a penguin", "love song about tooth decay", "heavy metal monkey librarian eating cakes". I have gotten very, very little work done today. AI is hilarious when you know how to employ it.
 
It seems we’ll never get out of the problem that, for many people, it is a question of definition that LLMs are not, and can never be, sentient in any way. The goalposts will be moved over and over, and there will always be something that people can do, and LLMs cannot (apparently), and therefore LLMs cannot be sentient.

Nobody ever comes up with definitions of “sentience”, or “understanding”, or any other concept that LLMs can never aspire to.

I don’t think that LLMs are sentient, but in some areas I believe that LLMs understand stuff just as well as some humans, and I don’t think that the way LLMs operate is such a barrier to understanding and sentience that I can rule out that LLMs will achieve them. As I see it, humans use much the same processes for learning and understanding that LLMs do, and much genius in humans is due to the ability to put together data in unconventional ways - something that should be possible for LLMs also.
 
And now for something different: AI seems to be worse at solving computational problems now than it was in 2022! I read it in a local magazine, but it was without links. Apparently the source is an article in IEEE, where researchers reran hundreds of different computational problems in 5 or 6 different languages, and found that even as AI has got better at a lot of skills, the code it produced was noticeably worse in 2024 than in 2022.

I would have loved to read the original story, and see what the authors propose as the reason for this degradation.
 
And now for something different: AI seems to be worse at solving computational problems now than it was in 2022! I read it in a local magazine, but it was without links. Apparently the source is an article in IEEE, where researchers reran hundreds of different computational problems in 5 or 6 different languages, and found that even as AI has got better at a lot of skills, the code it produced was noticeably worse in 2024 than in 2022.

I would have loved to read the original story, and see what the authors propose as the reason for this degradation.
My hypothesis: Sturgeon's Law, and they expanded the training corpus to include more of the other 90%.
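To make that concrete, here's a rough sketch of the kind of methodology such a study implies: re-run a fixed set of problems against two model snapshots and compare how many generated solutions pass their tests. The generate_code and run_tests functions below are hypothetical stand-ins, not any real API; the point is only the shape of the measurement.

def pass_rate(problems, generate_code, run_tests):
    # Fraction of problems whose generated solution passes its test suite.
    passed = 0
    for prob in problems:
        solution = generate_code(prob["prompt"])   # hypothetical model call
        if run_tests(solution, prob["tests"]):     # hypothetical test harness
            passed += 1
    return passed / len(problems)

# Same problem set, two snapshots; a lower second number would be the
# "degradation" the article describes.
# rate_2022 = pass_rate(problems, model_2022_generate, run_tests)
# rate_2024 = pass_rate(problems, model_2024_generate, run_tests)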
 
Quick question:

Is there any chance that participants here are mixing up sentience (the ability to feel) with sapience (the ability to think)?

I'm a bit confused with some of the arguments and wonder if that is the reason why.
 
Quick question:

Is there any chance that participants here are mixing up sentience (the ability to feel) with sapience (the ability to think)?

I'm a bit confused with some of the arguments and wonder if that is the reason why.

I'd say zero chance. I'd say nobody is thinking of those definitions; rather, people are using sentience as a shorthand for the kind of self-aware abstract reasoning that humans seem to do, and that seems to be absent from all other animals and also absent from LLMs.

ETA: I'd also say that whichever definition you use, LLMs don't qualify.
 
