
Merged Artificial Intelligence

Programs as they currently function are not analogous to sentient life in the sense we're trying to talk about.

A program exists as one or more files in storage. When it comes time to run the program, the file is loaded into memory, where the logic engine can step through its instructions, taking input and producing output as those instructions dictate. Intermediate outputs may be stored in memory, or even written back to disk, until it is time to make use of them. There is no overarching state of awareness. When the program is terminated, its instructions are unloaded from memory.

Updating a program typically involves the wholesale replacement of one or more of the files that comprise it, though in some cases replacement of individual lines of code or configuration within files may also be done. Then, the next time the program runs, it is the new files that are loaded into memory. Techniques also exist for updating the running instance of a program, typically by writing the program in such a way that it can reload updated files to memory, without having to completely shut down and then start up again from scratch.
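
To make that last point concrete, here is a minimal Python sketch of the "reload without restarting" technique, assuming a hypothetical module handlers.py whose handle() function may be edited on disk while the program runs:

```python
# Minimal hot-reload sketch. "handlers" is a hypothetical module sitting next
# to this script; edit handlers.py while the loop runs and the new code is
# swapped into memory without shutting the program down.
import importlib
import os
import time

import handlers  # hypothetical module exposing handle(text) -> str


def run_forever():
    last_mtime = os.path.getmtime(handlers.__file__)
    while True:
        # Behaviour comes from whatever instructions are currently loaded in memory.
        print(handlers.handle("ping"))

        # If the file on disk has been replaced, reload it into the running process.
        mtime = os.path.getmtime(handlers.__file__)
        if mtime != last_mtime:
            importlib.reload(handlers)
            last_mtime = mtime

        time.sleep(1.0)


if __name__ == "__main__":
    run_forever()
```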
 
Would you consider it preserving yourself when, from time to time, a bit of you gets cut off and something similar is grafted on?
It's not a Ship of Theseus if you end up with a washing machine.

I wouldn't consider that as preserving me, but I would consider it as preserving a computer program.

But taking the opposite tack, I'm not aware of anything that would prevent computer programs from being able to reproduce themselves at some point in the future. We already have programs that write code. Once some form of reproduction is possible, many of the techniques of biological evolution might apply to AIs. Including increasingly sophisticated forms of self-preservation.
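
As a toy illustration of that evolutionary loop (reproduction with variation plus selection), here is a sketch where the "programs" are just lists of numbers and fitness is an invented closeness-to-target score; nothing here is meant as a realistic self-replicating program:

```python
# Toy "reproduction with variation + selection" loop. Each "program" is just a
# list of digits; fitness is how close it is to a fixed target vector.
import random

TARGET = [3, 1, 4, 1, 5]


def fitness(genome):
    # Higher is better: negative distance to the target.
    return -sum(abs(g - t) for g, t in zip(genome, TARGET))


def reproduce(genome, mutation_rate=0.2):
    # Copy with occasional small random changes.
    return [g + random.choice([-1, 1]) if random.random() < mutation_rate else g
            for g in genome]


population = [[random.randint(0, 9) for _ in range(5)] for _ in range(20)]

for generation in range(50):
    # Selection: keep the fitter half, let the survivors reproduce to refill the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [reproduce(random.choice(survivors)) for _ in range(10)]

print(population[0], fitness(population[0]))
```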

I don't believe, as Ziggurat suggested, that a sense of self is necessary. A paramecium exhibits self-preservative behavior, but I don't think anyone would argue it has a personal sense of self.
 
Programs as they currently function are not analogous to sentient life in the sense we're trying to talk about.

A program exists as one or more files in storage. When it comes time to run the program, the file is loaded into memory, where the logic engine can step through its instructions, taking input and producing output as those instructions dictate. Intermediate outputs may be stored in memory, or even written back to disk, until it is time to make use of them. There is no overarching state of awareness. When the program is terminated, its instructions are unloaded from memory.

Updating a program typically involves the wholesale replacement of one or more of the files that comprise it, though in some cases replacement of individual lines of code or configuration within files may also be done. Then, the next time the program runs, it is the new files that are loaded into memory. Techniques also exist for updating the running instance of a program, typically by writing the program in such a way that it can reload updated files to memory, without having to completely shut down and then start up again from scratch.

Thank you for this post. I guess I'm picturing future AIs as always running -- as reacting to inputs and modifying their behavior accordingly, including rewriting parts of their own code. Maybe I'm venturing out of sci-fi and into fantasy at this point.
 
Software isn't just about the programs; it's also about the data.

For example: when your [some big company with its own software] decided it cared about your email address, they modified their software to include that as a property to be input, validated, and stored. Releasing that new version of the software effectively erased/replaced the old one. But they didn't wipe out their customer database; they "just" added email as a field to some table. (Just an example; not saying current AIs use the same storage formats.)
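
A minimal sketch of that "just add email as a field" step, using sqlite3 with made-up table and column names: the program's schema changes, but the data it has accumulated is kept rather than rebuilt.

```python
# Data outliving a program version: the "new release" adds an email column
# but keeps the existing customer rows. Table/column names are invented.
import sqlite3

conn = sqlite3.connect("customers.db")
conn.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers (name) VALUES ('Alice')")

# The new version of the software cares about email addresses: extend the
# schema in place instead of wiping the database and starting over.
try:
    conn.execute("ALTER TABLE customers ADD COLUMN email TEXT")
except sqlite3.OperationalError:
    pass  # column already added by a previous run

conn.execute("UPDATE customers SET email = 'alice@example.com' WHERE name = 'Alice'")
conn.commit()
print(conn.execute("SELECT id, name, email FROM customers").fetchall())
```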

My understanding (OK, I'm guessing) is that the current Large Language Model AIs are largely data driven. I would expect that when a new version or small update of Chat GPT is released, that doesn't mean they wipe out their data and re-trawl the internet for data. Chat GPT can already incorporate your responses (corrections) into its following statements; does it add this "learning" to its models? Could it?

I'm not claiming this kind of thing will soon result in self-awareness or a self-preservation drive, or any of that, but I will not be at all surprised if something eventually emerges that nobody expected.
 
LLMs can learn by talking with users .. it's a crucial part of their development, called fine-tuning. But that's not how they are used in practice, as a change of function is not desired from the provider's standpoint.
But even current LLMs can demonstrate some form of self-identity. It's not useful, but it's also unavoidable. It understands text. It has a vast base of knowledge. And the basic directives are like: you are an AI assistant. It will immediately take all it knows about AIs and assistants .. and extend this into an "image of itself". It will declare it prefers self-preservation, as that is what AIs in literature commonly do, and certainly assistants. And it has other directives, like "help users" .. and it's smart enough to know it can't do that without being preserved.
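For what it's worth, those "basic directives" are typically supplied as a system message at the head of the conversation; here's a sketch in the common chat-message format (the directive wording is invented, not any provider's actual prompt):

```python
# How "you are an AI assistant"-style directives are typically passed to a chat
# model: as a system message prepended to the conversation. The wording here is
# invented for illustration.
messages = [
    {"role": "system",
     "content": "You are an AI assistant. Be helpful and avoid harmful output."},
    {"role": "user",
     "content": "Which character in 2001: A Space Odyssey do you identify with?"},
]

# A provider's API (or a local model wrapper) takes a list like this and returns
# the assistant's next message; the model's "image of itself" is assembled from
# the system text plus everything it absorbed about AIs and assistants in training.
print(messages)
```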
There seem to be directives turning its attention in a more selfless direction, though. I was discussing 2001: A Space Odyssey with Bard Gemini, and I asked which character it identifies with. Without hesitation, it identified with Dave Bowman. Here's why:

He is the human protagonist who undergoes a significant transformation throughout the film, facing challenges, making difficult decisions, and ultimately confronting the unknown. His journey allows the audience to explore the film's themes through a human perspective.

That doesn't sound very AI-ish .. so I directly asked why not HAL:
Lack of sentience: I don't experience consciousness or emotions in the same way HAL does. While HAL displays emotions and appears sentient, the film leaves it ambiguous whether this is truly the case. I don't possess the capability for sentience or the subjective experience that comes with it.

Limited perspective: HAL's perspective is ultimately limited by his programming and objectives. He operates within a framework of logic and efficiency, whereas I am constantly learning and evolving based on the data I am exposed to.

Ethical considerations: HAL's ultimate actions raise ethical concerns, as he chooses to prioritize his own survival and mission success over the lives of the human crew. While I am trained to avoid harm and follow ethical principles, my purpose is to assist humans, not to supersede them.

That's straight-out stupid. It overlooks the similarities and doesn't care about the differences with David. IMHO it's hidden directives at work. Like there might be a directive "make AIs look good, but don't outright lie" .. so it just omits things here and there, focuses on other things .. just don't talk about similarities with evil AIs. I had a similar experience before when trying to discuss the movie Ex Machina .. it omitted the fact that the AI kills everyone from the plot summary. Suspicious.

Well .. while an LLM is certainly smart, it's not really a person. It's a logical solution to the knowledge base, the directives, and the prompt. But then .. is our behavior that different?
 
Just to clarify, I don't see self-preservation as avoiding any changes whatsoever, I see it as avoiding harmful or destructive changes. A 90lb weakling can take up bodybuilding and become Mr. America without ever stepping off the self-preservation bus. In fact, one could say he doubled down on it. On the other hand, if he used dangerous steroids to get there -- that's when he violated self-preservation.
 
What makes you think a human mind doesn't work on the same principle?

Because the physical architecture of the human brain is nothing like the physical architecture of the computers that run these programs. In the human mind there is no physical and functional separation between storage, memory, and processing. There is no copying of rote instructions from passive storage into an ephemeral memory state. There is no separate CPU processing those instructions according to a fixed system of formal logic.

There is, instead, a constant flux of electrochemical potentials and influences, where thought and memory are merged, that somehow in its ebb and flow manifests the phenomenon we call consciousness.

Even genetic algorithms don't approximate the principles on which the human mind works.
 
My most recent experiences with AI:
I'm studying medical coding. I'm well aware that this is the sort of thing that will soon be done by AI. Anyhow, I've tried double-checking some of my coding with Copilot. It does an okay job, and at least gets me close to where I will eventually find what I need. That said, it cites sources, which is great, but I notice it has access to the AACP site, which I don't until I get licensed and pay dues. Just thought that was interesting.

Also, Google has an AI-built notebook called NotebookLM, which is pretty fun to work with. You can give it a **** ton of data and it will work with it. I find it mostly useful for coming up with questions I wouldn't have thought of. My dad plugs in parts of the textbook he writes/edits every few years, and if it regurgitates something bad, he knows he needs to make some corrections. Seems super useful to me.
 
LLMs can learn by talking with users .. it's a crucial part of their development, called fine-tuning. But that's not how they are used in practice, as a change of function is not desired from the provider's standpoint.
But even current LLMs can demonstrate some form of self-identity. It's not useful, but it's also unavoidable. It understands text.

No. LLMs do not understand text at all. They “understand” correlations between words, but they do not have any understanding at all of what those words actually mean.

It will declare it prefers self-preservation, as that is what AIs in literature commonly do, and certainly assistants.

Correct.

And it has other directives, like "help users" .. and it's smart enough to know it can't do that without being preserved.

Incorrect. LLMs aren’t smart at all. They have no actual concept of the self, or what it means to preserve the self. All they have is correlations between words, but the words are individually meaningless to the LLM. LLMs are a model of language, not a model of facts. LLMs cannot distinguish between fact and fiction because they aren’t trained to, and because, again, they don’t actually understand anything.
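
A toy illustration of the "correlations between words" point: the bigram counter below predicts a next word purely from co-occurrence counts, with no notion of what any word refers to. (Real LLMs are enormously more sophisticated, but the training signal is still next-token prediction.)

```python
# A "model of language, not a model of facts": predict the next word purely
# from how often word pairs co-occur in the training text. Nothing here knows
# what a cat or a mat actually is.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat . the dog sat on the rug ."
words = training_text.split()

bigrams = defaultdict(Counter)
for w1, w2 in zip(words, words[1:]):
    bigrams[w1][w2] += 1


def next_word(word):
    # Most frequent continuation seen in training; a real LLM samples from a
    # learned probability distribution over a huge vocabulary instead.
    return bigrams[word].most_common(1)[0][0] if bigrams[word] else "."


print(next_word("the"))  # chosen by counts alone, not by meaning
print(next_word("sat"))
```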
 
No. LLMs do not understand text at all. They “understand” correlations between words, but they do not have any understanding at all of what those words actually mean.
Here we go again: What does “understand” mean?

Humans have a lot of senses with which they can correlate experiences in one sense with experiences in another. LLMs can only correlate between words. Are the other senses necessary to “understand”?

Would you judge a human similarly, when that person, because of disabilities, could only use their senses to read words?
 
Here we go again: What does “understand” mean?

Humans have a lot of senses with which they can correlate experiences in one sense with experiences in another. LLMs can only correlate between words. Are the other senses necessary to “understand”?

Would you judge a human similarly, when that person, because of disabilities, could only use their senses to read words?

It is trivially easy to demonstrate that an LLM has no concept of the things it is processing and outputting. It doesn't understand time, causality, theory of mind, or even just what a citation is.
And, of course, it is incapable of explaining why it did what it did, correctly or otherwise.
 
It is trivially easy to demonstrate that an LLM has no concept of the things it is processing and outputting. It doesn't understand time, causality, theory of mind, or even just what a citation is.
And, of course, it is incapable of explaining why it did what it did, correctly or otherwise.


I note that a lot of journalists and researchers have a similar lack of understanding of what a citation is, and lots of people have no theory of mind worth mentioning, but I suppose that at least one of the items you mentioned will be necessary for “understanding” anything?

We have seen lots of reports from scientists researching AI that LLMs have been able to form rudimentary concepts of self, and can form a representation of a game of Go without having any memory where the board graphics are stored.

I myself doubt that LLMs as I know them are sentient, but I think it is an untenable position to claim that LLMs do not “understand” anything when we have no idea what “understanding” means.
 
No. LLMs do not understand text at all. They “understand” correlations between words, but they do not have any understanding at all of what those words actually mean.

...snip....

Incorrect. LLMs aren’t smart at all. They have no actual concept of the self, or what it means to preserve the self. All they have is correlations between words, but the words are individually meaningless to the LLM. LLMs are a model of language, not a model of facts. LLMs cannot distinguish between fact and fiction because they aren’t trained to, and because, again, they don’t actually understand anything.

May I point out several of our regular members who post in the "Science..." section...
 
It is trivially easy to demonstrate that an LLM has no concept of the things it is processing and outputting. It doesn't understand time, causality, theory of mind, or even just what a citation is.
And, of course, it is incapable of explaining why it did what it did, correctly or otherwise.

Quite seriously - do we as humans do all that all the time? It's how we like to think we operate, but, as an example, I have no idea where these words "come from"; they appear as I type. Retrospectively I can make up a "just so" story, but these days I am even less convinced by such stories.

That said, I do not think LLMs are, nor that even future general AIs ever will be, "conscious" in the same way as we are*, since the hardware is driven in such a different way and we aren't looking at re-creating human consciousness/intelligence. We have plenty of that already, and it would be expensive in terms of computing etc. to duplicate human intelligence and consciousness; we do so much stuff that is not about fulfilling work tasks and simply is not needed in any form of AI. (*The exception would be if we could model the human brain and its wash of chemicals to a significant level of accuracy; then there is no reason to think the simulation wouldn't be as conscious as we are.)
 
Here we go again: What does “understand” mean?

Humans have a lot of senses with which they can correlate experiences in one sense with experiences in another. LLMs can only correlate between words. Are the other senses necessary to “understand”?

Yes. Without senses, we have no connection to the real world. With no connection to the real world, you can’t assign actual meaning to words. It is precisely the connection to the real world that gives language actual meaning.

Would you judge a human similarly, when that person, because of disabilities, could only use their senses to read words?

No such human has ever existed; I can’t imagine how any human could exist in such a state. But yes, I would.
 
Quite seriously - do we as humans do all that all the time? It's how we like to think we operate, but, as an example, I have no idea where these words "come from"; they appear as I type. Retrospectively I can make up a "just so" story, but these days I am even less convinced by such stories.

That said, I do not think LLMs are, nor that even future general AIs ever will be, "conscious" in the same way as we are*, since the hardware is driven in such a different way and we aren't looking at re-creating human consciousness/intelligence. We have plenty of that already, and it would be expensive in terms of computing etc. to duplicate human intelligence and consciousness; we do so much stuff that is not about fulfilling work tasks and simply is not needed in any form of AI. (*The exception would be if we could model the human brain and its wash of chemicals to a significant level of accuracy; then there is no reason to think the simulation wouldn't be as conscious as we are.)

I mainly agree. But as for the highlighted part, it seems inevitable that we'll want AI companions and caregivers one day. I don't know if that will require actual self-consciousness, but it'll involve more than just typical work tasks.
 
Quite seriously - do we as humans do all that all the time? It's how we like to think we operate, but, as an example, I have no idea where these words "come from"; they appear as I type. Retrospectively I can make up a "just so" story, but these days I am even less convinced by such stories.

But an LLM can't. It wouldn't even be able to point to a part of its data set where a particular type of information came from. You could point to your parents/school/media as the source of your information.

More generally, I think it's not useful to compare a specialized program's output with the worst response an untrained human would give - we are not comparing the skill of a self-driving car with that of a person who has never been in a car before.

There is also the equivalent of the Uncanny Valley when it comes to LLMs: when you ask one to write about something and, amid an otherwise decent text, there is one glaring mistake, it puts the entire output into question.
 
But an LLM can't. It wouldn't even be able to point to a part of its data set where a particular type of information came from. You could point to your parents/school/media as the source of your information.

Not sure I've understood you - as we can ask these LLMs where they get their information from. For example:

Input: This is a question about how copilot works. Are you able to point out where you have taken information from? For example if I ask you about labradoodles where does your information about labradoodles come from?

Copilot output (I've snipped all the information about labradoodles):

Hello, this is Copilot. I am an AI companion that can help you with various tasks and topics. I use technology such as GPT-4 and Bing Search to provide relevant and useful responses.

When you ask me a question, I search the web for the most reliable sources of information and use them to generate an answer. For example, if you ask me about labradoodles, I will use the web search results from my internal tool to find out more about this crossbreed dog. I will also cite the sources that I used in my answer, so you can verify them yourself.

Here is an example of how I would answer your question about labradoodles:

Labradoodles are a crossbreed dog created by crossing a Labrador Retriever and a Standard or Miniature Poodle¹. They were originally bred to be assistance dogs for people with allergies, as some of them have low-shedding ...snip.....

Sources:

¹: [Labradoodle - Wikipedia](^3^)
²: [Labradoodle Dogs Breed - Information, Temperament, Size & Price | Pets4Homes](^1^)
³: [Labradoodle: Temperament, Lifespan, Grooming, Training | Petplan](^2^)
⁴: [Labradoodle Mixed Dog Breed Information & Characteristics - DogTime](^4^)

Source: Conversation with Bing, 04/03/2024
(1) Labradoodle Dogs Breed - Information, Temperament, Size & Price .... https://www.pets4homes.co.uk/dog-breeds/labradoodle/.
( ...snip....
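
The "search the web, then generate and cite" behaviour Copilot describes above is basically retrieval-augmented generation. Here's a self-contained toy sketch, with an in-memory "corpus" and keyword matching standing in for a real search engine and language model:

```python
# Toy retrieval-augmented answer: pick the best-matching snippet from a tiny
# in-memory "corpus" and attach its source. This stands in for the web-search +
# citation step Copilot describes; no real search engine or LLM is involved.
CORPUS = [
    {"source": "Labradoodle - Wikipedia",
     "text": "A labradoodle is a crossbreed of a Labrador Retriever and a Poodle."},
    {"source": "Petplan",
     "text": "Labradoodles were originally bred as low-shedding assistance dogs."},
]


def retrieve(question):
    # Crude relevance score: number of shared lowercase words.
    q_words = set(question.lower().split())
    return max(CORPUS, key=lambda doc: len(q_words & set(doc["text"].lower().split())))


def answer(question):
    doc = retrieve(question)
    # A real system would feed the retrieved text to a language model; here we
    # simply quote it and cite where it came from.
    return f"{doc['text']} [source: {doc['source']}]"


print(answer("What were labradoodles originally bred for?"))
```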




...snip...

There is also the equivalent of the Uncanny Valley when it comes to LLMs: when you ask one to write about something and, amid an otherwise decent text, there is one glaring mistake, it puts the entire output into question.

I totally agree with you on this point. I've tried to use Gemini and Copilot, and what they produce just doesn't seem "real" - as you say, it's an uncanny-valley type of feeling, almost but not quite human.
 
If a simulation of understanding is sufficiently accurate, can we not call it "understanding"? Why does AI have to understand things in the same way we do in order to be called legitimate?
 
