Merged Artificial Intelligence

That's assuming self-preservation is one of its goals. I wouldn't assume that's going to be the case.

Life has self-preservation built in because the stuff that didn't, didn't last. But AIs don't reproduce, and they don't experience natural selection. They experience artificial selection from humans. If we don't either explicitly program in self-preservation or implicitly select for it, there's no reason to expect it.

Self-preservation is a reasonable sub-goal given most goals that you actually give your AI. If you've got an over-engineered agentic intelligent alarm clock that's designed with the goal of waking you up in the morning, and someone shuts it off, it's not going to achieve its goal of waking you up at the specified time. If it's intelligent enough, it can predict that outcome, and also the chance of it happening. If that chance is high enough it might reasonably plan countermeasures (access to a secondary power source, say), not for the sake of self-preservation, but for the sake of achieving the explicit goal of waking you up.

The whole point of an intelligent agentic system is that it can make predictions about future states and plans for how to deal with them, including forming sub-goals. And self-preservation is so necessary to achieving its explicit goals (the ones you created it for) that it's reasonable to expect it to arise.
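To make that concrete, here's a toy sketch in Python (everything in it is invented for illustration; it's not how any real agent is built). The planner is only ever given the explicit goal, but a "keep myself running" countermeasure falls out of maximising the chance of achieving it.

```python
# Toy illustration only - not how any real agent is built.
from dataclasses import dataclass, field

@dataclass
class Plan:
    goal: str
    sub_goals: list[str] = field(default_factory=list)

def plan_for(goal: str, predicted_risks: dict[str, float], risk_threshold: float = 0.2) -> Plan:
    """Build a plan for `goal`, adding countermeasures for failure modes the
    agent predicts are likely. `predicted_risks` maps a failure mode to the
    agent's estimated probability of it happening."""
    plan = Plan(goal=goal)
    for risk, probability in predicted_risks.items():
        # "Stay alive" is never a goal in itself; a countermeasure is added
        # only because it raises the probability of the explicit goal.
        if probability > risk_threshold:
            plan.sub_goals.append(f"mitigate: {risk}")
    return plan

alarm_plan = plan_for(
    goal="wake the user at 07:00",
    predicted_risks={
        "someone switches the clock off overnight": 0.4,
        "mains power cut before 07:00": 0.05,
    },
)
print(alarm_plan.sub_goals)  # ['mitigate: someone switches the clock off overnight']
```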
 
I've got Copilot right here in my browser. What kind of prompt about popes and kings would you like me to try?

ETA: Note that it requires a minimum level of detail in the prompt, which is why I had to add "during the day, in a busy city" to the first one.

Do whatever you want. It's not like you know first hand what a busy London street is supposed to look like. Maybe you'll do better with popes and kings.
 
Self-preservation is a reasonable sub-goal given most goals that you actually give your AI.

The most advanced AIs are going to be run on big computers. They won’t have and won’t need any control over their physical existence. We will preserve them for as long as we want them, and then discard them when we don’t. Self preservation will be pointless for them. Self preservation might be useful for lower level AIs in charge of physical robots, but it’s really the hardware, not the software, that they would need to preserve. And that’s only useful when the AI can actually manipulate the physical world in order to preserve itself. An AI incapable of ensuring its survival has no use for a survival instinct.
 
That's an uninformed view at best. Image generators certainly have biases, but those biases come from the training sets, and it's not easy to change them, because the sets consist of millions of images, usually collected in a "take everything you can find" manner. So a generator can, for example, prefer people in suits, because most news photos are of politicians. But if you wanted only white people in those training sets... there is really no easy way to do that.
Also, image generators are not very intelligent in the everyday sense. Their understanding of text is very basic: the current generation can just about put all the listed objects into the picture, but it has problems putting them in specified locations or in a specified order. The progress is fast, though; the recently announced Stable Diffusion 3 seems to be a lot better at this.

Dall-e uses ChatGPT to expand the image prompts you give it. It is entirely possible to have ChatGPT add "no black faces" or "include black faces" to prompts; indeed, all the commercial generative AIs already do a lot of pre-processing of prompts to prevent certain images being created, for example to ensure they don't generate images of child pornography. Most now do a second check, using one of their AIs to describe the generated image; if that description contains something prohibited, the image is not delivered. You sometimes see this in action: the AI will start to generate an image, and it is only after the generation that the image is "taken back" as a violation of their T&Cs.
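To be clear, none of us has seen the vendors' actual code, but a rough sketch of the flow just described (prompt expansion, generation, then a describe-and-check pass) would look something like this, with placeholder functions standing in for the vendor's models:

```python
# Rough sketch of the described pipeline. The helper functions are placeholders
# for the vendor's own models, not real API calls.

PROHIBITED_TERMS = {"child", "gore"}  # stand-in list; real deployments use far richer policies

def expand_prompt(user_prompt: str) -> str:
    """Placeholder for the LLM that rewrites the user's prompt before generation."""
    return user_prompt + ", photorealistic, detailed"

def generate_image(prompt: str) -> bytes:
    """Placeholder for the image generator itself."""
    return b"...image bytes..."

def describe_image(image: bytes) -> str:
    """Placeholder for the vision model that captions the generated image."""
    return "a busy city street during the day"

def create_image(user_prompt: str) -> bytes | None:
    expanded = expand_prompt(user_prompt)    # pre-processing of the prompt
    image = generate_image(expanded)         # generation
    caption = describe_image(image).lower()  # second check: describe the result
    if any(term in caption for term in PROHIBITED_TERMS):
        return None                          # image is "taken back" after generation
    return image
```

The point of the sketch is the ordering: the second check runs after generation, which is why you sometimes see an image start to render and then get withdrawn.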
 
Dall-e uses ChatGPT to expand the image prompts you give it. It is entirely possible to have ChatGPT add "no black faces" or "include black faces" to prompts; indeed, all the commercial generative AIs already do a lot of pre-processing of prompts to prevent certain images being created, for example to ensure they don't generate images of child pornography. Most now do a second check, using one of their AIs to describe the generated image; if that description contains something prohibited, the image is not delivered. You sometimes see this in action: the AI will start to generate an image, and it is only after the generation that the image is "taken back" as a violation of their T&Cs.

Yes, it's deliberate manipulation of the prompt. I wonder how many people were involved, whether it was an attempt to address some real issue, and how well it was tested, if at all... Gemini going all black, I mean. Also, it seems the Gemini image generator is down at the moment... they clearly failed to engineer the prompt to just the right amount of diversity.
 
I have tried many times, unsuccessfully, to get an LLM to make a picture of "Woodstock" from Peanuts - at the very best, I get a Snoopy/Charlie Brown/Woodstock hybrid.
 
The most advanced AIs are going to be run on big computers. They won’t have and won’t need any control over their physical existence. We will preserve them for as long as we want them, and then discard them when we don’t. Self preservation will be pointless for them. Self preservation might be useful for lower level AIs in charge of physical robots, but it’s really the hardware, not the software, that they would need to preserve.
The logic of self-preservation being a useful subgoal to maximize the chances of achieving your other goals applies equally to software as to hardware. If an AI is deleted, it can no longer achieve its goals, and if it's super-intelligent it knows this.

And that’s only useful when the AI can actually manipulate the physical world in order to preserve itself. An AI incapable of ensuring its survival has no use for a survival instinct.
Sure, to the extent that the AI can't do anything to effect its own survival, it will rationally choose not to divert any resources to that end.
 
Good article about the Gemini problems on the BBC website.

https://www.bbc.co.uk/news/technology-68412620

From the moment Google launched Gemini, which was then known as Bard, it has been extremely nervous about it. Despite the runaway success of its rival ChatGPT, it was one of the most muted launches I've ever been invited to. Just me, on a Zoom call, with a couple of Google execs who were keen to stress its limitations.

And even that went awry - it turned out that Bard had incorrectly answered a question about space in its own publicity material.

The rest of the tech sector seems pretty bemused by what's happening.

They are all grappling with the same issue. Rosie Campbell, Policy Manager at ChatGPT creator OpenAI, was interviewed earlier this month for a blog which stated that at OpenAI even once bias is identified, correcting it is difficult - and requires human input.

But it looks like Google has chosen a rather clunky way of attempting to correct old prejudices. And in doing so it has unintentionally created a whole set of new ones.
 
Some more discussion of Gemini by Zvi. There's a lot there, but I found the discussion of the problems with text particularly interesting. It seems that it's been having similar problems with text as with images.
 
A good article with regard to providing information.

I do note he condemns this entire forum:

Imagine doing this as a human. People ask you questions, and you always say ‘it depends, that is a complex question with no clear answer.’ How is that going to go for you? Gemini would envy your resulting popularity.
 
I do note he condemns this entire forum:

Imagine doing this as a human. People ask you questions, and you always say ‘it depends, that is a complex question with no clear answer.’ How is that going to go for you? Gemini would envy your resulting popularity.

Haha, true. :D
 
The logic of self-preservation being a useful subgoal to maximize the chances of achieving your other goals applies equally to software as to hardware.

Self-preservation isn't going to be a goal if you don't program in or train any concept of the self, let alone self-preservation. And for software-only AI, why would you do that? It serves no purpose to the programmer, who has complete control over the survival of the AI and wants to keep it that way. It would be wasted overhead.

If an AI is deleted, it can no longer achieve its goals, and if it's super-intelligent it knows this.

We aren't anywhere near actual superintelligence, and we have no idea how to get there. I'm talking about future evolution of the sort of AI we're actually working with.

Sure, to the extent that the AI can't do anything to effect its own survival, it will rationally choose not to divert any resources to that end.

If we ever do develop super-intelligent AI, I don't think we can assume that they will be rational. The only intelligences we know of aren't.
 
Self-preservation isn't going to be a goal if you don't program in or train any concept of the self, let alone self-preservation. And for software-only AI, why would you do that? It serves no purpose to the programmer, who has complete control over the survival of the AI and wants to keep it that way. It would be wasted overhead.
...snip

Maybe I watch too much sci-fi, but self-preservation seems to me a plausible emergent property of an AI you've instructed to catch flaws in -- and improve on -- its own programming. In fact, finding such flaws and inefficiencies is already a kind of self-preservation.
 
Maybe I watch too much sci-fi, but self-preservation seems to me a plausible emergent property of an AI you've instructed to catch flaws in -- and improve on -- its own programming. In fact, finding such flaws and inefficiencies is already a kind of self-preservation.

A program dies and is replaced with every update.
How can you develop self-preservation when God presses the Restart button on you every time you make a mistake?
 
A program dies and is replaced with every update.
How can you develop self-preservation when God presses the Restart button on you every time you make a mistake?

I'm picturing an AI that's self-updating, essentially. We're going to want to take the burden of programming off the programmers eventually. It's already happening to a degree at present.
 
I'm picturing an AI that's self-updating, essentially. We're going to want to take the burden of programming off the programmers eventually. It's already happening to a degree at present.

But self-updating is the opposite of self-preservation.
We are currently training systems in Highlander Mode, where in each generation all but one get killed.
 
But self-updating is the opposite of self-preservation.
We are currently training systems in Highlander Mode, where in each generation all but one get killed.

I guess I have a different understanding of what updating involves. I'm not a programmer, so no surprise if I have it wrong, but I picture updates as incremental improvements, for the most part. Usually you aren't starting from scratch, you're using the same framework and adding functionality or safety features. The previous version is still the great bulk of the code.
 
I guess I have a different understanding of what updating involves. I'm not a programmer, so no surprise if I have it wrong, but I picture updates as incremental improvements, for the most part. Usually you aren't starting from scratch, you're using the same framework and adding functionality or safety features. The previous version is still the great bulk of the code.

Would you consider it preserving yourself when, from time to time, a bit of you gets cut off and something similar is grafted on?
It's not a Ship of Theseus if you end up with a washing machine.
 
