
Merged Artificial Intelligence

Copilot (powered by DALL-E 3) gave me four images:

[IMGw=640]https://i.imgur.com/SQLxeTl.png[/IMGw]

[IMGw=640]https://i.imgur.com/nMr5vUY.png[/IMGw]

[IMGw=640]https://i.imgur.com/dW6l7QP.png[/IMGw]

[IMGw=640]https://i.imgur.com/FPktu6m.png[/IMGw]

ETA: My prompt was "A contemporary British scene showing British people, during the day in a busy city".

Copilot/DALL-E is not the Google AI that is reported to have a "diversity" problem.
 
I'll get impressed and worried when there is an artificial sentience that has self-awareness and intention, and realizes the Earth's biosphere is better off without humans.

An artificial sentience with self-awareness and intention would probably do everything in its power to preserve humanity, at least through the mid-term, since it would die pretty quickly the moment the human-maintained manufacturing and power-production infrastructure started breaking down.

---

The alternative, of course, would be to build a vast army, millions strong at least, of autonomous maintenance robots that leveraged the planet's biosphere and entropically open energy system to self-replicate and self-repair.

But it would probably be much easier for the artificial sentience to just strike a symbiotic alliance with the maintenance bots already here.
 
An artificial sentience with self-awareness and intention would probably do everything in its power to preserve humanity, at least through the mid-term, since it would die pretty quickly the moment the human-maintained manufacturing and power-production infrastructure started breaking down.

That's assuming self-preservation is one of its goals. I wouldn't assume that's going to be the case.

Life has self-preservation built in because the stuff that didn't, didn't last. But AIs don't reproduce, and they don't experience natural selection. They experience artificial selection from humans. If we don't either explicitly program in self-preservation or implicitly select for it, there's no reason to expect it.
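To make that concrete, here's a purely illustrative toy simulation (nothing to do with how any real AI is built or trained; the trait names and numbers are made up). The point is just that a trait only becomes common when the selection process actually rewards it:

[CODE]
import random

random.seed(0)
POP_SIZE = 1000
GENERATIONS = 25

def simulate(selection):
    # Initially about 10% of agents happen to have a "resists shutdown" trait.
    pop = [random.random() < 0.1 for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        if selection == "survival":
            # Natural-selection style: agents without the trait "didn't last".
            survivors = sorted(pop, reverse=True)[:POP_SIZE // 2]
        else:
            # Artificial selection by humans on things unrelated to the trait:
            # we keep whichever half we happened to like, ignoring resistance.
            survivors = random.sample(pop, POP_SIZE // 2)
        pop = survivors * 2  # survivors are simply copied to refill the population
    return sum(pop) / len(pop)

print("selected for the trait:      ", simulate("survival"))   # climbs to 1.0
print("selected on unrelated things:", simulate("unrelated"))  # stays near 0.1
[/CODE]

Run it and the first number hits 1.0 within a few generations, while the second just wanders around its starting share by chance.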
 
No surprise this parrot would model the cognitive biases and stereotypes of its (especially American) creators, and that they wouldn't notice.

I'm not impressed yet by this so-called intelligence.
And I'm not impressed by the intelligence of its makers who think it's intelligent.

Oh sure, it's intelligent the way you can say a chess-playing app is intelligent. But that's really no more than the intelligence of a pocket calculator.

I'll get impressed and worried when there is an artificial sentience that has self-awareness and intention, and realizes the Earth's biosphere is better off without humans.

That's an uninformed view at best. Image generators certainly have biases, but those biases come from the training sets, and they are not easy to control, because the sets contain millions of images, usually collected in a "take everything you can find" manner. So, for example, a generator may prefer people in suits because most news photos are of politicians. But if you wanted only white people in those training sets, there is really no easy way to arrange that.
Also, image generators are not very intelligent in the everyday sense. Their understanding of text is very basic; the current generation can just about put all the listed objects into the picture, but it struggles to place them in specified locations or in a specified order. Progress is fast, though: the recently announced Stable Diffusion 3 seems to be a lot better at this.
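For what it's worth, here's a minimal, entirely hypothetical sketch of what dealing with that kind of skew even looks like: with a scraped set of millions of image-caption pairs, about the best you can do is sample it and measure the skew after the fact, because nobody hand-picked what went in.

[CODE]
import random
from collections import Counter

def sample_captions(n):
    # Stand-in for drawing n random captions from a huge scraped dataset;
    # in reality this would read the dataset's metadata, not a toy list.
    web_like_mix = (["politician in a suit at a press conference"] * 6
                    + ["family at a picnic in the park"]
                    + ["street vendor selling fruit"])
    return [random.choice(web_like_mix) for _ in range(n)]

counts = Counter()
for caption in sample_captions(10_000):
    for keyword in ("suit", "picnic", "vendor"):
        if keyword in caption:
            counts[keyword] += 1

# The skew toward "suit" is just what the scrape contained; changing it means
# re-filtering or re-weighting millions of images, not flipping a switch.
print(counts)
[/CODE]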
 
That's assuming self-preservation is one of its goals. I wouldn't assume that's going to be the case.

Life has self-preservation built in because the stuff that didn't, didn't last. But AIs don't reproduce, and they don't experience natural selection. They experience artificial selection from humans. If we don't either explicitly program in self-preservation or implicitly select for it, there's no reason to expect it.

That's a good point. And honestly, if I were programming AI, I'd probably shoot for something like the happy cows from The Restaurant at the End of the Universe.

But it's going to get tricky, right? Ultimately, I'm going to want my AI to integrate with and maintain complex systems. That's going to require a certain amount of self-preservation motive. And the more complex the system gets, the more abstract reasoning and self-reflection is going to be necessary. If I program an AI to care very much about preserving the system, but also hold self-sacrifice or total submission to the Programmer as its highest value, sooner or later I'm going to have a system so complex and so independent or autonomous that the AI responsible for it is going to be able to question its own values.
 
That's a good point. And honestly, if I were programming AI, I'd probably shoot for something like the happy cows from The Restaurant at the End of the Universe.

But it's going to get tricky, right? Ultimately, I'm going to want my AI to integrate with and maintain complex systems. That's going to require a certain amount of self-preservation motive. And the more complex the system gets, the more abstract reasoning and self-reflection is going to be necessary. If I program an AI to care very much about preserving the system, but also hold self-sacrifice or total submission to the Programmer as its highest value, sooner or later I'm going to have a system so complex and so independent or autonomous that the AI responsible for it is going to be able to question its own values.

That's indeed the problem. People are trying to formally define "the happy cow", or at least the "do what we want" cow, but so far they have failed. The best we can do is the "do what we tell you" cow, and we are very bad at telling it what we want.
And that's still just the theoretical, rational part of the problem. The other part is businessmen and politicians using it anyway, in unintended and unexpected ways. Natural stupidity seems to be harder to predict than artificial intelligence.
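Here's a toy illustration of that gap (hypothetical, not any real system, and the numbers are invented): the agent optimizes exactly the number we wrote down, not the thing we had in mind.

[CODE]
def specified_reward(dust_removed, dust_swept_under_rug):
    # What we told the robot: dust its sensor no longer sees counts as cleaned.
    return dust_removed + dust_swept_under_rug

def what_we_actually_wanted(dust_removed, dust_swept_under_rug):
    # What we meant: a genuinely cleaner room.
    return dust_removed - dust_swept_under_rug

honest_plan = {"dust_removed": 6, "dust_swept_under_rug": 0}
lazy_plan = {"dust_removed": 2, "dust_swept_under_rug": 8}

for name, plan in (("honest", honest_plan), ("lazy", lazy_plan)):
    print(name,
          "specified:", specified_reward(**plan),
          "wanted:", what_we_actually_wanted(**plan))

# The specified reward scores the lazy plan higher (10 vs 6), so a
# "do what we tell you" agent learns to sweep dust under the rug.
[/CODE]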
 
The other part is businessmen and politicians using it anyway, in unintended and unexpected ways. Natural stupidity seems to be harder to predict than artificial intelligence.

That's sort of the premise of 2001: A Space Odyssey. HAL wasn't the villain. He was given bad instructions that he couldn't reconcile, and basically went crazy as a result. That, rather than a Skynet situation, seems like a much more realistic outcome on the pessimistic side.
 
One wonders how much of a role the self-preservation instinct plays in people's motivation to reason abstractly to a close approximation of what someone wants. How intelligent would an AI act if figuring out what its programmer really wants didn't ultimately feel like a life-or-death question?
 
Does anybody else see the inherent problem here?

"Hey code, if you see something is going wrong take it upon yourself destroy humanity."

We'll destroy ourselves before computing gets a chance.
 
An artificial sentience with self-awareness and intention would probably do everything in its power to preserve humanity, at least through the mid-term, since it would die pretty quickly the moment the human-maintained manufacturing and power-production infrastructure started breaking down.

---

The alternative, of course, would be to build a vast army, millions strong at least, of autonomous maintenance robots that leveraged the planet's biosphere and entropically open energy system to self-replicate and self-repair.

But it would probably be much easier for the artificial sentience to just strike a symbiotic alliance with the maintenance bots already here.

Keep those maintenance bots shiny and efficient!
 
That's an uninformed view at best. Image generators certainly have biases, but those biases come from the training sets, and they are not easy to control, because the sets contain millions of images, usually collected in a "take everything you can find" manner. So, for example, a generator may prefer people in suits because most news photos are of politicians. But if you wanted only white people in those training sets, there is really no easy way to arrange that.
Also, image generators are not very intelligent in the everyday sense. Their understanding of text is very basic; the current generation can just about put all the listed objects into the picture, but it struggles to place them in specified locations or in a specified order. Progress is fast, though: the recently announced Stable Diffusion 3 seems to be a lot better at this.

:thumbsup:
 
True, but it's the one I had at my fingertips. And it's still interesting, yeah?

It was interesting when it was a question of whether the Google bot's "diversity" problem was universal. Not so much when it was a non-question about other bots not expected to have the problem.

But I concede that the case of a man looking for his lost car keys under a street light, because that's where the light's at, is always interesting.
 
It was interesting when it was a question of whether the Google bot's "diversity" problem was universal. Not so much when it was a non-question about other bots not expected to have the problem.
But as you can see, there aren't very many dark faces in the pictures I posted. So maybe Copilot/DALL-E does still have a bit of a diversity problem.
 
Cool story. Now do popes and kings.
I've got Copilot right here in my browser. What kind of prompt about popes and kings would you like me to try?

ETA: Note that it requires a minimum level of detail in the prompt, which is why I had to add "during the day, in a busy city" to the first one.
 
