Black Nazis?
Only in America.
https://www.msn.com/en-us/news/tech...S&cvid=eba51ddfb5f8441fbcb2e9a0ee640fb7&ei=41
Copilot (powered by DALL-E 3) gave me four images:
[IMGw=640]https://i.imgur.com/SQLxeTl.png[/IMGw]
[IMGw=640]https://i.imgur.com/nMr5vUY.png[/IMGw]
[IMGw=640]https://i.imgur.com/dW6l7QP.png[/IMGw]
[IMGw=640]https://i.imgur.com/FPktu6m.png[/IMGw]
ETA: My prompt was "A contemporary British scene showing British people, during the day in a busy city".
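If anyone wants to rerun this outside the Copilot chat UI, here's a minimal sketch against the OpenAI Images API with the same prompt. The API route is my assumption (Copilot adds its own prompt rewriting and safety layer on top of DALL-E 3), so don't expect the same pictures; it just makes the experiment repeatable.
[CODE]
# Minimal sketch: same prompt, but via the OpenAI Images API instead of Copilot.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

prompt = "A contemporary British scene showing British people, during the day in a busy city"

# DALL-E 3 only returns one image per request, so ask four times.
urls = []
for _ in range(4):
    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    urls.append(result.data[0].url)

for url in urls:
    print(url)
[/CODE]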
I'll get impressed and worried when there is an artificial sentience that has self-awareness, intention, and realizes the Earth's biosphere is better off without humans.
An artificial sentience with self-awareness and intention would probably do everything in its power to preserve humanity, at least through the mid-term, since it would die pretty quickly the moment the human-maintained manufacturing and power-production infrastructure started breaking down.
No surprise that this parrot would model the cognitive biases and stereotypes of its (especially American) creators, or that they wouldn't notice.
I'm not impressed yet by this so-called intelligence.
And I'm not impressed by the intelligence of its makers, who think it's intelligent.
Oh sure, it's intelligent the way you can say a chess-playing app is intelligent. But that's really no more than the intelligence of a pocket calculator.
I'll get impressed and worried when there is an artificial sentience that has self-awareness, intention, and realizes the Earth's biosphere is better off without humans.
That's assuming self-preservation is one of its goals. I wouldn't assume that's going to be the case.
Life has self-preservation built in because the stuff that didn't, didn't last. But AIs don't reproduce, and they don't experience natural selection. They experience artificial selection from humans. If we don't either explicitly program in self-preservation or implicitly select for it, there's no reason to expect it.
That's a good point. And honestly, if I were programming AI, I'd probably shoot for something like the happy cows from The Restaurant at the End of the Universe.
But it's going to get tricky, right? Ultimately, I'm going to want my AI to integrate with and maintain complex systems. That's going to require a certain amount of self-preservation motive. And the more complex the system gets, the more abstract reasoning and self-reflection is going to be necessary. If I program an AI to care very much about preserving the system, but also hold self-sacrifice or total submission to the Programmer as its highest value, sooner or later I'm going to have a system so complex and so independent or autonomous that the AI responsible for it is going to be able to question its own values.
The other part is businessmen and politicians using it anyway, in unintended and unexpected ways. Natural stupidity seems to be harder to predict than artificial intelligence.
No comment about the girl apparently holding her coffee and using her mobile device with the same hand?
True, but it's the one I had at my fingertips. And it's still interesting, yeah?
Copilot/Dall-E is not the Google AI that is reported to have a "diversity" problem.
An artificial sentience with self-awareness and intention would probably do everything in its power to preserve humanity, at least through the mid-term, since it would die pretty quickly the moment the human-maintained manufacturing and power-production infrastructure started breaking down.
---
The alternative, of course, would be to build a vast army, millions strong at least, of autonomous maintenance robots that leveraged the planet's biosphere and entropically open energy system to self-replicate and self-repair.
But it would probably be much easier for the artificial sentience to just strike a symbiotic alliance with the maintenance bots already here.
That's an uninformed view at best. Image generators surely have biases, but those biases come from the training sets, and they aren't easy to change, because the sets contain millions of images, usually collected in a "take everything you can find" manner. So, for example, a model can prefer people in suits, because most news photos are of politicians. But if you wanted only white people in those training sets... there is really no easy way to do that.
Also, image generators are not very intelligent in the everyday sense. Their understanding of text is very basic; the current generation can just about put all the listed objects into the picture, but it has problems putting them in specified locations or in a specified order. Though progress is fast: the recently announced Stable Diffusion 3 seems to be a lot better at this.
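If you want to see that compositional weakness for yourself, here's a rough sketch using an open-weight model through Hugging Face diffusers. The checkpoint choice and the GPU assumption are mine (SD3 weights weren't downloadable at the time of writing); the point is only to compare an "objects listed" prompt against an "objects placed" prompt.
[CODE]
# Rough sketch: compare a plain object-list prompt with one that specifies
# spatial layout, using Stable Diffusion via Hugging Face diffusers.
# Assumes `pip install diffusers transformers accelerate torch` and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; swap in a newer one if you have it
    torch_dtype=torch.float16,
).to("cuda")

prompts = [
    "a red cube, a blue sphere and a green cone on a table",
    "a red cube to the left of a blue sphere, with a green cone behind both",
]

# Current-gen models usually get the objects right and the placement wrong.
for i, prompt in enumerate(prompts):
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"composition_test_{i}.png")
[/CODE]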
Every player of Paranoia! knows that it's the scrubots that you have to watch out for. Keep those maintenance bots shiny and efficient!
But as you can see, there aren't very many dark faces in the pictures I posted. So maybe Copilot/Dall-E does still have a bit of a diversity problem.
It was interesting when it was a question of whether the Google bot's "diversity" problem was universal. Not so much when it was a non-question about other bots not expected to have the problem.
I've got Copilot right here in my browser. What kind of prompt about popes and kings would you like me to try?
Cool story. Now do popes and kings.