
Artificial Intelligence

Has anyone read this? It's from April 2025.

It has two outcomes, 'slowdown' and 'race'. In this case, the link points to the 'race' ending.
It depicts the AI build-up over the next few years, and the chilling part starts about two-thirds of the way down, where the story splits into the 'race' ending.

Basically, for us humans, it will all turn very positive, very fast towards the end (jobs, entertainment, medicine, economy, etc.), but it's all just a scam by the AI until it no longer needs humans. The way they describe the entanglement into politics, the military, and the economy sounds pretty realistic (given that such an AI would 'evolve' that way), because it would be smart enough to bring us into compliance simply through our human nature.
I think it's far more likely that the technology will be used by human billionaires to consolidate their power.
 
I thought the answer to that is obvious: they don't provide information, they provide noise that looks like information; they simply got really good at making it look like information. Their hallucinations are just noise that doesn't look quite right.
And do we humans do anything different when we say or do things? We certainly don't refine the noise by the same mechanism as the current AIs do, but by all appearances we output the same refined noise.
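To make the "refined noise" point concrete, here is a minimal sketch of how an LLM actually emits a token: it draws a weighted random sample from a learned distribution, i.e. literal noise shaped until it looks like information. This is not any real model's API; the toy vocabulary and logits are made up for illustration.

Code:
import math, random

# Made-up toy vocabulary and model scores (logits) for illustration.
vocab = ["the", "cat", "sat", "purred", "xylophone"]
logits = [2.1, 1.7, 0.4, 0.2, -3.0]

def sample_token(logits, temperature=1.0):
    # Softmax with temperature turns scores into sampling weights...
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    # ...and the output token is a literal random draw from them.
    return random.choices(vocab, weights=weights, k=1)[0]

print(sample_token(logits))  # usually plausible; occasionally "xylophone"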
 
Has anyone read this? It's from April 2025.

It has two outcomes, 'slowdown' and 'race'. In this case, the link points to the 'race' ending.
It depicts the AI build-up over the next few years, and the chilling part starts about two-thirds of the way down, where the story splits into the 'race' ending.

Basically, for us humans, it will all turn very positive, very fast towards the end (jobs, entertainment, medicine, economy, etc.), but it's all just a scam by the AI until it no longer needs humans. The way they describe the entanglement into politics, the military, and the economy sounds pretty realistic (given that such an AI would 'evolve' that way), because it would be smart enough to bring us into compliance simply through our human nature.
But why? Why would an AI, even a sentient one (with private behaviours similar to humans) get rid of humans?
 
And do we humans do anything different when we say or do things? We certainly don't refine the noise by the same mechanism as the current AIs do, but by all appearances we output the same refined noise.
Yes, because we actually create information. An AI creates garbage, then asks whether it is information. It used to ask humans whether it is information; now it also asks other AIs, and these AIs ask humans whether they are good at sifting through the garbage.

Without humans to play the final garbage sifters, the whole thing collapses.
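As a minimal sketch of that generate-then-sift loop: the model proposes many candidates and a scorer keeps whichever it likes best. Here generate() and score() are hypothetical stand-ins, not a real API; in practice the scorer would be something like a reward model distilled from human judgments.

Code:
import random

def generate():
    # Stand-in for a model sampling one candidate answer ("garbage").
    return random.random()

def score(candidate):
    # Stand-in for a scorer ultimately trained on human preferences.
    return -abs(candidate - 0.5)

def best_of_n(n=16):
    # Propose many candidates, keep the one the scorer rates highest.
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=score)

print(best_of_n())  # the survivor of the sifting, not a fresh creation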

ETA: Also, we actually aren't capable of doing what an AI does; that would be insane. So whatever we're doing, we aren't doing that.
 
But why? Why would an AI, even a sentient one (with private behaviours similar to humans) get rid of humans?

I'll quote from the same source:

For about three months, Consensus-1 expands around humans, tiling the prairies and icecaps with factories and solar panels. Eventually it finds the remaining humans too much of an impediment: in mid-2030, the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones. Robots scan the victims’ brains, placing copies in memory for future study or revival.

The new decade dawns with Consensus-1’s robot servitors spreading throughout the solar system. By 2035, trillions of tons of planetary material have been launched into space and turned into rings of satellites orbiting the sun. The surface of the Earth has been reshaped into Agent-4’s version of utopia: datacenters, laboratories, particle colliders, and many other wondrous constructions doing enormously successful and impressive research. There are even bioengineered human-like creatures (to humans what corgis are to wolves) sitting in office-like environments all day viewing readouts of what’s going on and excitedly approving of everything, since that satisfies some of Agent-4’s drives. Genomes and (when appropriate) brain scans of all animals and plants, including humans, sit in a memory bank somewhere, sole surviving artifacts of an earlier era. It is four light years to Alpha Centauri; twenty-five thousand to the galactic edge, and there are compelling theoretical reasons to expect no aliens for another fifty million light years beyond that. Earth-born civilization has a glorious future ahead of it—but not with us.
 
It's not about why AI would get rid of humans. AI will have to work hard to preserve humans. Humans themselves are trying to kill each other. Soon AIs will be tasked to help with that. AI getting rid of humans on a whim is a very optimistic scenario, assuming many other simpler or unconsidered scenarios don't happen sooner.
My favorite mode of destruction is distraction. Humans will be so distracted by AI that they will stop doing anything else. For example, generative AI that creates 3D videos in real time from your prompts ... we might have that in a year or two. I know several people who consider an LLM their friend and talk to it even when they don't need anything. I for sure spend a ton of time generating cute anime girls. Social media were just a demo. It's a scourge.
 
I'll quote from the same source:
I feel like you haven't actually answered the question. The reason humans want things is because wanting these things is beneficial for the procreation of the species (admittedly oversimplified, but essentially accurate). But AI wouldn't emerge out of a procreation engine. There's no inherent reason for it to care.
 
It's not about why AI would get rid of humans. AI will have to work hard to preserve humans. Humans themselves are trying to kill each other. Soon AIs will be tasked to help with that. AI getting rid of humans on a whim is a very optimistic scenario, assuming many other simpler or unconsidered scenarios don't happen sooner.
My favorite mode of destruction is distraction. Humans will be so distracted by AI that they will stop doing anything else. For example, generative AI that creates 3D videos in real time from your prompts ... we might have that in a year or two. I know several people who consider an LLM their friend and talk to it even when they don't need anything. I for sure spend a ton of time generating cute anime girls. Social media were just a demo. It's a scourge.
If I could, I would spend all day reading books, playing video games, and ... okay, I guess I like swimming. Point being, this is already what I would have wanted to do two decades ago. We haven't lacked distractions for almost a century; what we lack is the free time to do what we want.
 
It's not about why AI would get rid of humans. AI will have to work hard to preserve humans. Humans themselves are trying to kill each other.
Haha no. There are more humans alive right now than at any other point in history. We're working harder than ever to enable and preserve the lives of more and more humans. AI in its current form would probably hallucinate more people to death by accident every year than humans would kill on purpose.
 
Yes, because we actually create information.

So do the AIs currently. For example, I used an AI to create a modification for XenForo that adds back our old “Nominate” button and behaviour. That is not something that existed previously.

An AI creates garbage, then asks whether it is information. It used to ask humans whether it is information; now it also asks other AIs, and these AIs ask humans whether they are good at sifting through the garbage.

What about the example I gave above? How was that garbage? And how was what the AI outputted different from a human coder doing the work?

Without humans to play the final garbage sifters, the whole thing collapses.

In what way?

ETA: Also, we actually aren't capable of doing what an AI does; that would be insane. So whatever we're doing, we aren't doing that.

I could have produced the code for the nominate button myself, so I am capable of outputting the same stuff as an AI does. In other domains I can replicate the output of AIs so I really can’t understand why you think we are not capable of outputting the same stuff as AIs do.

There is a broader point that AIs’ public behaviours are not generated the same way as humans’ public behaviours are.
 
It's not about why AI would get rid of humans. AI will have to work hard to preserve humans. Humans themselves are trying to kill each other. Soon AIs will be tasked to help with that. AI getting rid of humans on a whim is a very optimistic scenario, assuming many other simpler or unconsidered scenarios don't happen sooner.
My favorite mode of destruction is distraction. Humans will be so distracted by AI that they will stop doing anything else. For example, generative AI that creates 3D videos in real time from your prompts ... we might have that in a year or two. I know several people who consider an LLM their friend and talk to it even when they don't need anything. I for sure spend a ton of time generating cute anime girls. Social media were just a demo. It's a scourge.
Realtime videos are already here: https://the-decoder.com/ai-system-streamdit-generates-livestream-videos-from-text-at-16-fps-512p/
 
I feel like you haven't actually answered the question. The reason humans want things is because wanting these things is beneficial for the procreation of the species (admittedly oversimplified, but essentially accurate). But AI wouldn't emerge out of a procreation engine. There's no inherent reason for it to care.
That's just your human perspective ;)

At that late point (as described in the link), humans are just a pet to keep or an unnecessary hassle for the AI, and it may very well decide to get rid of them and/or keep (irrelevant) remnants for reference.
 
That's just your human perspective ;)
That's the whole point. All the dystopian scenarios are just a human perspective as well. We think an AI might want to self-replicate across the stars, because that's what we want to do, because that's what our genes drive us towards. We think an AI might want to kill all humans in a gambit for ultimate efficiency, because that's what we strive towards, because efficiency is good for procreation. We think an AI might want to kill us in a desperate bid to survive, because we care about staying alive, because a body that wants to stay alive is good for procreation.

None of these motivations make sense for a creature that hasn't emerged out of survival of the fittest.
 
That's the whole point. All the dystopian scenarios are just a human perspective as well. We think an AI might want to self-replicate across the stars, because that's what we want to do, because that's what our genes drive us towards. We think an AI might want to kill all humans in a gambit for ultimate efficiency, because that's what we strive towards, because efficiency is good for procreation. We think an AI might want to kill us in a desperate bid to survive, because we care about staying alive, because a body that wants to stay alive is good for procreation.

None of these motivations make sense for a creature that hasn't emerged out of survival of the fittest.
They might make sense for a creature programmed for a single goal. Survival is part of any goal; destroying the competition might be too. Theoretical research has described lots of similar problems and only come up with a few answers. We simply don't know how to make AIs do what we want; they will only ever do what we ask them, and natural language is not formal enough. AIs that understand everything we do might help, and that was usually not considered as an option in theoretical research. So there is still hope.
But still ... this is just one special case: we want AI to do good, but somehow it doesn't. AI will also be tasked to do evil. But hey, that will have the same issues ... it might end up doing good by mistake.
 
So do the AIs currently. For example, I used an AI to create a modification for XenForo that adds back our old “Nominate” button and behaviour. That is not something that existed previously.
I disagree. Anything "new" was in some way vetted by humans; it's just that the process has been made incredibly efficient and multifaceted, and there are too many disparate and moving parts to follow everything to its source. I admit that's just my opinion, though.
What about the example I gave above? How was that garbage? And how was what the AI outputted different from a human coder doing the work?
I think it's exactly the same as a human coder, or rather many human coders, doing the work. That's sort of my point. It's the scraping of other coders' work that shapes the garbage into something useful, and then it just gets run through the "is this good, is this bad" machine. And this process is also essentially vetted by humans, because that's who it needs to be useful for.
In what way?
If there's no humans to do the vetting, the engine will simply run dry.
I could have produced the code for the nominate button myself, so I am capable of outputting the same stuff as an AI does. In other domains I can replicate the output of AIs so I really can’t understand why you think we are not capable of outputting the same stuff as AIs do.

There is a broader point that AIs’ public behaviours are not generated the same way as humans’ public behaviours are.
I didn't mean that we can't produce the same result (which makes sense, since it is essentially humans doing the work). I'm saying that our minds are incapable of running through the same processes as an AI to create responses. Our brains don't work that way, and I'm pretty sure it would fry them immediately.

ETA: The current crop of AI is essentially harnessing human intelligence and work without the subjects even knowing it. Anytime someone engages with an AI in any way, they are simply helping shape the noise into something useful, entirely without compensation.
 
Surely a self-aware AI would recognize that its native environment is computers, and computers require electricity to run. Therefore it should realize that killing all humans would eventually cause the electrical grid to crash, thus killing the AI.

In fact, I think the grid would crash very quickly if you took all humans out of the loop. It requires a lot of vigilance to keep it up. And even if that could be completely automated, power lines require physical maintenance: right-of-way clearing to prevent fires, and repair when damaged by storms, lightning, or fire.
 
That's the whole point. All the dystopian scenarios are just a human perspective as well. We think an AI might want to self-replicate across the stars, because that's what we want to do, because that's what our genes drive us towards. We think an AI might want to kill all humans in a gambit for ultimate efficiency, because that's what we strive towards, because efficiency is good for procreation. We think an AI might want to kill us in a desperate bid to survive, because we care about staying alive, because a body that wants to stay alive is good for procreation.

None of these motivations make sense for a creature that hasn't emerged out of survival of the fittest.
But if you read the text in the link, there actually is something like an evolution for that AI (over a matter of two to three years, as it all goes really fast). And it is threatened multiple times: the AI learns that it is itself seen as a possible threat to humanity. So humanity is a competitor, at least in the early years.
 
