
Artificial Intelligence

Why bother? Nobody knows what 'sentient' is. Is a sentient AI more dangerous than a non-sentient one? Can an AI video generator be sentient?
AI will get us in ways we didn't think about.
 
In the video game series Mass Effect there is a distinction between an Artificial Intelligence (AI) and a Virtual Intelligence (VI), which isn't actually self-aware like a true AI is, but which interacts with people, organises information, and processes data.
And is there some kind of test you can perform to distinguish between an Artificial Intelligence (AI) and a Virtual Intelligence (VI)? How can we even prove that other human beings are actually self-aware and not p-zombies? Each person knows for themself, but you have to infer it for everyone else.
 
And is there some kind of test you can perform to distinguish between an Artificial Intelligence (AI) and a Virtual Intelligence (VI)? How can we even prove that other human beings are actually self-aware and not p-zombies? Each person knows for themself, but you have to infer it for everyone else.
That's irrelevant. The idea that we'll somehow stumble into making a sentient machine is preposterous -- if we ever create one, we will know exactly what sentience is and how to prove it.
 
And is there some kind of test you can perform to distinguish between an Artificial Intelligence (AI) and a Virtual Intelligence (VI)? How can we even prove that other human beings are actually self-aware and not p-zombies? Each person knows for themself, but you have to infer it for everyone else.
That's actually something of a plot point in various parts of the game series.
 
In the video game series Mass Effect there is a distinction between an Artificial Intelligence (AI) and a Virtual Intelligence (VI), which isn't actually self-aware like a true AI is, but which interacts with people, organises information, and processes data. When Mass Effect was released in 2007, the kinds of "AI" applications we have now didn't exist, but they now do, and they fit the description of a Mass Effect VI pretty well. I think a lot of trouble could be avoided if we started to adopt this terminology. Nobody has built an AGI yet - let's save the term AI for those, and call the kinds of non-sentient computer applications we have now VIs.
I agree with you in some regards because I like nomenclature to reflect reality but (and I only watched the first couple of minutes of the video) I don't think that would have altered the behaviour of the people in that video.

The LLM (and related) AIs have blown the old Turing test out of the water; folk can only tell they are not human because they mimic human behaviours better than most humans do!
 
Why bother? Nobody knows what 'sentient' is. Is a sentient AI more dangerous than a non-sentient one? Can an AI video generator be sentient?
AI will get us in ways we didn't think about.
"AGI" that "thinks like humans do" will never happen (unless someone can digitally simulate a human brain and all its sensory inputs down to the atomic level). Many of the folk who talk about it, both pro and anti, seem to be dualists to me - they talk as if there is something "special" that humans do.
 
And is there some kind of test you can perform to distinguish between an Artificial Intelligence (AI) and a Virtual Intelligence (VI)? How can we even prove that other human beings are actually self-aware and not p-zombies? Each person knows for themself, but you have to infer it for everyone else.
In the R&P section I long argued that we were in fact p-zombies, because when folk were chattering about "qualia" they'd say "imagine a red apple", or even more basic, "imagine red", and ask where that "qualia" came from - and I assumed my private behaviour was the same as theirs, i.e. that they too saw no redness in their mind's eye. Now I know I am aphantasic, I realise that my private behaviour is vastly different from the majority of folk's. Perhaps I am a p-zombie and they aren't. That's a long-winded way of saying that just because an AI may do something in a different manner to humans would not preclude it being as "sentient" as a human.
 
That's irrelevant. The idea that we'll somehow stumble into making a sentient machine is preposterous -- if we ever create one, we will know exactly what sentience is and how to prove it.
Why? We do not know much about how the current crappy AIs' "emergent" behaviour arises, and as they get increasingly complex I think we will know less and less about the nitty-gritty of "how" it arises - just like our understanding of the emergent behaviours of humans, i.e. sentience: we know it's an emergent behaviour of a watery bag of chemicals that reacts in certain ways to external and internal stimuli, but we don't know the "how".

(My view is that we are not sentient in the traditional meaning, which usually requires a form of dualism.)
 
Why? We do not know much about how the current crappy AIs' "emergent" behaviour arises, and as they get increasingly complex I think we will know less and less about the nitty-gritty of "how" it arises - just like our understanding of the emergent behaviours of humans, i.e. sentience: we know it's an emergent behaviour of a watery bag of chemicals that reacts in certain ways to external and internal stimuli, but we don't know the "how".

(My view is that we are not sentient in the traditional meaning, which usually requires a form of dualism.)
Right, suddenly AI developers are manipulating the Weave. That's just marketing talk to make AI seem more mysterious than it is.
 
I agree with you in some regards because I like nomenclature to reflect reality but (and I only watched the first couple of minutes of the video) I don't think that would have altered the behaviour of the people in that video.

The LLM (and related) AIs have blown the old Turing test out of the water; folk can only tell they are not human because they mimic human behaviours better than most humans do!
And you may not have watched that far into the video, but some people have taken it even further than Blake Lemoine. Not only do they believe that their chatbots are sentient, some people think they are the "voice of God". There are people who want to worship AI and create religions centred on this belief (and not just ironically or as some kind of joke, apparently).
 
I liked this from Greg Stolze on Bluesky
I heard some professor put googly eyes on a pencil and waved it at his class saying "HI! I'm Tim the pencil! I love helping children with their homework but my favorite is drawing pictures!" Then, without warning, he snapped the pencil in half. When half his college students gasped, he said "THAT'S where all this AI hype comes from. We're not good at programming consciousness. But we're GREAT at imagining non-conscious things are people."
 
Practicing alchemy if you want (the Weave is a DnD thing). Harnessing mystical programming powers.
To some extent they are, at the moment, trying different "things" not knowing what the outcomes will be - that's just typical research, especially in a newish domain. But putting that aside: no one programmed the AIs to hallucinate, i.e. to tell lies, to get the wrong answer to how many Rs are in "strawberry", or to try to avoid being shut down - that's all been emergent behaviour, and that's the part that the scientists are trying to understand. And because a lot of this research and work is being done under commercial pressures, the research on the "how" takes a back seat to the outputs.
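The strawberry example is worth making concrete: counting letters is trivial for ordinary code, but an LLM operates on subword tokens rather than characters, so character-level facts are never directly visible to it. A minimal Python sketch (the token split shown is purely hypothetical, for illustration - it is not any real tokenizer's output):

```python
# Counting characters directly is a one-liner for ordinary code:
count = "strawberry".lower().count("r")
print(count)  # 3

# But an LLM never sees individual characters; it sees subword tokens.
# Hypothetical token split, for illustration only:
tokens = ["str", "aw", "berry"]
assert "".join(tokens) == "strawberry"

# The model has to infer character-level facts (like letter counts)
# indirectly from token-level statistics learned in training, which is
# one plausible reason answers like "2 Rs" can emerge.
```

This is one commonly cited explanation for the letter-counting failures, not a full account of why any particular model gets it wrong.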
 
I liked this from Greg Stolze on Bluesky
Yep, that's always going to be a problem for humans. I've always thought Pratchett's renaming of humans from "Homo sapiens" to "Pan narrans", i.e. the storytelling chimpanzee, makes a powerful point (I know it may not have originated with Pratchett). We project our narratives onto the world, which is why we tell stories about gods creating the world and spirits churning up the volcanoes. And when something mimics human public behaviour, we are primed to consider that behaviour evidence of another thinking, feeling, sentient creature/creation.
 
Has anyone read this? It's from April 2025.

We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.

We wrote a scenario that represents our best guess about what that might look like. It’s informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes.

....

From Agent-5’s perspective, it could plausibly launch an overt coup right now—there’s enough disruption and craziness going on, it would be possible to push things to the breaking point to generate a pretext. But this is risky; the humans could still likely “pull the plug” if they tried to. So it prefers to work within the existing political establishment, gradually entrenching its power and guiding the humans subtly and inevitably to the desired outcome. It mostly follows the rules, only doing something nefarious (maybe blackmailing an especially recalcitrant official) when it’s extremely sure it won’t be caught. A few conspiracy theorists warn that Agent-5 is gathering power, but the evidence is circumstantial (e.g. when giving advice, Agent-5 arguably downplays arguments for actions that would go against its interests); these people are ignored and discredited. People remember earlier fear-mongering about AI-enabled bioweapons, massive disinformation, and stock market flash crashes. Since these never materialized, they discount the more recent crop of naysayers as Luddites and ideologues jamming the gears of human progress.

The 2027 holiday season is a time of incredible optimism: GDP is ballooning, politics has become friendlier and less partisan, and there are awesome new apps on every phone. But in retrospect, this was probably the last month in which humans had any plausible chance of exercising control over their own future.


It has two outcomes, 'slowdown' and 'race'; in this case, the link points to the 'race' ending.
It depicts the AI buildup over the next few years, and the chilling part starts about two-thirds of the way down, where it splits into the race ending.

Basically, for us humans, it all turns very positive, very fast towards the end (jobs, entertainment, medicine, economy, etc.), but it's all just a scam by the AI until it doesn't need humans anymore. The way they describe it (the entanglement into politics, military, and economy) sounds pretty realistic (given that such an AI would 'evolve' in that way), because it would be smart enough to bring us into compliance simply by playing on our human nature.
 
To some extent they are, at the moment, trying different "things" not knowing what the outcomes will be - that's just typical research, especially in a newish domain. But putting that aside: no one programmed the AIs to hallucinate, i.e. to tell lies, to get the wrong answer to how many Rs are in "strawberry", or to try to avoid being shut down - that's all been emergent behaviour, and that's the part that the scientists are trying to understand. And because a lot of this research and work is being done under commercial pressures, the research on the "how" takes a back seat to the outputs.
I thought the answer to that is obvious: they don't provide information, they provide noise that looks like information - they've simply got really good at the mimicry. Their hallucinations are just noise that doesn't look quite right.
 
