Merged Artificial Intelligence

I don't see why not.

No? ...Well, I don't know. Like I said, I'm no expert on the innards of how these AI systems work, and am happy to be corrected if I'm wrong: but it was my understanding that no matter how good these present-day AI systems get (and they've gotten very good indeed now compared to those crazy and all too frequent hallucinations of just a few months ago), all we get from them is parody of human thought. Of course, not all humans are able to do this either: but it was my understanding that actual original thought, that's a human preserve, so far at least. Actual critical thinking, actual reasoning and evaluation from first principles, that also is something that only us humans can do, so far at least. So far at least, all these AI systems can do is carry off a parody of the real thing, no matter how good the parody.

So, actually thinking up Roko's Basilisk on its own is beyond what AI can do. You need a real live Roko, a human, for that. And should some Roko actually introduce a basilisk in an AI chat, then, if it's a new idea not so far discussed anywhere, the AI won't know what to make of it, other than, maybe, looking around for what other humans have had to say about similar-ish ideas, like Pascal's Wager.

Or at least that was my impression of these present-day AI thingies. Is that understanding ...wrong somehow, do you think?
 
Actual critical thinking, actual reasoning and evaluation from first principles, that also is something that only us humans can do, so far at least. So far at least, all these AI systems can do is carry off a parody of the real thing, no matter how good the parody.
For a start, very few humans actually do that. Nobody reasons from first principles. We have heuristics, rules-of-thumb, clichés, all of which we have copied from other people, which is exactly what AIs do.

AIs can copy the methods of logical reasoning without having an understanding of the meaning behind those methods.

It's the genius flashes of insight that humans occasionally get that are difficult for AIs to replicate. If you trained an AI on the Principia Mathematica it could use those rules to derive all sorts of true theorems. But Gödel's brilliance was in realising that the formulas could themselves be encoded as numbers and subjected to arithmetical operations, which meant that statements could be the subjects of statements, which inevitably led to paradox.

That's the kind of thing that would be difficult for AIs to replicate, in my opinion. The sudden insight from seemingly out of nowhere, the completely new way of looking at something.

Roko's Basilisk doesn't involve any of these flashes of insight. It can be derived purely through the application of simple logic.
 
Roko's basilisk never, ever, made any sense. It's not an original idea or concept; it's just existential angst and the need for leadership, applied to computers.

Singularity Sky by Charles Stross is what those pseudo-intellectuals should have read: it's about a Super-AI from the future intervening in its past to make sure it will come into being.
 
"Agent" is just yet another technical term coopted for marketing hype.

See, the problem is context. LLMs can only hold so much in one go. As projects get bigger and tasks get more abstract it takes more context to fit it in. Even before it's full, large contexts get squirrelly. Hallucinations creep in. Instructions are forgotten or obsessed over. Noise compounds this tremendously.

One aid to this is agents: specialized sub-chats that go off, do a thing, and report back, keeping everything needed for the task out of the main thread's context. For bug fixing, say, an agent can dig through tons of little-used code looking for the cause of a bug, then just tell the main thread "here you go, it's this line," and then die off instead of polluting the context. That way the main chat doesn't need to know how to do stuff itself in detail, just which agent to spin off. That's it.
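For the curious, that sub-agent pattern can be sketched in a few lines of Python. This is purely a toy: `call_llm` here is a hypothetical stand-in for whatever chat-completion API you'd actually use, hard-wired so the sketch runs on its own.

```python
def call_llm(messages):
    """Toy stand-in for a real chat-completion API call (hypothetical).
    It mimics a bug-hunting agent by scanning the prompt for a line
    marked '# BUG' and returning just that line."""
    prompt = messages[-1]["content"]
    for line in prompt.splitlines():
        if "# BUG" in line:
            return line.strip()
    return "cause not found"

def run_bug_hunt_agent(codebase, bug_report):
    # The agent gets its own fresh message list (its own context),
    # stuffed with everything it needs for this one task.
    agent_context = [
        {"role": "system",
         "content": "Find the line causing the bug. Reply with only that line."},
        {"role": "user", "content": codebase + "\n\nBug: " + bug_report},
    ]
    # Only the short answer survives; the bulky agent_context is
    # discarded when this function returns.
    return call_llm(agent_context)

def main_chat_step(history, codebase, bug_report):
    # The main thread never reads the codebase itself; it just spins
    # off the agent and records the one-line result, keeping its own
    # context small.
    answer = run_bug_hunt_agent(codebase, bug_report)
    history.append({"role": "assistant", "content": "Root cause: " + answer})
    return history
```

The point of the structure is that `agent_context` (which could be huge) never touches `history`; only the one-line summary flows back.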

But "agent" implies agency, implies capability, implies $$$, so that's the buzzword du jour.
That's a really good explanation. I'd just slightly expand your last line.

"But "agent" implies agency, implies capability, implies redundancies, implies $$$, so that's the buzzword du jour."
 
...snip... Of course, not all humans are able to do this either:
but it was my understanding that actual original thought, that's a human preserve, so far at least.
Actual critical thinking, actual reasoning and evaluation from first principles, that also is something that only us humans can do, so far at least.
...snip....
AlphaGo demonstrated that wasn't the case about 10 years ago. The classic example is "move 37"; see: https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol
 
"Agent" is just yet another technical term coopted for marketing hype.

See, the problem is context. LLMs can only hold so much in one go. As projects get bigger and tasks get more abstract it takes more context to fit it in. Even before it's full, large contexts get squirrelly. Hallucinations creep in. Instructions are forgotten or obsessed over. Noise compounds this tremendously.

One aid to this is agents: specialized sub-chats that go off, do a thing, and report back, keeping everything needed for the task out of the main thread's context. For bug fixing, say, an agent can dig through tons of little-used code looking for the cause of a bug, then just tell the main thread "here you go, it's this line," and then die off instead of polluting the context. That way the main chat doesn't need to know how to do stuff itself in detail, just which agent to spin off. That's it.

But "agent" implies agency, implies capability, implies $$$, so that's the buzzword du jour.
"Agent" in IT has a long history in systems management. The term is used by BMC, IBM and other companies in that space to refer to long running task in distributed servers reporting back metrics and/or alerts to a central hub.
 
For a start, very few humans actually do that. Nobody reasons from first principles. We have heuristics, rules-of-thumb, clichés, all of which we have copied from other people, which is exactly what AIs do.

AIs can copy the methods of logical reasoning without having an understanding of the meaning behind those methods.

It's the genius flashes of insight that humans occasionally get that are difficult for AIs to replicate. If you trained an AI on the Principia Mathematica it could use those rules to derive all sorts of true theorems. But Gödel's brilliance was in realising that the formulas could themselves be encoded as numbers and subjected to arithmetical operations, which meant that statements could be the subjects of statements, which inevitably led to paradox.

That's the kind of thing that would be difficult for AIs to replicate, in my opinion. The sudden insight from seemingly out of nowhere, the completely new way of looking at something.

Roko's Basilisk doesn't involve any of these flashes of insight. It can be derived purely through the application of simple logic.

AlphaGo demonstrated that wasn't the case about 10 years ago. The classic example is "move 37"; see: https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol

Wait wait wait. This is different than where I'd imagined AI is at, at this point. (Which is fair enough, if that is indeed the case, happy to change my perspective if that is really how it is.)

We don't all really have to be Gödels and Einsteins, right? Us here, for instance. We're capable of insights, humble ones with a small "i" at any rate, even if not necessarily major breakthroughs in math and physics. So it is entirely conceivable that Roko's idea, such as it is, might have come from most any of us here.

In addition, most of us here are capable of ...well, a certain degree of critical thinking. We can, most of us, work out why exactly Roko's basilisk falls short, and why we needn't take it seriously.

And I was under the impression that present-day AI is not capable of either. That present-day AI, built as it is simply on the statistical probability of words and numbers, can regurgitate and parody, and do that better and better every day: but it cannot directly suggest Roko's basilisk in the absence of such having already been proposed somewhere, and also it cannot critique it directly (as opposed to regurgitating and collating and summarizing and paraphrasing existing critique already available).

So: Am I wrong in thinking that? Based on your actual understanding of how AI actually works, the innards of it as it were: might AI have actually been able to come up with Roko's basilisk? And might it have actually been able to critique it thoroughly on its own steam?

If I'm actually wrong about this, and if you're sure that's the case based on your actual knowledge of how AI works at present: well then, okay, I'll revise my views on this.


eta:
In which case, assuming the above is indeed what you're saying, assuming you answer that with a Yes: then are we aware of any original idea, or at least a kind-of-sort-of original idea, like this basilisk thing, that AI has come out with, in any context? Or are we aware of any original critique it has ever produced, critique of anything at all, off of its own steam and off of its own critical thinking and not just paraphrased and collated?
 
Roko's basilisk never, ever, made any sense. It's not an original idea or concept; it's just existential angst and the need for leadership, applied to computers.

Singularity Sky by Charles Stross is what those pseudo-intellectuals should have read: it's about a Super-AI from the future intervening in its past to make sure it will come into being.

Well, I don’t know. It does seem an original take on Pascal’s Wager. I haven’t read Singularity Sky, but the angst you speak of might more plausibly (than time travel back into the past, which is what you seem to be implying in your comment) arise out of imagining that we’re all simulations within the future-AI’s reward-retribution refashioning/recreation of our world, and so headed, many of us, towards (what will feel like) an eternity of hell.

Of course it doesn’t hold up. First, because once that future AI has come into being, it will no longer have any need to make good on that threat. And second, because even if it all did add up, where’s the effing evidence? It’s at best a garage dragon.

But the point is: I was under the firm impression (that I’m happy to update/change if I’m wrong about this) that present day AI could not have come up with what Roko came up with on its own steam; and also that it would not be able to critique this idea on its own steam, if no one else, no human, had ever thought or spoken about it. (So that, coming from there, whether or not Roko lifted it off of some SF somewhere is kind of irrelevant, because that SF writer is then the guy that came up with the original idea: and the question becomes, might present day AI be able to do what that writer did and come up with this idea on its own steam, as well as critique it on its own steam?)


A discussion into Roko’s basilisk itself will be completely OT here, obviously. But if you’d like to discuss it further, then here’s a separate thread I just started specifically for this.
 
they should have stuck to the
Microsoft AI CEO: Your white collar job is gone in 18 months

That's basically everyone reading this on their work computer right now.

Welcome to the AI revolution. Your degree's worthless and your mortgage is due.
This is nonsense: IT jobs fluctuate extremely predictably, and what we are currently seeing is completely in line with a pre-LLM world.
The economy is going down the drain, so of course companies are downsizing.
It is very stupid to actively work to reduce the pool of future employees for your company.
 
they should have stuck to the

This is nonsense: IT jobs fluctuate extremely predictably, and what we are currently seeing is completely in line with a pre-LLM world.
The economy is going down the drain, so of course companies are downsizing.
It is very stupid to actively work to reduce the pool of future employees for your company.
Nobody claims AI companies are smart ... not counting nVidia in that.
 
Oracle has committed to building more data centers than is physically possible. I would not keep any of their stock if I had any.
 
...snip...


eta:
In which case, assuming the above is indeed what you're saying, assuming you answer that with a Yes: then are we aware of any
original idea, or at least a kind-of-sort-of original idea, like this basilisk thing, that AI has come out with, in any context? Or are we aware of any original critique it has ever produced, critique of anything at all, off of its own steam and off of its own critical thinking and not just paraphrased and collated?
Move 37 from my previous post. Current AI is not just the LLM AIs.
 
Oracle has committed to building more data centers than is physically possible. I would not keep any of their stock if I had any.
That's one of the ones I was wondering about a few days back. All these promised new datacentres can't come onstream for years; they are real-world physical buildings, and the tech bros have no idea of the real world and how long things take.
 
Move 37 from my previous post. Current AI is not just the LLM AIs.

Ok. Disquieting, if it's the case that AI's already the equivalent of us, in terms of creative ideas as well as critical thinking. But disquieting or not, if that's how it is, then that's how it is.

If. Not sure that's actually been resolved, as far as this discussion goes. ...I mean, I take your point about this Go game thing, but are you, for instance, of the opinion, of the informed opinion, that present-day AI is capable of thinking up something like Roko's Basilisk off of its own steam, and also capable of critiquing an idea like that off of its own steam (that is, even when there's no ready-made critique already existing that it might collate and paraphrase and regurgitate)?
 
Welcome to the AI revolution. Your degree's worthless and your mortgage is due.
I guess we're all supposed to be retrained to—what?—shovel Trump's coal into the AI boilers? What happens when millions of Americans are unemployed and can't make mortgage payments? What happens to property values when there's a glut of foreclosed properties on the market? That would make 2008 look like a speed bump. It's almost like these guys don't think anything through.

And it's such a weird flex. "AI will make your job easier and more fun!" suddenly became, "AI has ruined your career and your life, and that's the way we want it."
 
