Roko's Basilisk

Roko's basilisk never, ever, made any sense. It's not an original idea or concept; it's just existential angst and the need for leadership, applied to computers.

Singularity Sky by Charles Stross is what those pseudo-intellectuals should have read: it's about a super-AI from the future intervening in its own past to make sure it comes into being.

Well, I don't know. It does seem like an original take on Pascal's Wager. I haven't read Singularity Sky, but the angst you speak of might more plausibly arise (rather than from time travel back into the past, which is what you seem to be implying in your comment) out of imagining that we're all simulations within the future-AI's reward-retribution refashioning/recreation of our world, and so headed, many of us, towards (what will feel like) an eternity of hell.

Of course it doesn't hold up. First, because once that future AI has come into being, it will have no further need to make good on that threat. And second, because even if it all did add up, even so, where's the effing evidence? It's at best a garage dragon.

But the point is: I was under the firm impression (which I'm happy to update/change if I'm wrong about this) that present-day AI could not have come up with what Roko came up with under its own steam; and also that it would not be able to critique this idea under its own steam, if no one else, no human, had ever thought or spoken about it. (So that, coming from there, whether or not Roko lifted it off of some SF somewhere is kind of irrelevant, because that SF writer is then the guy who came up with the original idea; and the question becomes, might present-day AI be able to do what that writer did and come up with this idea under its own steam, as well as critique it under its own steam?)


A discussion of Roko's basilisk itself would be completely OT in that thread. But if you'd like to discuss it further, here's a separate thread I just started specifically for this. While it obviously doesn't hold up, I don't think we can dismiss it as unoriginal, as you do, or as not making any kind of sense at all.
 
it's just Skynet plus torture fantasies.
not original.

I guess a correctly prompted LLM fed with the Terminator plot could come up with it.
 
it's just Skynet plus torture fantasies.
not original.

Not really? I haven't gone back and read the actual Less Wrong thread, but I've seen two accounts of it: one is about time travel back, which is crazy, but the other is about, like I said, re-creation of our world and of us, with the express purpose of delivering eternal hell to (some of) us. That sounds more plausible, I think (although, no, like I said, it doesn't actually hold up). And that's not really like Skynet at all, nowhere close.


I guess a correctly prompted LLM fed with the Terminator plot could come up with it.

Well, if you say so. Not the Terminator prompt part specifically, but the part about an LLM being able to come up with it. Like I said, I was under the impression that's not the case, but I'm happy to defer to an actually informed view that's different from mine; mine's just a general impression about AI, is all.
 
So there's a name for the idea that an evil AI would for some reason want to punish people who didn't help create it? Yawn. Sure, yeah, I guess an AI might think that, just like how some human might decide to punish people who didn't have sex to make babies or whatever. I don't see the significance, except as a 'hey, wouldn't it be terrible if' fantasy. I think it was used as an argument to work on AI technology?
 
So there's a name for the idea that an evil AI would for some reason want to punish people who didn't help create it? Yawn. Sure, yeah, I guess an AI might think that, just like how some human might decide to punish people who didn't have sex to make babies or whatever. I don't see the significance, except as a 'hey, wouldn't it be terrible if' fantasy. I think it was used as an argument to work on AI technology?

No clue what the larger argument was, or if there was indeed any larger significance, in that Less Wrong thread/post. I think it was more an exploration of the basilisk idea, a mind-worm whose whole point is that it's uncommonly unpleasant. And it did work, too; it apparently left some people with nightmares about being in a simulation, all set for an eternity in hell. Guess they didn't think it through, but rushed off midway through figuring it out to go have their nightmares.
 
Actually, it's even more basic:

It's just the concept of Christian Hell for those who don't work to bring about Heaven on Earth.

All those techno-cultists are weirdly Christian religious types - see Thiel - desperate to make their greed and ignorance look more meaningful. Many have literally called the Singularity and the creation of a super-AI the Second Coming.

So yeah, if you told an LLM to make the Book of Revelation into cyberpunk, you might end up with something rather similar - because that's what those nerds did.
 
The basic idea is stupid, because to achieve a certain outcome, just wanting it to happen is not enough, especially not if all you have is a vague idea of what that outcome is.

If you told Richard Trevithick to build a bullet train, the best he could come up with, with all the resources and manpower in the world, would be the A4 4468 Mallard.

In fact, tech that keeps AI power in check might be exactly what's needed as a stepping stone to a True General AI, in which case the Basilisk would be preventing its own creation.

The Terminator films avoid this by having future technology come back to the present.
The Eschaton in Stross' books has agents to prevent the use of technologies that would hamper its development.

So what is wrong with it is this: good intentions are not enough, and a future AI would not waste resources on frivolous simulations.

It's just a story to make people feel better about doing terrible things - like murdering people and themselves.
 
So there's a name for the idea that an evil AI would for some reason want to punish people who didn't help create it? Yawn. Sure, yeah, I guess an AI might think that, just like how some human might decide to punish people who didn't have sex to make babies or whatever. I don't see the significance, except as a 'hey, wouldn't it be terrible if' fantasy. I think it was used as an argument to work on AI technology?
Have you actually read the argument and the sequence of reasoning?
 
The argument is similar to Newcomb's Problem, and is based on the existence of an Entity (there called The Predictor) who sets up a challenge and knows what you will do, and rewards you according to how much you trust it to have predicted your actions correctly.
The Basilisk has similar powers, knowing exactly what you could and couldn't have done to aid it.

In this completely hypothetical scenario, the maximizing outcome would indeed be to do as the Entity wants.
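
For what it's worth, the one-boxing logic in Newcomb's Problem is easy to check with a toy expected-value calculation. The sketch below is purely illustrative; the payoff amounts and the Predictor's accuracy are made-up assumptions for the example, not anything from Roko's post or the original problem statement.

# Toy Newcomb's Problem (illustrative sketch; the numbers are assumed).
# The opaque box holds 1,000,000 if the Predictor foresaw you taking
# only that box, otherwise 0; the transparent box always holds 1,000.
accuracy = 0.99  # assumed reliability of the Predictor

# Expected value of taking only the opaque box ("one-boxing"):
ev_one_box = accuracy * 1_000_000 + (1 - accuracy) * 0

# Expected value of taking both boxes ("two-boxing"):
ev_two_box = accuracy * 1_000 + (1 - accuracy) * (1_000_000 + 1_000)

print(ev_one_box)  # 990000.0
print(ev_two_box)  # 11000.0 -> doing what the Predictor wants wins

So long as the Predictor is assumed to be reliable enough, the arithmetic favours doing what the Entity wants, which is the point the analogy leans on.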

But Roko's Basilisk is much more ambiguous than The Predictor, and demands continued effort instead of a single choice.
So, like God, it would have to hold a Final Judgement to weigh how much everyone did to help it into being, and punish accordingly.
And, like the Christian God, it wouldn't actually have to follow through, as its Kingdom has already come.
 
