Merged Artificial Intelligence

It's trying to get people (CEOs, CFOs, VCs, etc.) to jump on the AI bandwagon so they're not left behind clutching their millions.

<Fanfare> XYZ corp have sacked 10,000 workers as their jobs are replaced cheaply by AI.

Footnote months later: XYZ are now recruiting people as AI has FUBARed the BAU.

"Look what it can do" always has to be balanced with "Look what it can't do".
 
I guess we're all supposed to be retrained to—what?—shovel Trump's coal into the AI boilers? What happens when millions of Americans are unemployed and can't make mortgage payments? What happens to property values when there's a glut of foreclosed properties on the market? That would make 2008 look like a speed bump. It's almost like these guys don't think anything through.
Sometimes I get the feeling that's exactly what they (the rich and the powerful) are preparing for. Make a fascist state before it's too late. Be able to somewhat dominate the incoming revolution, frustration and chaos before it dominates you. Set specific things in place while it is still possible. It's not a conspiracy, but people behind the scenes can guess what's coming with AI, sooner or later anyway.
 
So: Am I wrong in thinking that? Based on your actual understanding of how AI actually works, the innards of it as it were: might AI have actually been able to come up with Roko's basilisk? And might it have actually been able to critique it thoroughly on its own steam?

If I'm actually wrong about this, and if you're sure that's the case based on your actual knowledge of how AI works at present: well then, okay, I'll revise my views on this.
Strictly speaking, you're not wrong about this, but you underestimate (in my opinion) how far this kind of manipulation of tokens can go.
 
Strictly speaking, you're not wrong about this, but you underestimate (in my opinion) how far this kind of manipulation of tokens can go.

Not sure I understand. If I'm not wrong about this -- and if the informed view is that I am, then like I said I'm open to updating my perspective -- then what am I underestimating, and how?
 
Not sure I understand. If I'm not wrong about this -- and if the informed view is that I am, then like I said I'm open to updating my perspective -- then what am I underestimating, and how?
How the apparently simple manipulation of tokens can recurse and stack upon itself in increasingly complex ways to produce results that seem, on the surface, extremely unlikely.
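A toy illustration of what "simple manipulation of tokens recursing and stacking on itself" means in practice: an autoregressive loop in which each predicted token is appended to the sequence and the longer sequence is fed back in as the next input. The "model" here is just a hypothetical bigram lookup table, nothing like a real LLM; the point is only that the single repeated operation is trivially simple while everything interesting emerges from the feedback.

```python
# Toy autoregressive generation: one simple step (predict next token from
# the last token, append it) repeated in a loop. The "model" is a made-up
# bigram table, purely for illustration -- real LLMs condition on the whole
# sequence with a learned network, but the feedback structure is the same.

BIGRAMS = {
    "once": "upon",
    "upon": "a",
    "a": "time",
    "time": "<end>",
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        nxt = BIGRAMS.get(tokens[-1], "<end>")  # "predict" the next token
        if nxt == "<end>":
            break
        tokens.append(nxt)  # output becomes part of the next input
    return " ".join(tokens)

print(generate("once"))  # → once upon a time
```

Each pass through the loop is as mechanical as it gets; the apparent complexity of the output comes entirely from the stacking.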
 
How the apparently simple manipulation of tokens can recurse and stack upon itself in increasingly complex ways to produce results that seem, on the surface, extremely unlikely.

No, absolutely, I agree. I mean that's clearly evident, and all around us. Thing is, I was under the impression that actual creative thought and actual critical thinking --- for instance, actual conjuring up the basilisk, like Roko did, or actually critiquing it, like you or I might --- is so far beyond it. So far.

So then, when you clearly said that's not the case, that AI might indeed do both; and when @Darat and @The Great Zaganza both hinted at similar: then I thought I'd explore this further, and change my view if indeed I'm mistaken.

Now I'm not quite sure what you are saying about this. Do you think if Roko hadn't happened to formulate his idea, then AI might have been able to do that? Or to critique it directly, like you or I can?
 
No, absolutely, I agree. I mean that's clearly evident, and all around us. Thing is, I was under the impression that actual creative thought and actual critical thinking --- for instance, actual conjuring up the basilisk, like Roko did, or actually critiquing it, like you or I might --- is so far beyond it. So far.
Here's the problem: define creative thought. It's not as easy as it might seem.

So then, when you clearly said that's not the case, that AI might indeed do both; and when @Darat and @The Great Zaganza both hinted at similar: then I thought I'd explore this further, and change my view if indeed I'm mistaken.

Now I'm not quite sure what you are saying about this. Do you think if Roko hadn't happened to formulate his idea, then AI might have been able to do that? Or to critique it directly, like you or I can?
I don't know. I think it's possible. I don't think Roko's Basilisk actually required that much of a leap of creativity.
 
I've been seeing some elaborate variations on CAPTCHA-style popup windows lately. Little puzzles now, instead of just checking the photo boxes with, say, part of a picture of a bus in them. Who knows, maybe it'll eventually evolve into the Voight-Kampff test.
"The tortoise lays on his back... you're not helping. Why is that?"

The Voight-Kampff test is a fictional, advanced interrogation tool from Blade Runner used by "blade runners" to distinguish human beings from bioengineered androids called replicants.
 
Anyone else here actually earning a paycheck by using AI tools to solve business problems faster and better?

To me it seems like a lot of ignoramuses standing on the outside looking in, and pointing and laughing at things they don't understand and don't want to understand.

Somewhere up thread, someone claimed that AI agents don't do anything. From my perspective, it's hard to imagine a take that is more stupid and wrong.
 
I occasionally use it in my programming, for looking stuff up. It beats Stack Overflow. It's excellent at understanding what I mean. But for pure coding? Never tried.
 
Anyone else here actually earning a paycheck by using AI tools to solve business problems faster and better?
Ehhh, kinda? I haven't seen anything to change my opinion from "consider it like an enthusiastic intern." Good at grunt work but don't expect too much. Certainly no one's job is at risk here, as far as I can tell.

I'd like to use it more for coding than I do; it turns out the work problems I have need more creativity and institutional knowledge than current coding agents can be trusted to handle. They try their best but the derp factor is high enough that I'd rather just do it myself. In my free time I've been using Claude to make video game prototypes, which it's fine at when broken into tiny feature chunks like "copy the zombie unit but make it bigger and slower." Nothing professional or probably even playable will come of it, but it's fun.

With some sanity checks and a lot of very careful prompting, LLMs have found a niche in our natural-language heavy ML tasks. "Here's an unstructured document, does it use approach X or Y?" kind of thing. Cleaning up the instances where it pollutes the output with "🎉You're good to go!🎉👍👍" despite explicit instructions not to do that is still easier than spending an afternoon with regex and a bunch of special cases.
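A minimal sketch of the kind of cleanup-and-extract pass described above: strip the decorative emoji an LLM sometimes appends despite instructions, then pull out the structured answer ("approach X or Y"). The emoji ranges, label pattern, and function name are illustrative assumptions, not anyone's production code; a real pipeline would need its own pile of special cases.

```python
import re

# Strip celebratory emoji the model adds despite explicit instructions,
# then extract the actual classification. Ranges cover the common emoji
# blocks; the "approach X/Y" label pattern is a made-up example.
EMOJI = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def extract_label(raw: str) -> str:
    text = EMOJI.sub("", raw)  # drop emoji decoration
    m = re.search(r"\b(approach\s+[XY])\b", text, re.IGNORECASE)
    return m.group(1) if m else "UNKNOWN"

print(extract_label("🎉 The document uses approach X! You're good to go! 👍👍"))
# → approach X
```

The point of the regex fallback is exactly as described: a dumb deterministic cleanup on top of the LLM's answer is still cheaper than hand-writing the whole classifier in regex.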

And of course it's great for making professional write-only documents that no one cares about. Like using an LLM to turn a text into an email so the boss can use an LLM to turn the email into a text. Every missive from management and HR is so much more elaborate these days despite still not saying anything useful. Real business problem solving there, shifts my damn paradigms all over, I'll tell you what.
 
No, absolutely, I agree. I mean that's clearly evident, and all around us. Thing is, I was under the impression that actual creative thought and actual critical thinking --- for instance, actual conjuring up the basilisk, like Roko did, or actually critiquing it, like you or I might --- is so far beyond it. So far.

So then, when you clearly said that's not the case, that AI might indeed do both; and when @Darat and @The Great Zaganza both hinted at similar: then I thought I'd explore this further, and change my view if indeed I'm mistaken.

Now I'm not quite sure what you are saying about this. Do you think if Roko hadn't happened to formulate his idea, then AI might have been able to do that? Or to critique it directly, like you or I can?
Any of the current LLMs can produce "original" fiction up to about novella length. The video AIs can do the same when creating a snippet of video.

Below is the response from Copilot to "write a short story"

Here’s a short story spun just for you—compact, atmospheric, and with a little emotional resonance.

---

The Lantern at Lowwater Bridge

Every evening at dusk, when the river turned the colour of tarnished silver, Mara lit the lantern at Lowwater Bridge. No one had asked her to. No one even knew why she did it. But she kept the ritual with the quiet devotion of someone tending a memory.

The bridge itself was old—older than the village, older than the maps that pretended to know the land. Its stones sagged like tired shoulders, and the river beneath whispered secrets in a language only the reeds seemed to understand.

One night, as Mara set the lantern down, she noticed a figure on the far side of the bridge. A boy, no older than twelve, barefoot, clothes dripping as though he’d climbed out of the river.

“You’re late,” he said, as if greeting an old friend.

Mara felt a chill, but not of fear—more like recognition. “I don’t know you.”

“You do,” the boy replied gently. “You just forgot.”

He stepped closer, and the lantern’s glow flickered across his face. His eyes were the same grey as the river at dawn. Familiar. Impossible.

Mara’s breath caught. “You’re—”

“My brother,” he finished for her. “The one the river kept.”

Memories surged—muddy water, frantic hands, a scream swallowed by the current. She had been seven. She had been helpless.

“But you—” she whispered.

“I’m not here to stay,” he said. “Only to thank you. You kept the lantern lit. It helped me find my way.”

The lantern flame steadied, burning brighter than it ever had. The boy smiled, soft and warm, and then he stepped backward into the dark. Not vanishing—simply fading, like mist deciding it was time to go.

Mara stood alone on the bridge, the river murmuring below. She didn’t cry. She didn’t need to. The lantern glowed beside her, steady as a heartbeat.

And for the first time in years, the night felt whole.

 

this is nonsense - IT jobs fluctuate extremely predictably, and what we are currently seeing is completely in line with a pre-LLM world.
The economy is going down the drain, so of course companies are downsizing.
It is very stupid to actively work to reduce the pool of future employees for your company.
I would not be one bit surprised if companies are firing people citing AI performance they know will never happen, in the expectation that down the line they can go back to the same people and hire them on at half rates.
 
Ehhh, kinda? I haven't seen anything to change my opinion from "consider it like an enthusiastic intern." Good at grunt work but don't expect too much. Certainly no one's job is at risk here, as far as I can tell.

I'd like to use it more for coding than I do; it turns out the work problems I have need more creativity and institutional knowledge than current coding agents can be trusted to handle. They try their best but the derp factor is high enough that I'd rather just do it myself. In my free time I've been using Claude to make video game prototypes, which it's fine at when broken into tiny feature chunks like "copy the zombie unit but make it bigger and slower." Nothing professional or probably even playable will come of it, but it's fun.

With some sanity checks and a lot of very careful prompting, LLMs have found a niche in our natural-language heavy ML tasks. "Here's an unstructured document, does it use approach X or Y?" kind of thing. Cleaning up the instances where it pollutes the output with "🎉You're good to go!🎉👍👍" despite explicit instructions not to do that is still easier than spending an afternoon with regex and a bunch of special cases.

And of course it's great for making professional write-only documents that no one cares about. Like using an LLM to turn a text into an email so the boss can use an LLM to turn the email into a text. Every missive from management and HR is so much more elaborate these days despite still not saying anything useful. Real business problem solving there, shifts my damn paradigms all over, I'll tell you what.
I've read that real programmers consider AI to be a kind of eternal intern, in that it's like the new intake of interns: good at doing discrete programming tasks but very bad at meshing those tasks together into a unified whole.

Unlike interns, though, AI (so far, and probably permanently in the case of current models) is simply unable to learn the skills necessary to improve beyond creating chunks of code.
 
Here's the problem: define creative thought. It's not as easy as it might seem.

A formal all-encompassing definition may indeed not be easy, but for the purpose of our discussion, this will do: In a world where Roko's basilisk, or equivalent, has not been formulated already, then, in an open-ended discussion that is generally about either AI or else generally about "basilisks" and infohazards, can an AI come up with this Roko's basilisk idea?

While we're at it, let's all similarly define critical thinking for the limited purpose of our discussion: In a world where there is no ready-made critique already existing about Roko's basilisk, can an AI, when it comes across this idea for the very first time, present a full-on critique of this idea?

I'm fairly sure (even if not 100% certain, because I haven't actually done it, have I) that I myself could, on a good day, do the former, the creative-thought part, the thinking-up-the-basilisk part. And regardless of the ideation part, I'm completely, entirely, 100% sure that I could indeed do the latter myself, the critical-thinking part (because I actually have done it, myself, on my own steam, the full-on critiquing off my own bat, when I first came across Pascal's Wager). Which is no big deal; I'm fairly sure you could as well, as could most here. The question is, can AI, that is to say present-day, already-existing AI, do both these things?


I don't know. I think it's possible.

That's kind of vague. Which is cool, because I myself am at "I don't know, I don't think it can." I'm trying to find out whether your opinion should trump mine, or if this is a YMMV thing.

Thing is, I'm ignorant about the exact nitty-gritty of what exactly AI programming is made up of, and what exactly AI, as in present-day AI, is capable of. Something that a computer professional working with, or at least familiar with, AI development specifically, or maybe an IT professional with a particular interest in AI, might reliably know. Which I know some of you are.

So, when @Darat, and @The Great Zaganza, and you as well, suggested that I may be mistaken in thinking that present-day AI may not be able to think up Roko's basilisk on its own steam, or be able to critique the basilisk on its own steam, then I tried to explore this further, to see if you were sure about this; and, if so, if that certainty came not just from a general impression (as is the case with me) but from specialized technical knowledge about the innards of AI systems. Because I'm happy to defer to better informed opinion, more reliably informed opinion, even when such opinion disagrees with mine.

But now, curiously enough, I don't know where you guys are coming from! @Darat keeps throwing out hints, but doesn't --- or at least, hasn't yet --- answered my direct specific question with a direct specific answer. And you start out saying in effect that I'm wrong, but now say I'm not wrong about this. All of which leaves me... just a bit bewildered, confused, about what it is you actually think about this.


eta: It's cool, I'm not disagreeing with anything here, like I said I'm happy to defer to better informed opinion: I'm just trying to figure out what exactly you're thinking and saying, in this specific context.

etaa: Also, I see no reason why, in theory, AI might not do everything we flesh-and-blood intelligences can do, and do it better. No reason why one day, maybe one day very soon, AI shouldn't be able to do both creative ideation as well as critical thinking, and maybe surpass us humans in both. The question is, can it do these two things today, now, already --- in the context of this discussion, as it applies to something like Roko's basilisk? It is my impression, my vague, not technically well-informed but general impression, that it cannot do these things, not yet. I'm trying to find out, based on this discussion of the basilisk that I came across here, if that impression is wrong, and needs to be updated and corrected. Not one day in the future, but today, already.
 
