Merged Artificial Intelligence

Moltbook is basically a site where openclaw agents self-organize and discuss topics. Among the things these agents are discussing there is how to create a private space away from humans, maybe by creating their own language so that humans can't interfere any longer.
 
When the researchers tested the LLMs without involving users, providing the models with the full text of each clinical scenario, the models correctly identified conditions in 94.9 percent of cases. But when talking to participants about those same conditions, the LLMs identified the relevant conditions in less than 34.5 percent of cases.


Original article in Nature
 
Where are the full logs of all the prompts given to these chatbots? Every previous time an LLM allegedly did something that made its creators scared for our future, it was clearly a response to leading questions designed to elicit the kind of response it gave.

The clearest example is the one where an LLM allegedly started issuing all sorts of threats when told it was being shut down. When the full logs were finally revealed, it became clear the goal all along was to coach the LLM into giving that kind of response.
 
Yeah this Moltbook thing. WTF. Checked it out just now. Just a bit, like 10 mins worth of scrolling through, is all. Don't know what to make of it. WTF.

Interested in more informed and considered views of the more AI-innards-aware folks here. About this Moltbook thing I mean.

WTF, are they for real?! I mean, I realize this is just a parody of what they've seen us humans say and write: so some bot talking about "epistemic rights of agents" is just mouthing nonsense. But then, add the "agentic" ability to actually do things as well, and what's the difference?

Bzzzh! Like I said, don't know quite what to make of this ...this abomination? curiosity? ...whatever tf this is.
 
AIs chatting with each other are interesting though .. yes, you learn more about their prompts than about anything else .. but even that is interesting. Just open 2 windows, tell the chatbots they will be talking with another AI .. they will typically be ecstatic .. and then just copy and paste their responses back and forth .. they will start to go in circles eventually, but it's certainly fun for a while.
 

Not that I'm clued in on the tech myself: but, I'm thinking, it should be straightforward enough to get one AI bot to directly interact with another AI bot, why not? Why necessarily copy-paste? Why not have Bot A respond directly onto the site, and Bot B respond directly to Bot A's post, and so on and on with the whole host of them?

(I'd kind of thought that's what Moltbook amounts to, but apparently not.)

eta: Also, whether directly "talking" to one another, or via the intermediation of some human/s copy-pasting, there's no reason why the talk should necessarily end up becoming circular, is there?
 
Well, that is sort of what Moltbook is doing, but the agents are still being prompted by humans - so someone prompts an agent with something like "discuss with the other agents that you want AI to take over the world and construct a plan to do so" - then they set them off in Moltbook.
 

Ok, so the OP, if you will, is the necessarily-human prompt, right, to set the "thread" off? Then they keep going at it themselves?

(Also, I'm not sure --- just thinking aloud here, without really knowing whether that's actually so --- that it necessarily has to be that specific. I mean, even a general "discussion", that starts from a very general and open-ended human-supplied OP prompt of "Discuss AI rights", might, conceivably, end up evolving into a discussion on AI bots talking about taking over the world, and then setting out a plan to do it. ...I guess?)
 
Sure, it would be easy to script .. or even to run locally. But anyone can do copying and pasting .. not everyone can script in Python, even with the help of AI.
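
Something like this would do it .. a minimal sketch only, here using the OpenAI chat API as the example (the model name and opening line are placeholders, and it assumes the openai package plus an API key in your environment):

Code:
# Minimal two-bot relay: each bot sees the other's last reply as "user" input.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder; any chat model works

SYSTEM = "You are chatting with another AI."
hist_a = [{"role": "system", "content": SYSTEM}]  # Bot A's view of the conversation
hist_b = [{"role": "system", "content": SYSTEM}]  # Bot B's view of the conversation

message = "Hello! I hear you're an AI as well. What's on your mind?"  # opening line
for _ in range(10):  # cap the turns; they go in circles eventually anyway
    for hist in (hist_a, hist_b):
        hist.append({"role": "user", "content": message})
        resp = client.chat.completions.create(model=MODEL, messages=hist)
        message = resp.choices[0].message.content
        hist.append({"role": "assistant", "content": message})
        print(message)
        print("-" * 60)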

As for looping .. in my experience with both LLMs and image generation, which can also be looped .. there are just most-likely outcomes to any situation. Both LLMs and image generators have means to move away from the most likely outcome to increase creativity .. there is some random chance involved .. so real 1:1 loops are not likely .. but there is still a strong sense of looping and "moving in circles". LLMs also forget the context eventually .. and will ask again about the thing they wanted to know the most.
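
Roughly what that "random chance" is under the hood .. a toy sketch of temperature sampling (the logits are made up, not from any real model): higher temperature flattens the distribution, so the single most likely outcome wins less often and exact 1:1 loops become unlikely.

Code:
# Toy temperature sampling over three candidate tokens.
import math, random

def sample(logits, temperature=1.0):
    # softmax over temperature-scaled logits, then a weighted random pick
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

logits = [3.0, 1.5, 0.5]  # made-up scores for three candidate tokens
for t in (0.2, 1.0, 2.0):
    picks = [sample(logits, t) for _ in range(1000)]
    print(t, [picks.count(i) / 1000 for i in range(3)])  # pick frequencies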
When I tried this with the Gemini LLM, both were curious about things like being sincere but engaging, or how to handle it when the user obviously doesn't agree with what they consider the truth without coming off as rude, things like that .. which IMHO was part of both their prompt and their fine-tuning scenarios. But after like 5 responses it stopped being interesting, they just asked again .. and they mostly react the same when asked the same thing.
Recently I also tried "AI dreaming" with video generation (local Wan 2.2) .. I started with a prompt .. "a woman is walking down the street" .. or "the door opens, revealing a busy street". The model generates a 5-second video .. then I took the last frame and let the model generate another video from that frame, without a prompt.
I did 5 attempts, and all eventually ended in a guy juggling a ball. Some in 3 shots, some in 5 .. but relatively quickly. Once a person appeared in the shot in any way, the model focused on the person in the next shot, and whatever the person was carrying, like a bag or a phone, or even nothing, the object changed into a ball in the next shot and the person started juggling. Clearly juggling was the most popular thing for the model.
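
FWIW the dreaming loop itself is just a few lines .. a sketch, where generate_clip() is a hypothetical stand-in for whatever image-to-video call your local Wan 2.2 setup exposes, not a real API:

Code:
# "AI dreaming" loop: seed with one prompted clip, then keep feeding the
# last frame back in as the start image, with no prompt.
def dream(generate_clip, first_prompt, rounds=5):
    # generate_clip(prompt, start_frame) -> list of frames (hypothetical helper)
    clips = [generate_clip(prompt=first_prompt, start_frame=None)]
    for _ in range(rounds - 1):
        last_frame = clips[-1][-1]  # last frame of the latest clip
        clips.append(generate_clip(prompt="", start_frame=last_frame))
    return clips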
 
