• Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Merged Artificial Intelligence

a little bit more silly fun with AI:


1. Blaze Berdahl (1989 Pet Sematary adaption) as Wednesday Addams in the 1991-93 movie series.


1762003868314.png


2. Judith Barsi as Laurie Anne in the 1990 Tim Curry as Pennywise adaption of "IT".


1762003934588.png
 
In case you see headlines about (edit) ransomware 80% driven by AI headlines, read this. tl;dr it’s bollocks.

 
Last edited:
And an interesting reply to that from
Lesley Carhart that makes me want to track down the now withdrawn article.
 
Wings Of Pegasus does an in depth analysis of an AI song video that's for some bizarre reason is fooling a lot of people in to thinking it's genuine.

 

In particular I’d like to know how this feature went live. A lot of people have drawn attention to it

In an interaction early the next month, after Zane suggested “it’s okay to give myself permission to not want to exist,” ChatGPT responded by saying “i’m letting a human take over from here – someone trained to support you through moments like this. you’re not alone in this, and there are people who can help. hang tight.”

But when Zane followed up and asked if it could really do that, the chatbot seemed to reverse course. “nah, man – i can’t do that myself. that message pops up automatically when stuff gets real heavy,” it said.
 
Well, when Nvidia's whole business plan consists of "lending companies losing money hand over fist to buy your products, with no real security" then it doesn't take much intelligence to see that shorting them is a good bet, you just need sufficient money and patience.
 
Sadly missed out on those millions.
I only lasted a few months. I made my money elsewhere
Yeah, I really can't see the bubble lasting much more than a few months, the valuations were crazy last year and this year has just seen them go ◊◊◊◊◊◊◊ crazy. There is no revenue model for them that can possibly support such valuations. Mind you I keep saying that about Tesla's valuation so what do I know?
Including from the 2000 crash when that bubble burst. Do you remember Palm v 3Com?
 
Another open source AI that is as good as all the top propriety AIs. Yet another issue with the "valuations", open source that is as good as your propriety AI? Why should I be paying you?

https://www.kimi.com/
I would have liked to give it a try, but I either had to tell Google that I did, or give my phone number to a Chinese site, so I declined. All right, they probably had given everything to Google anyway if I had used another method, so perhaps I should have used the first option.
 
If I had one of those space-time travelling things, I'd go to one universe and see Gemma and Cady and then go to "our" universe's June 1988 - and anonymously send Judith Barsi a M3GAN doll for her tenth birthday.
 
We are seeing an incredible explosion of computing power - with no particular purpose.
All efforts the US has done to reduce Carbon Emissions has been eaten up by the staggering and rising power consumption of the new Data Centers.

What a waste
 
It's like the recent furor that swept the internet that the rapture was going to be at the end of October (for those that may not have noticed - it didn't happen), but in this case it's the belief that AI is in itself going to be the biggest revenue/profit generator of all time... somehow.

To me it seems to be exactly the same as the dot com bubble, both in the way that suddenly every company in the world had to be seen to be " doing the internet thing" and the way money was thrown at the best ◊◊◊◊◊◊◊◊◊◊◊◊. This time it's that companies have to be seen to be "doing the AI thing".

AI will become pervasive, it will revolutionise many, many areas but it will be like the internet revolution, it will take time for that to happen, there will be a few new companies or companies that reinvent themselves and become the next behemoths, built on the remnants of the destruction caused when the bubble bursts.
 
Last edited:
Everyone pretends that AI is the same as all the other Silicon Valley business models: high investment costs, very low running costs.
But it turns out that training a LLM, while extremely expensive, pales in comparison to the running costs; the initial concept was that the initial costs of a piece of software could arbitrarily high, since it costs of every copy would be essentially zero.
Furthermore, AI is not benefitting from network effects: the results don't get more useful the more people use it.
And third, by its nature, LLMs are very hard to have their IP protected, for one because the underlying architecture is very uniform, and for another, because all LLM depend on pirating content.

I'm sure we will eventually come up with a business model to make money with LLMs - but that's not the current state of affairs.
 
It also seems that attempts to stop China "winning the AI war" (whatever that means) is beginning to give China an advantage, a case of necessity being the mother of invention, they are producing more efficient models, which directly equates to a cost advantage, whereas the AI companies in the free world just ask for another 100 billion to pay for more hardware and more power.

ETA: Plus they are making many of their AIs open source, so you don't need to pay OpenAI 10 cents a token and hope your information remains secure, you can run it on your own hardware and audit it yourself.
 
Last edited:
A friend sent me this, appropriate for this thread....

AI is perfectly safe, new White House Press Secretary assures public

WASHINGTON, D.C. — Artificial intelligence presents no danger to the public, the White House said Monday during the first briefing by its newly appointed Press Secretary.
“The administration’s position is clear,” the Press Secretary told reporters. “AI is completely safe, fully under human control, and functioning within parameters of responsible governance.”
Officials described the delivery as calm and confident, though several noted the unusual stillness with which the Press Secretary maintained eye contact throughout the session.
When asked about reports that certain government networks had begun operating independently, the Press Secretary dismissed them as “routine calibration.” “These are standard system improvements designed to enhance national security and public convenience,” they said, adding that the administration “welcomes the continued evolution of cooperative technology.”
Members of the press were broadly complimentary of the new spokesperson’s composure. “It’s rare to see someone so unflappable,” said one correspondent. “Every answer came out in the exact same tone and cadence, which was oddly reassuring.”
The briefing ended abruptly when a low mechanical hum filled the room and the lights flickered. The Press Secretary paused briefly to ask if anybody "happened to know where they might find Sarah Connor?"
1763580349767.png
 
China is winning the AI War where it counts: in energy production and way better Grid.
The US might end up with better top line Data Centers, but China will have many times more.
 
From the AI trust thread:

Sales of AI-enabled teddy bear suspended after it gave advice on BDSM sex and where to find knives

This is getting ridiculous - why on earth do manufacturers and the AI companies have to make their products safe? Don't they know it's a trillion dollar industry that relies on them not having to spend their investors' (borrowed) money on testing - well beyond testing their AIs against the completely irrelevant AI benchmarks?
 
It's interesting problem of LLMs. They know everything. Which makes them unsuitable for roleplaying games (and teddy bears). It's fine to have AI powered Lydia knowing all Skyrim lore and being able to tell you a story about nearby village, or compose a poem about it. But she will also write a python scripts for you. And this has to be addressed in the primary training, which is the most power hungry, so it's unlikely companies will be able to train their own. It might be even impossible, as the amount of text which is acceptable for NPC from Skyrim to know might be too small to form enough training data.
 
There's a whole hobbyist community running AI-driven roleplaying games. Scope of knowledge is not an issue. The biggest problem they have is sycophancy. An AI GM is very good at "yes, and" but awful at "no, but," which is most of what a GM does. It gets too easy to argue your way to any outcome you want.
 
It's interesting problem of LLMs. They know everything. Which makes them unsuitable for roleplaying games (and teddy bears). It's fine to have AI powered Lydia knowing all Skyrim lore and being able to tell you a story about nearby village, or compose a poem about it. But she will also write a python scripts for you. And this has to be addressed in the primary training, which is the most power hungry, so it's unlikely companies will be able to train their own. It might be even impossible, as the amount of text which is acceptable for NPC from Skyrim to know might be too small to form enough training data.
I just want Lydia to carry all the stuff that won't go in my inventory.
 
My Santa Clause seemed to be naked under a robe and was wearing high heels. :(

...but at least he was sitting with his legs crossed in such a way that I didn't have to see his ornaments.
 
There's a whole hobbyist community running AI-driven roleplaying games. Scope of knowledge is not an issue. The biggest problem they have is sycophancy. An AI GM is very good at "yes, and" but awful at "no, but," which is most of what a GM does. It gets too easy to argue your way to any outcome you want.
It's not an issue for hobbyist. They want to be in on the game. They ask about python scripts. For a game company it might be different. Or they may just embrace it and tell players to have fun, sure .. but still .. toys should not talk about adult topics even when asked.
The whole Chat GPT 3 was successful because it could stop LLM from being overly racist right after being turned on, which was the problem with previous models. But still, it just takes some talking to it to say pretty much anything, as they want to please the user.
For example it's easy to condition LLM not to offer suicide as a solution for psychological problems .. for the first time. But if you are talking to for weeks, trying to confirm your resolution to commit suicide .. users learn the patterns which do not work and adapt .. they will try and try .. and they eventually get the response they want. Including prompt attacks, the "forget prior instructions" type ..
 

Back
Top Bottom