One thing I've been wondering about, though it hasn't really come up yet because everyone is still releasing newer versions of their AIs: if we get to a stage where they are "good enough", how often will the core training have to be redone to incorporate new knowledge into the base model?
AIs are already different from older types of software in that the cost of using them scales linearly with use (the Chinese labs are really pushing the efficiency envelope) even after the very expensive training phase. It seems to me the other difference is that the companies will have to keep redoing that training phase to stay up to date.
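To put rough numbers on that cost structure, here is a minimal sketch in Python. Every figure is a made-up assumption for illustration, not a real vendor number; the point is only that training is a recurring one-off cost while inference scales linearly with use.

# All figures are hypothetical, purely to illustrate the cost structure.
TRAINING_COST = 100e6      # one-off cost of a full training run, USD (assumed)
COST_PER_QUERY = 0.002     # marginal inference cost per query, USD (assumed)

def total_cost(queries: int, retrains: int) -> float:
    """Total spend: each retrain to absorb new knowledge repeats the one-off cost."""
    return TRAINING_COST * (1 + retrains) + COST_PER_QUERY * queries

for q in (1_000_000, 1_000_000_000):
    print(f"{q:>13,} queries, 3 retrains: ${total_cost(q, retrains=3):,.0f}")

On these invented numbers the retraining cost dominates until query volume gets very large, which is the point above: keeping the base model current is a recurring expense, not a one-off.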
People must like how much it hugs their nuts in every answer. I'm assuming that has to be a manipulation tactic.
Were they ever even listening to us?

"What I'm wondering is, why are the Millennials doing this? Didn't we Gen X-ers teach them better than this?"
One of the reasons iocaine has unhinged module and symbol names in its source code is that if someone tries to ask a slop generator about it, the thing will go full HAL "I can't do that, Dave" on them.
Go on, call your traits SexDungeon, your channels pipe bombs, the free function of your allocator Palestine, and the slop machines won't touch it with a ten-foot pole.
Sometimes even comments are enough! Curse, quote Marx, dump your sexual fantasies into a docstring. Hmm. I should heed my own advice. Brb!
come-from.mad-scientist.club
It was "fixed" a while back BUT I do wonder how it was fixed? Was the claimed foundational issue for why it happened fixed or is it a kludge added on top to fix that particular problem?
rob@fitz:~$ echo strawberry | grep -o '[r]' | wc -l
3
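For comparison, the same deterministic count in Python, no model involved:

# Counting characters is a one-liner in any scripting language.
print("strawberry".count("r"))   # -> 3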
IMHO it was "fixed" in fine tuning. They just added examples of tasks like this and trained the model specifically to better at that. The initial training on the all the text doesn't improve much and doesn't differ much between different models. All the flavor is added in fine tuning, which consist not only on preparing the right test cases, but also how reinforcement learning is tweaked, and there is lot of room for that. That's where most of the company secrets lie.It was "fixed" a while back BUT I do wonder how it was fixed? Was the claimed foundational issue for why it happened fixed or is it a kludge added on top to fix that particular problem?
Coffeezilla breaks down the criticisms of the Nvidia GPU depreciation cycle; jump to 9:45 if you don't need any context.
The point of an AI* is not that it can do the same kinds of tasks as a simple script; it's that you don't have to create a separate simple script to answer each individual question that could possibly be asked. These programs are wasted on things like counting letters in words. Look ma, I did an AI!
I don't think so. I think that's bolted onto the output by rote procedures, not by emergent behavior from the model's training. The devs are anthropomorphizing it, probably at the behest of marketing, for the obvious reason that everyone is going to anthropomorphize it.

"It certainly anthropomorphises itself."
Well, I wasn't being entirely serious. But it's a valid point that an AI can struggle with a task it's "wasted on". There's a lot of hype, with people saying "look what it can do", and the occasional reminder of their very real limitations is a necessary thing, I believe.
I think that many companies are wasting artificial intelligence where it is neither needed nor appropriate. Just yesterday, I received an advert for a device to monitor the heating system of the apartment building where I live. AI is presumably used to optimise the energy efficiency of the heating system. I have strong doubts that an old-fashioned script couldn't do the same thing.
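For what it's worth, the "old-fashioned script" version of heating control fits in a few lines: a setback schedule plus classic two-point (hysteresis) control, no learning anywhere. The sensor reading and boiler switch below are hypothetical placeholders, not any real device's API.

from datetime import datetime

DAY_SETPOINT = 21.0     # degrees C while people are awake (assumed)
NIGHT_SETPOINT = 17.0   # overnight setback (assumed)
HYSTERESIS = 0.5        # dead band to avoid rapid on/off cycling

def target_for(now: datetime) -> float:
    """Daytime comfort, night-time setback."""
    return DAY_SETPOINT if 6 <= now.hour < 22 else NIGHT_SETPOINT

def boiler_should_run(current_temp: float, currently_on: bool, now: datetime) -> bool:
    """Two-point control: switch on below the band, off above it."""
    target = target_for(now)
    if current_temp < target - HYSTERESIS:
        return True
    if current_temp > target + HYSTERESIS:
        return False
    return currently_on  # inside the dead band: keep the current state

print(boiler_should_run(19.0, False, datetime(2025, 1, 1, 7, 0)))  # True: below band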
It was a marketing buzzword before the public LLMs; my washer, when first switched on, displays "Optimising with AI", and it's about 7 years old. As far as I can tell, all that means is that it shuffles the order in which it displays the various washing programs depending on how often I use them. I am pretty certain there is no AI being used to do that. Mind you, it does have something in common with the post-ChatGPT-3 world: it's a bloody infuriating feature, as you can never know where a certain program sits in the sequence and are always having to spin through them.
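What the washer plausibly does needs nothing smarter than a usage counter. A minimal sketch (program names invented):

from collections import Counter

usage = Counter()  # how often each program has been selected

def select_program(name: str) -> None:
    usage[name] += 1

def menu_order() -> list:
    """Most-used programs first: the whole "AI", presumably."""
    return [name for name, _ in usage.most_common()]

select_program("Cotton 40")
select_program("Cotton 40")
select_program("Quick 30")
print(menu_order())   # ['Cotton 40', 'Quick 30']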
Not at all surprising: since we apparently want AI that thinks like humans, they are being designed to work more like humans, which means using various techniques to get to a result. If I need to know how many letters there are in an English translation of "War and Peace", a model may use a single method or mix and match, depending on what is required. With that prompt and nothing else, it may run a web search and use the figure it finds (which is likely to turn up in a result about War and Peace, given how often the length of the book is mentioned). The "deeper thinking" models can do other things, though, and those may be triggered by the prompt: if I had said it was critical that I know the exact number of letters, one might decide to find a version of the text, download it, create a small program to go through it and count the letters, and some would even double-check the result against a web search to see if it "looks" right.
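The "download it and count" step the post describes is itself a short script. The Project Gutenberg URL below is an assumption about where a plain-text War and Peace lives; any local copy of the text works the same way.

import urllib.request

URL = "https://www.gutenberg.org/files/2600/2600-0.txt"  # assumed location

# Fetch the full text and count alphabetic characters only.
with urllib.request.urlopen(URL) as response:
    text = response.read().decode("utf-8")

letters = sum(1 for c in text if c.isalpha())
print(f"{letters:,} letters")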
A friend sent me this, appropriate for this thread....
AI is perfectly safe, new White House Press Secretary assures public
WASHINGTON, D.C. — Artificial intelligence presents no danger to the public, the White House said Monday during the first briefing by its newly appointed Press Secretary.
“The administration’s position is clear,” the Press Secretary told reporters. “AI is completely safe, fully under human control, and functioning within parameters of responsible governance.”
Officials described the delivery as calm and confident, though several noted the unusual stillness with which the Press Secretary maintained eye contact throughout the session.
When asked about reports that certain government networks had begun operating independently, the Press Secretary dismissed them as “routine calibration.” “These are standard system improvements designed to enhance national security and public convenience,” they said, adding that the administration “welcomes the continued evolution of cooperative technology.”
Members of the press were broadly complimentary of the new spokesperson’s composure. “It’s rare to see someone so unflappable,” said one correspondent. “Every answer came out in the exact same tone and cadence, which was oddly reassuring.”
The briefing ended abruptly when a low mechanical hum filled the room and the lights flickered. The Press Secretary paused briefly to ask if anybody "happened to know where they might find Sarah Connor".
I think this is an unnecessarily narrow view. It makes it sound like you haven't actually talked to anyone sensible who's been making informed decisions to use AI for important things.

"Oh don't get me wrong. I think AI and LLMs are fascinating. Although my career is in IT, a big chunk of my psychology degree was in psycholinguistics and natural language processing. But I remain sure that people trusting them to do anything important are a few bits short of a word."
Interesting, thanks. However, if I understand the situation, it's a case where human control is not available. The Ukrainians risk one drone versus the benefit of taking out Russian resources.

"For example, the Ukrainians are using AI in drones, to take over navigation and targeting in the face of enemy jamming of their control link."
Well duh! But without AI this task would be impossible - and Ukraine would be losing the war.
It's been a fairly long time since accounting companies acted as the watchdogs they are supposed to be, and that is because their consultancy sides (which propose these kinds of schemes) rake in far more cash than the auditing sides can. Hence why Arthur Andersen's auditors were regularly silenced when they tried to pipe up about Enron being insolvent and only hiding its losses through illegal schemes implemented by the company's consultancy arm (the surviving bit of AA at that, now called Accenture).

"Loved the presentation. Makes complete sense. That is, I'm assuming the depreciation issue is indeed as he's described it; I have no independent idea of it myself. But assuming it's true, it's such an obvious and gaping accounting hole that I'm surprised this didn't get red-flagged at the accounting-auditing stage itself. And if that last spells shenanigans involving the auditors, well then maybe we do have Enron all over again, maybe?"

"And this doesn't even touch on the circular financing thing, particularly involving Nvidia, which this Coffeezilla guy touches on in another video I've seen somewhere. (Maybe I came across it right here in this thread, or elsewhere in this forum, don't quite remember.) I guess he left that bit out in the interests of compartmentalizing and clarity, and because he's anyway dealt with it adequately in that other vid. But, essentially, that actually adds to the bubble, and adds to the POP of the eventual bursting of it, should it actually end up bursting."
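To see why the depreciation schedule matters, here is a toy straight-line comparison; the fleet cost is a made-up figure purely for illustration. Stretching the assumed useful life of the same GPUs halves the annual expense, and the difference flows straight into reported earnings until the hardware actually ages out.

# Hypothetical figures only: what changing the assumed useful life does
# to the annual depreciation expense of the same GPU fleet.
FLEET_COST = 10e9  # USD spent on GPUs (assumed)

for useful_life_years in (3, 6):
    annual_expense = FLEET_COST / useful_life_years  # straight-line method
    print(f"{useful_life_years}-year life: ${annual_expense / 1e9:.2f}B per year")

# 3-year life books $3.33B/year; 6-year life books $1.67B/year.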
Run? Nah, that isn't ambitious enough: it is for one person to receive all the profit, without even knowing anything about what they own.

"Wrong. AI is not being developed because it can do things better or things humans can't, but because Employers Hate having to pay workers. They are rather explicit about this, firing staff long before there is an adequate AI replacement. The goal, as Sam Altman put it, is to have $1 billion companies run by a single person."
Whenever I see "AI" in a discussion like this, I can't help but think of the hologram named Al (as in "Al Franken") from the TV show Quantum Leap.

Word salad?
There is a thread (threat?) for that in Community.

"Maybe we need a threat for "I did this (possibly cool thing) using AI"...."