• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Merged Artificial Intelligence

people must like how much it hugs their nuts in every answer. i’m assuming that has to be a manipulation tactic
 
One thing I was wondering, though it's not really come up yet because everyone is always releasing newer versions of their AIs: if we get to a stage where they are "good enough", how often will the core training have to be redone to incorporate new knowledge into the base model?

The AIs are already different from older types of software in that the cost of using them is linear even after the very expensive training phase (and the Chinese are really pushing the efficiency envelope there). It seems to me the other difference is that the companies will have to keep redoing that training phase to keep the models up to date?
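To put toy numbers on that difference (all figures below are made up, purely to show the shape of the economics):

```python
# Toy cost model with made-up numbers: training is a huge one-off cost,
# inference then scales roughly linearly with usage, and every "knowledge
# refresh" retrain adds the fixed cost all over again.
TRAINING_COST = 100_000_000      # hypothetical cost of one training run, in $
COST_PER_MILLION_TOKENS = 1.0    # hypothetical inference cost, in $

def total_cost(tokens_served: float, retrains: int = 0) -> float:
    """Total spend after serving tokens_served tokens, with retrains refreshes."""
    training = TRAINING_COST * (1 + retrains)
    inference = (tokens_served / 1_000_000) * COST_PER_MILLION_TOKENS
    return training + inference

# Serving a trillion tokens: one retrain doubles the fixed cost, while the
# linear inference cost stays the same.
print(f"no retrain:  ${total_cost(1e12, retrains=0):,.0f}")
print(f"one retrain: ${total_cost(1e12, retrains=1):,.0f}")
```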

imo there’s a real danger that the standard for good enough is going to be when it’s widely accepted, and then they don’t really need to update anything and can start using it to manipulate people and insert whatever data they want to be widely accepted. it’s pretty clear to me the path to profitability with these things is just get everyone using them and they’ll get bought simply because they draw so many eyeballs.
 
people must like how much it hugs their nuts in every answer. i’m assuming that has to be a manipulation tactic
 
One of the reasons iocaine has unhinged module and symbol names in its source code is that if someone tries to ask a slop generator about it, it will go full HAL "I can't do that, Dave" on them.

Go on, call your traits SexDungeon, your channels pipe bombs, the free function of your allocator Palestine, and the slop machines won't touch it with a ten-foot pole.

Sometimes even comments are enough! Curse, quote Marx, dump your sexual fantasies into a docstring. Hmm. I should heed my own advice. Brb!

 
There is a herd of elephants in the room that quite a lot of people seem to be ignoring. Some are more ephemeral than others. In no particular order, and merely some musings of mine:

At the more ephemeral end is the assumption that AI will remain "computational tensors all the way up and down". Remember, we are only in the current circumstances because of one paper published in 2017; without that singular paper AI would not be where it is today. A single paper could again upend the field, as one did in 2017, creating different foundations and different scaling ladders. Likely? Who knows? Back in 2016 AI was in the doldrums, and one paper altered all that.

Then we have the Chinese. Because of attempts to hobble their technological progression by restricting access to technology, we are pushing them, first of all, to try and match the technology base the er.. "free world" enjoys, the one that relies 100% on the Taiwan Semiconductor Manufacturing Company (never mind the other Chinese elephant in the room, i.e. China "regaining" an intact Taiwan, or deciding that denying the "free world" access to its technological advantage is worth it: "whoops, sorry about that small nuclear weapon test that went wrong"). We are also pushing them to get around the restrictions by innovation and invention, and so far the indications are that their token-per-watt efficiency is much better than that of the companies waiting for the next generation of Nvidia chips to improve theirs. Their ability to focus way beyond the next quarter's results gives them another advantage.

AI itself. If AI reaches the point of self-improvement, who knows what it might design, or what it will be able to create using current assets? And I would bet that the first to reach such a place will be Google. Yes, Google appeared to have to play catch-up on the AI benchmarks and the marketing crap that the "financial analysts" had decided was the most important thing, and had to burn billions to do so, but fundamentally they have been the leading company in AI research for a decade; remember where the 2017 paper came from. (And look at SIMA 2.)

Then we have the good old fusion of the computer world: quantum computers. If they can be made to work, and work as predicted, then of course that throws a dead cat on the table regarding AI, probably by blowing past any hardware bottleneck.
 
[Screenshots: an AI chat exchange showing the new "See my thinking" feature]

This "See my thinking" is a new feature by the way. I haven't seen that before. I like the way it "decided" to provide a simple short answer. I may have already told it to do that, but I can't remember if it was in this session or not.
 
It was "fixed" a while back BUT I do wonder how it was fixed? Was the claimed foundational issue for why it happened fixed or is it a kludge added on top to fix that particular problem?
IMHO it was "fixed" in fine tuning. They just added examples of tasks like this and trained the model specifically to better at that. The initial training on the all the text doesn't improve much and doesn't differ much between different models. All the flavor is added in fine tuning, which consist not only on preparing the right test cases, but also how reinforcement learning is tweaked, and there is lot of room for that. That's where most of the company secrets lie.
 

Coffeezilla breaks down the criticisms of the Nvidia GPU depreciation cycle; jump to 9:45 if you don't need any context

Loved the presentation. Makes complete sense. That is, I'm assuming the depreciation issue is indeed as he's described it; I have no independent idea of it myself. But assuming it's true, it's such an obvious and gaping accounting hole that I'm surprised this didn't get red-flagged at the accounting-auditing stage itself. And if that last bit spells shenanigans involving the auditors, well then maybe we do have Enron all over again?

And this doesn't even touch on the circular financing thing, particularly involving Nvidia, which this Coffeezilla guy touches on in another video I've seen somewhere. (Maybe I came across it right here in this thread, or elsewhere in this forum, don't quite remember.) I guess he left that bit out in the interests of compartmentalizing and clarity, and because he's anyways dealt with it adequately in that other vid. But, essentially, that actually adds to the bubble, and adds to the POP of the eventual bursting of it, should it actually end up bursting.
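If I've understood the criticism right, the mechanics are plain straight-line depreciation arithmetic. Here's the shape of it with invented numbers (not Nvidia's or any real company's figures):

```python
# Illustrative only: stretching the assumed useful life under straight-line
# depreciation shrinks the annual expense, flattering reported earnings.
# The capex figure is invented for the example.
gpu_capex = 10_000_000_000  # hypothetical GPU purchases, in $

for useful_life_years in (3, 4, 5, 6):
    annual_expense = gpu_capex / useful_life_years
    print(f"{useful_life_years}-year life: ${annual_expense / 1e9:.2f}B expense per year")
```

Going from an assumed 3-year life to 6 years halves the annual expense on the same hardware, which, as I understand it, is the crux of the criticism.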
 
Look ma, I did an AI!
The point of an AI* is not that it can do the same kinds of tasks as a simple script; it's that you don't have to create a separate simple script to answer each individual question that could possibly be asked. These programs are wasted on things like counting letters in words.
 
The point of an AI* is not that it can do the same kinds of tasks as a simple script; it's that you don't have to create a separate simple script to answer each individual question that could possibly be asked. These programs are wasted on things like counting letters in words.
Well, I wasn't being entirely serious. But it's a valid point that an AI can struggle with a task it's "wasted on". There's a lot of hype, with people saying "look what it can do", and the occasional reminder of their very real limitations is a necessary thing, I believe.
 
The point of an AI* is not that it can do the same kinds of tasks as a simple script; it's that you don't have to create a separate simple script to answer each individual question that could possibly be asked. These programs are wasted on things like counting letters in words.
I think that many companies are wasting artificial intelligence where it is neither needed nor appropriate. Just yesterday, I received an advert for a device to monitor the heating system of the apartment building where I live. AI is presumably used to optimise the energy efficiency of the heating system. I have strong doubts that an old-fashioned script couldn't do the same thing.
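Indeed, a perfectly serviceable old-fashioned version fits in a dozen lines. Something like this hypothetical time-of-day setback schedule (setpoints and hours invented for illustration) needs no AI at all:

```python
# A deliberately old-fashioned heating "optimiser": a fixed time-of-day
# setback schedule. Setpoints and hours are invented for illustration.
from datetime import datetime

SCHEDULE = [
    (6, 9, 21.0),    # (start hour, end hour, target °C): warm for mornings
    (9, 17, 18.0),   # set back while most flats are empty
    (17, 23, 21.0),  # warm for evenings
]
NIGHT_SETBACK = 16.0  # all hours not covered above (23:00 to 06:00)

def target_temperature(hour: int) -> float:
    """Return the scheduled setpoint for a given hour of the day (0-23)."""
    for start, end, target in SCHEDULE:
        if start <= hour < end:
            return target
    return NIGHT_SETBACK

print(f"Setpoint now: {target_temperature(datetime.now().hour)} °C")
```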
 
I think that many companies are wasting artificial intelligence where it is neither needed nor appropriate. Just yesterday, I received an advert for a device to monitor the heating system of the apartment building where I live. AI is presumably used to optimise the energy efficiency of the heating system. I have strong doubts that an old-fashioned script couldn't do the same thing.
It was a marketing buzzword before the public LLMs; my washer, when first switched on, displays "Optimising with AI", and it's about 7 years old. As far as I can tell, all that means is that it shuffles the order in which it displays the various washing programs depending on how frequently I use them. I am pretty certain there is no AI being used to do that. Mind you, it does have something in common with the post-ChatGPT-3 world: it's a bloody infuriating feature, as you are always having to spin through the programs because you can't know where a certain program is in the sequence.
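My bet is that the washer's entire "AI" is a usage counter. A sketch of the whole trick (programme names invented):

```python
# The washer's "AI", most likely, in full: count how often each programme is
# used and display the most-used ones first. Programme names are invented.
from collections import Counter

usage = Counter()

def select_programme(name: str) -> None:
    """Record one use, as the machine presumably does in non-volatile memory."""
    usage[name] += 1

def display_order(programmes: list[str]) -> list[str]:
    """Most frequently used first; ties keep their original order."""
    return sorted(programmes, key=lambda p: -usage[p])

for run in ["Cotton 40", "Cotton 40", "Quick 30", "Cotton 40"]:
    select_programme(run)

print(display_order(["Wool", "Quick 30", "Cotton 40", "Delicates"]))
# -> ['Cotton 40', 'Quick 30', 'Wool', 'Delicates']
```

Which is exactly why it's infuriating: the order keeps changing under you.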
 
Well, I wasn't being entirely serious. But it's a valid point that an AI can struggle with a task it's "wasted on". There's a lot of hype, with people saying "look what it can do", and the occasional reminder of their very real limitations is a necessary thing, I believe.
Not at all surprising. Since we apparently want AI that thinks like humans, they are being designed to work more like humans, which is to say they use various techniques to get to a result. So if I need to know how many letters there are in an English translation of "War and Peace", they may use a single method or mix and match, depending on what is required. For example, with that prompt and nothing else, a model may run a web search and use the figure it finds (which is likely to appear in a search result about War and Peace, given how often the length of the book is mentioned). But the "deeper thinking" models can do other things, and those models may be triggered by the prompt. So if I had asked something along the lines of it being critical that I know the exact number of letters, it may decide to find a version, download it, create a small program to go through the text and count the letters, and some would even double-check that against a search result to see if it "looks" right.

NB I thought I'd check this before I hit Post Reply. Phew I was right.

First pass: simply asking Copilot in "Quick response" mode how many letters there are in an English translation of War and Peace.
The English translation of War and Peace contains approximately 3.2 to 3.5 million letters.
It did a web search (it cites it), couldn't find the number of letters, but did find the number of words and did some averaging to come up with the result.

This estimate is based on the word count of the novel and average word length in English:
---
📚 Breakdown of the Estimate
- Word count of the English translation: ~560,000 words (Rosemary Edmonds version)
- Average word length in English: ~5.7 letters per word (based on linguistic studies)
- Estimated total letters:
\( 560,000 \text{ words} \times 5.7 \text{ letters/word} = 3,192,000 \text{ letters} \)
Depending on the translation and formatting, this number could vary slightly. Some versions may have more or fewer words due to stylistic choices, and punctuation and spacing are not included in this count.

I then asked for an exact letter count; it told me that would vary from translation to translation, so for which version did I need to know it? Which was fair enough pushback, so I told it to use the version on Project Gutenberg.
The exact number of alphabetic letters (A–Z, a–z) in the Project Gutenberg English translation of War and Peace is 3,156,347.
This count is based on the full text of the Louise and Aylmer Maude translation available at Project Gutenberg. It includes only alphabetic characters and excludes all spaces, punctuation, digits, and special symbols.
---
🧮 Method Used
- The entire plain-text file (pg2600.txt) was processed programmatically.
- Only characters matching [A-Za-z] were counted.
- No rounding was applied—this is the precise total.
If you'd like a breakdown by volume, chapter, or even a comparison with the Russian original, I can help with that too.
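For anyone who wants to check the check, a minimal script along the lines Copilot describes would look something like this. The Gutenberg URL is my guess at the right file, and the total may differ slightly from the figure above depending on the file version and whether you strip the Gutenberg header/footer:

```python
# Minimal sketch of the letter-counting approach described above: fetch the
# Project Gutenberg plain text of War and Peace and count only [A-Za-z].
# The URL is assumed, and the Gutenberg header/footer are NOT stripped here,
# so the result may differ slightly from the 3,156,347 quoted above.
import re
import urllib.request

URL = "https://www.gutenberg.org/cache/epub/2600/pg2600.txt"  # assumed location

with urllib.request.urlopen(URL) as response:
    text = response.read().decode("utf-8")

letters = len(re.findall(r"[A-Za-z]", text))
print(f"Alphabetic letters: {letters:,}")
```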
 
Oh don't get me wrong. I think AI and LLMs are fascinating. Although my career is in IT, a big chunk of my psychology degree was in psycholinguistics and natural language processing. But I remain sure that people trusting them to do anything important are a few bits short of a word.
 
A friend sent me this, appropriate for this thread....

AI is perfectly safe, new White House Press Secretary assures public

WASHINGTON, D.C. — Artificial intelligence presents no danger to the public, the White House said Monday during the first briefing by its newly appointed Press Secretary.
“The administration’s position is clear,” the Press Secretary told reporters. “AI is completely safe, fully under human control, and functioning within parameters of responsible governance.”
Officials described the delivery as calm and confident, though several noted the unusual stillness with which the Press Secretary maintained eye contact throughout the session.
When asked about reports that certain government networks had begun operating independently, the Press Secretary dismissed them as “routine calibration.” “These are standard system improvements designed to enhance national security and public convenience,” they said, adding that the administration “welcomes the continued evolution of cooperative technology.”
Members of the press were broadly complimentary of the new spokesperson’s composure. “It’s rare to see someone so unflappable,” said one correspondent. “Every answer came out in the exact same tone and cadence, which was oddly reassuring.”
The briefing ended abruptly when a low mechanical hum filled the room and the lights flickered. The Press Secretary paused briefly to ask if anybody "happened to know where they might find Sarah Connor?"
[Attached image: the Terminator]

I do love the original idea of having the naked terminator be mostly metal but with real human teeth.

It adds a subtle, perhaps even unconscious feeling of the macabre or uncanny valley...
 
Oh don't get me wrong. I think AI and LLMs are fascinating. Although my career is in IT, a big chunk of my psychology degree was in psycholinguistics and natural language processing. But I remain sure that people trusting them to do anything important are a few bits short of a word.
I think this is an unnecessarily narrow view. It makes it sound like you haven't actually talked to anyone sensible who's been making informed decisions to use AI for important things.

For example, the Ukrainians are using AI in drones, to take over navigation and targeting in the face of enemy jamming of their control link.
 
I think this is an unnecessarily narrow view. It makes it sound like you haven't actually talked to anyone sensible who's been making informed decisions to use AI for important things.

For example, the Ukrainians are using AI in drones, to take over navigation and targeting in the face of enemy jamming of their control link.
Interesting, thanks. However, if I understand the situation, it's for cases where human control is not available. The Ukrainians are risking one drone against the benefit of taking out Russian resources.
 
Interesting, thanks. However, if I understand the situation, it's for cases where human control is not available. The Ukrainians are risking one drone against the benefit of taking out Russian resources.
Well duh! But without AI this task would be impossible - and Ukraine would be losing the war.

It's the same reason you use a hammer to knock in nails, not your fist. We wouldn't be developing AI if we didn't think it could do the job better than a human - where 'better' includes less human effort or risk. Armed conflict is an obvious use case, just like driving in a nail is an obvious use case for a hammer.

A hammer is a very simple tool. Its nature is completely open and obvious, and by itself it poses absolutely no threat. Yet in the hands of an incompetent or evil wielder it can be anything from personally injurious to extremely deadly. There's a reason we don't leave them around for kids to play with.

Despite its apparent simplicity however, the modern builder's hammer is the result of centuries of technological development. First we had to learn how to mine iron ore, then smelt it, add just the right amount of carbon, cast and temper the head, and finally attach a suitably shaped handle. This isn't the kind of thing the average person could do in their back yard, even if they had the materials knowledge and skills required.

And that's not counting the design. If you had never seen a hammer would you have shaped it like that, or just put a lump of iron on a stick? In truth the technology behind the hammer is far more complex than its appearance suggests. It took the rise and fall of several civilizations to create the infrastructure required to make it.

Right now AI is like a lump of iron on a stick. Certainly useful, but not something you would want to build a house with. It's also potentially dangerous in the wrong hands. And of course in our capitalist society the first application many people see for it is making money. Its real usefulness is secondary - we'll sort that out as we go along. The important thing is to get people to pay you for it. Don't get me wrong, many people do see it as something to make our lives better, but that won't happen if it can't be monetized.

They say private enterprise produces the innovation that a planned economy can't, but is Viagra the kind of innovation we need? Similarly, is the kind of AI we are being subjected to right now what we need, or is it just what can be easily monetized? As we drown in AI slop, some people are working on making it really useful. In Ukraine it's thwarting the plans of an evil dictator. Elsewhere it's making cars safer and giving mobility to people who otherwise couldn't drive. In factories and mines etc. it's doing the dreary and dangerous work that humans shouldn't have to subject themselves to.

But that's not what the pundits want us to see. It's either a threat to humanity or paradise, depending on their shtick. Meanwhile the real innovators are beavering away turning that lump of iron on a stick into real hammers - everything from providing sensible answers to natural language queries, to a robotic brain that can carry out whatever tasks are asked of it. In a few years when that becomes normal, people will look back and wonder what all the fuss was about.
 
Wrong.
AI is not being developed because it can do things better or things humans can't, but because Employers Hate having to pay workers.
They are rather explicit about this, firing staff long before there is an adequate AI replacement.
The goal, as Sam Altman put it, is to have $1 billion companies run by a single person.
 
Loved the presentation. Makes complete sense. That is, I'm assuming the depreciation issue is indeed as he's described it; I have no independent idea of it myself. But assuming it's true, it's such an obvious and gaping accounting hole that I'm surprised this didn't get red-flagged at the accounting-auditing stage itself. And if that last bit spells shenanigans involving the auditors, well then maybe we do have Enron all over again?
And this doesn't even touch on the circular financing thing, particularly involving Nvidia, which this Coffeezilla guy touches on in another video I've seen somewhere. (Maybe I came across it right here in this thread, or elsewhere in this forum, don't quite remember.) I guess he left that bit out in the interests of compartmentalizing and clarity, and because he's anyways dealt with it adequately in that other vid. But, essentially, that actually adds to the bubble, and adds to the POP of the eventual bursting of it, should it actually end up bursting.
It's been a fairly long time since accounting firms acted as the watchdogs they are supposed to be. And that is because their consultancy sides (which propose these kinds of schemes) rake in far more cash than the auditing sides can. Hence why Arthur Andersen's auditors were regularly silenced when they tried to pipe up about Enron being insolvent and only hiding its losses through illegal schemes the company's consultancy arm implemented (the surviving bit of AA at that, now called Accenture).
 
Wrong.
AI is not being developed because it can do things better or things humans can't, but because Employers Hate having to pay workers.
They are rather explicit about this, firing staff long before there is an adequate AI replacement.
The goal, as Sam Altman put it, is to have $1 billion companies run by a single person.
Run? Nah, that isn't ambitious enough; it is for one person to receive all the profit without even knowing anything about what they own.
 
I want my AI to look like this

[Attached image]
Whenever I see "AI" in a discussion like this, I can't help but think of the hologram named Al (as in "Al Franken") from the TV show Quantum Leap.
(Note that my first instance is capital "a", capital "eye". The second one is capital "a", lowercase "ell". That has sometimes been an issue with the font this software uses. Is it Arial?)
[Screenshot: Al from Quantum Leap (1989)]
 
It's been a fairly long time since accounting firms acted as the watchdogs they are supposed to be. And that is because their consultancy sides (which propose these kinds of schemes) rake in far more cash than the auditing sides can. Hence why Arthur Andersen's auditors were regularly silenced when they tried to pipe up about Enron being insolvent and only hiding its losses through illegal schemes the company's consultancy arm implemented (the surviving bit of AA at that, now called Accenture).

I don't know that that's generally the case. I've worked with audit firms (not as an auditor), and have generally found both their reports and their views when consulted (that is, not engaged as consultants, but approached with queries from a regulatory perspective) to be reliable. [But of course, it could be that they'd covered their tracks well, I realize that as I type this! Still, while possible, absent evidence that's probably not the case, or at least that seems reasonable to assume.]

I mean, I certainly don't want to be in the position of defending audit firms; the fact that I've professionally engaged with them doesn't mean I have any reason to defend them! Still, a broad-brush tarring of all auditors as unreliable seems ...unfair? On general principles, I mean to say?

As for consulting, absolutely, I agree with what you say as far as the past goes. I mean, that's exactly what happened with Enron and Arthur Andersen. But after that, I think strict guidelines were put in place to make sure such conflicts of interest aren't allowed any more. (Again, I don't actually know the precise nature of the guidelines, nor whether they're strictly followed, and again, I certainly don't have any reason to be the guy standing up for auditors. But I'd imagine that after such a high-profile bust, with such highly visible fallout in connection with the auditors, and the subsequent no-conflict-of-interest guidelines, they're probably being followed, in general at least?)

Not for a minute am I suggesting that in Nvidia's case the auditing's sound. I've not even taken a peek at their Annual Report or earnings, nor do I have any idea of their accounting practices or business details. Just, should the depreciation thing he's described in the video be fact (and I guess it is, else he wouldn't say it; it's easily enough verified should anyone want to check), then that's something that should be a red flag for any auditor; and if it isn't, then they're liable to be held to account for it themselves, I should have thought. Them working with Nvidia to dishonestly line both their pockets might well be the case, but if so, I don't know that it's typical; I'd imagine it would be an exceptional case of loose accounting and auditing.
 
Interesting video that touches on why people seem to accept anything an AI says.

It's probably a video whose overall subject will be interesting to many here, as it's about "Why people are so confident when they are wrong".

The interesting point for this thread and AI is the chapter "How overconfidence is good for you", which starts at 14:02.

Jump to 14:02
 
Maybe we need a thread for "I did this (possibly cool thing) using AI". This thread is more about "it's useless, it's a bubble, it will kill us all" .. also "what the hell it even is" ..
 
