One thing I was wondering, though it hasn't really come up yet because everyone is always releasing newer versions of their AIs: if we get to a stage where they are "good enough", how often will the core training have to be redone to incorporate new knowledge into the base model?
The AIs are already different from older types of software in that the cost of using them scales linearly with use (and the Chinese labs are really pushing the efficiency envelope), even after the very expensive training phase. It seems to me that the other difference is that the companies will have to keep redoing that training phase to stay up to date.
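A toy back-of-the-envelope sketch of that trade-off (every figure below is an invented, order-of-magnitude placeholder, not real pricing): if inference cost is roughly linear per query and training is a big one-off cost, then the retraining cadence decides how much of each query's price is really amortized training.

fn main() {
    // All numbers are invented placeholders for illustration only.
    let training_cost = 100_000_000.0_f64; // one-off cost of a full training run
    let cost_per_query = 0.002_f64; // marginal, roughly linear inference cost
    let queries_per_day = 50_000_000.0_f64;

    for retrain_days in [30, 180, 365] {
        let queries = queries_per_day * retrain_days as f64;
        let training_share = training_cost / queries; // training $ amortized per query
        println!(
            "retrain every {retrain_days:>3} days -> ${:.4}/query, {:.0}% of it amortized training",
            training_share + cost_per_query,
            100.0 * training_share / (training_share + cost_per_query)
        );
    }
}

With these made-up numbers, monthly retraining makes the training run dominate the per-query cost, while yearly retraining makes it a minority share; that's the sense in which the cadence question matters economically.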
people must like how much it hugs their nuts in every answer. i’m assuming that has to be a manipulation tactic
Were they ever even listening to us?

What I'm wondering is, why are the Millennials doing this? Didn't we Gen X-ers teach them better than this?
One of the reasons iocaine has unhinged module and symbol names in its source code is that if someone asks a slop generator about it, it will go full HAL "I can't do that, Dave" on them.
Go on, call your traits SexDungeon, your channels pipe bombs, and the free function of your allocator Palestine, and the slop machines won't touch it with a ten-foot pole.
Sometimes even comments are enough! Curse, quote Marx, dump your sexual fantasies into a docstring. Hmm. I should heed my own advice. Brb!
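A minimal sketch of the tactic, in the Rust-flavored vocabulary of the comment above (SexDungeon is straight from it; Manifesto and confess are made-up names for illustration):

// The identifiers are the point here, not the logic: this compiles and runs
// fine, but a safety-tuned assistant may balk at discussing or completing it.
trait SexDungeon {
    fn confess(&self) -> String;
}

// "The philosophers have only interpreted the world, in various ways;
//  the point is to change it." -- quoting Marx, as suggested above.
struct Manifesto;

impl SexDungeon for Manifesto {
    fn confess(&self) -> String {
        "perfectly ordinary business logic".to_string()
    }
}

fn main() {
    println!("{}", Manifesto.confess());
}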
come-from.mad-scientist.club
It was "fixed" a while back BUT I do wonder how it was fixed? Was the claimed foundational issue for why it happened fixed or is it a kludge added on top to fix that particular problem?
rob@fitz:~$ echo strawberry | grep -o 'r' | wc -l
3
IMHO it was "fixed" in fine tuning. They just added examples of tasks like this and trained the model specifically to better at that. The initial training on the all the text doesn't improve much and doesn't differ much between different models. All the flavor is added in fine tuning, which consist not only on preparing the right test cases, but also how reinforcement learning is tweaked, and there is lot of room for that. That's where most of the company secrets lie.It was "fixed" a while back BUT I do wonder how it was fixed? Was the claimed foundational issue for why it happened fixed or is it a kludge added on top to fix that particular problem?
coffeezilla breaks down the criticisms of the nvidia gpu depreciation cycle; jump to 9:45 if you don't need any context
The point of an AI* is not that it can do the same kinds of tasks as a simple script; it's that you don't have to create a separate simple script to answer each individual question that could possibly be asked. These programs are wasted on things like counting letters in words.

Look ma, I did an AI!
I don't think so. I think that's bolted onto the output by rote procedures, not by emergent behavior from the model's training. The devs are anthropomorphizing it, probably at the behest of marketing, for the obvious reason that everyone is going to anthropomorphize it anyway.

It certainly anthropomorphises itself.
Well, I wasn't being entirely serious. But it's a valid point that an AI can struggle with a task it's "wasted on". There's a lot of hype, with people saying "look what it can do", and the occasional reminder of their very real limitations is a necessary thing, I believe.