
Merged Artificial Intelligence

I don't know if this counts as AI or just another crazy deepfake tool, but some scary stuff is possible:



You can change your face, change your voice. I am aware that some of this stuff has been available for a few years now. It's just that the level of verisimilitude keeps improving.

There are even better voice changers, namely the RVC project.
As for face swapping, the same guy has a video about LivePortrait, which is not realtime but insanely good.
https://www.youtube.com/watch?v=uyjSTAOY7yI
 
ChatGPT is ********

ABSTRACT

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as ******** in the sense explored by Frankfurt (On ********, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be ********ters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as ******** is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

"********" here is bovine excrement. The autocensor makes this kind of hard to quote. I once suggested that the term "********", being a technical term used in scientific literature, should be permitted, but my suggestion was rejected.
 
"********" here is bovine excrement. The autocensor makes this kind of hard to quote. I once suggested that the term "********", being a technical term used in scientific literature, should be permitted, but my suggestion was rejected.

I find that substituting the abbreviation BS works pretty well.
 
I think we can now safely say that AI is a bubble: there is no path forward to scale its capabilities. The data available to train it has been exhausted, and AI is not able to generate more itself: the myth of an AI making a smarter AI is dead for now.

China might have an edge in that it potentially has far more data, with no privacy restrictions, to train its systems on; but then, the average Chinese consumer/voter is hardly representative of the average Westerner.

We are approaching a dead end at high speed, and plenty of people are looking for a way to cash out.
 

Nonsense. There is certainly a bubble. It might burst (no, this crash wasn't it).
But AI is just getting started. A lot of that money went into research, or into hardware, which supports research greatly. Tons of smart people have come into AI research; five years ago it was a fringe academic topic. The speed of advancement is simply incomparable.
Monetization of LLMs has reached its peak. But that's not really AI.
 
ChatGPT is ********

Spare a thought for the German reproductive strategy researchers presenting a paper here in Australia, about the evolutionary success of 'sneaky *******'.

i.e. copulation with males who were not the 'alpha' in group/troupe/herd animals.

It turns out that it is a very useful genetic strategy, and it may explain the behaviour of many humans.

But the name though...

I still chuckle every time I think of it.
 
You can, for example, prompt it to contemplate God, and with the maximum response length it might spew a few pages at once. It will usually stop anyway, but then you can just say "tell me more", and it will continue to expand on its previous response.

But if you don't, it won't, and will in fact never think about those concepts again unless you prompt it to - because it can't.
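
For what it's worth, the "tell me more" pattern is trivial to script. Here is a minimal sketch using the OpenAI Python SDK; the model name, prompts, and loop count are illustrative placeholders of mine, and it assumes an API key is set in the environment:

```python
# Minimal sketch of the "tell me more" pattern (OpenAI Python SDK).
# Assumes OPENAI_API_KEY is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Contemplate the concept of God."}]

for _ in range(3):  # ask for three continuations
    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # any chat model would do
        messages=messages,
        max_tokens=1024,       # cap the length of each chunk
    )
    text = reply.choices[0].message.content
    print(text)
    # The model continues only because we append another prompt;
    # nothing at all happens between these calls.
    messages.append({"role": "assistant", "content": text})
    messages.append({"role": "user", "content": "Tell me more."})
```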
 
But if you don't, it won't, and will in fact never think about those concepts again unless you prompt it to - because it can't.

Indeed. And it's still only thinking about things that it has been prompted to think about. It isn't using its spare time to think about other things that it wants to think about.
 
If you could pause a human brain, it wouldn't either. It's not that it can't. It's that we don't want it to. God forbid AI thinking without monetization!

Or, even more analogously: wipe a human's memory and then start it again with a similar input, and you are going to get very similar results each time.
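
That determinism is easy to demonstrate, incidentally. A small sketch using Hugging Face's transformers library (the gpt2 checkpoint and the prompt are just illustrative choices of mine): with sampling turned off, a frozen model is a pure function of its input.

```python
# Sketch: "same state + same input => same output".
# With greedy decoding (no sampling) a frozen model is deterministic.
# Assumes the `transformers` package; `gpt2` is an illustrative choice.
from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")
prompt = "After waking with no memory, the first thing I did was"

run1 = generate(prompt, do_sample=False, max_new_tokens=30)[0]["generated_text"]
run2 = generate(prompt, do_sample=False, max_new_tokens=30)[0]["generated_text"]

assert run1 == run2  # identical on every run (barring floating-point quirks)
print(run1)
```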
 
If you could pause a human brain, it wouldn't either.

But you can't, and that is not a superficial distinction but in fact a definitive one. If you don't prompt a seven-year-old human brain, it will wander away to go climb a tree and pretend it's fighting a dragon. If you don't prompt GPT software, it doesn't even perform some rest or idle process; it completely stops doing anything whatsoever. Even characterizing it as being "paused" is fundamentally misleading.
 
We are prompted by our external environment all the time, as well as our internal environment, and we do have a pause state - sleeping.

Again I should say that I don't believe the LLM AIs are sentient or that they work the same way as human brains do, but they do provide an insight into how from non-sentient matter sentience can arise.
 
But you can't, and that is not a superficial distinction but in fact a definitive one. If you don't prompt a seven-year-old human brain, it will wander away to go climb a tree and pretend it's fighting a dragon. If you don't prompt GPT software, it doesn't even perform some rest or idle process; it completely stops doing anything whatsoever. Even characterizing it as being "paused" is fundamentally misleading.

But we can easily loop GPT software... and it will wander away as well. The lack of the loop is the difference; the way it reacts to inputs to create its outputs is not.
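
Something like this, say; a sketch with the OpenAI Python SDK, where the seed prompt and step count are my own illustrative choices:

```python
# Minimal sketch of "looping" a model on its own output so it keeps
# going with no human in the loop. Assumes OPENAI_API_KEY is set;
# the model name and seed prompt are illustrative.
from openai import OpenAI

client = OpenAI()
thought = "Think about whatever you like, then end with a question to yourself."

for step in range(5):  # an arbitrary number of unattended steps
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": thought}],
        max_tokens=256,
    )
    thought = reply.choices[0].message.content
    print(f"--- step {step} ---\n{thought}\n")
    # Each output becomes the next input: the loop, not the model,
    # supplies the spontaneity.
```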
 
We are prompted by our external environment all the time, as well as our internal environment, and we do have a pause state - sleeping.

Prompting is not simply an analogue of organic or spontaneous reaction to random environmental stimulus, it's a set of intentional instructions. An LLM does not "react" to a prompt, it obeys it - carries out the instructions. The output may be different depending on the prompt, but the process used to produce it is not; it can't become annoyed or pleased or surprised by a prompt, it can only dispassionately execute it.

Sleeping in humans is absolutely not a pause state; instrumentation has long since demonstrated continuous activity during sleep. And dreams exist, after all.

Again I should say that I don't believe the LLM AIs are sentient or that they work the same way as human brains do, but they do provide an insight into how from non-sentient matter sentience can arise.

I have to disagree; I think if we're attempting to explore how sentience arises, an LLM is going about the problem backwards. Humans as a species developed sentience before they invented a complex rules-based spoken language; arguably, their ability to accomplish that feat in the first place hinged upon it. And humans as individuals are sentient before they learn how to speak - certainly before they learn how to speak properly or well.

Yet the LLM approach posits that if something non-sentient can be engineered to mimic an already-existing language well enough, this will somehow cause sentience to spontaneously arise. There's no logical reason to expect that might happen; frankly, it seems to rely on a kind of appeal to equivocation, where the implied argument is that if the non-sentient program can learn to speak well enough, then it won't even matter whether or how it has attained sentience, because nobody can tell the difference anyway - you wouldn't be able to, like, prove that it's not sentient, man... a somewhat pseudoscientific ontology.
 
