
Artificial Intelligence

Don't think so; our social behaviour is at least partly learned mimic behaviour: fake it until you can make it in reality. We know this is why abused children can have problems well into adulthood; indeed, it can be intergenerational. I still think one of the products that will do really well is a digital assistant we can chat with for companionship (and we are very close to this). I've said before how this could alleviate a major problem today, which is loneliness, and for those who think there is something wrong with non-sentient companions: you are going to have to prise pets out of a lot of dead hands!

Pets are a fairly benign form of companionship, though. You can anthropomorphize your dog or cat (etc.) all you want, but they will always remain nonverbal, and their needs and wants remain simple and few. Pets love us more or less unconditionally, but pragmatically that comes from the fact that they can't express conditions and depend on us for food and care.

A chatbot that can speak fluently and pretend to have opinions is a whole other kettle of fish. People are going to get a whole lot closer to, and have higher expectations of, something they can have a conversation with. You can have a cute imaginary conversation with your dog where you put all the words into its mouth, but a chatbot is nothing but words, and you have limited control over them. What happens when your "companion" breaks, or starts to say things you disagree with or don't like? What happens when people anthropomorphize their chatbots so much that they start making life decisions based on their "companion's" advice, and those decisions are the kind that have serious impacts on other real people?
 
I've always liked surrealistic art, so I've enjoyed exploring some of the AI-generated video on YouTube. Here's something I stumbled upon that was uploaded only three days ago and that I think is pretty amazing. It's more in the fantasy realm, however.

It's a fake Tolkien movie trailer thrown together by a single individual using AI tools. I think the basic imagery, animation, and voices are all AI-generated, but not the music; the video maker said he licensed someone else's music to use royalty-free.

"The Children of Húrin - AI Teaser Trailer"



Here's the direct link:

https://www.youtube.com/watch?v=9sqOEJfa3DA

Full movies made with AI may still be a ways off, but it looks like we are heading there. The Lord of the Rings movie trilogy cost $281 million at the time, which would be equivalent to over half a billion dollars today, and now similar AI-generated imagery can be made for roughly a millionth of that.

For all those would-be movie directors out there who are poor and who aren't people persons, this may eventually be their answer.
 
Here's one for you:

AI-generated jokes funnier than those created by humans, study finds

The examples provided did not seem to me to support the headline.

The devil's in the details. According to the article:

Nearly 70% of the participants rated ChatGPT jokes as funnier than those written by regular people.

Point #1: It's comparing the jokes to those made up on the spot by "regular people", not jokes crafted by professional comedians or comedy writers.

To conduct the study, both ChatGPT and humans were asked to write jokes based on a variety of prompts.

Another point in favor of the chatbot. Generating content based on prompts is what they do. I agree that the examples from the article don't seem very funny, but then neither do the example human responses.
 

https://www.youtube.com/watch?v=UShsgCOzER4
Video about how AI is being used in search results. Just one example: someone typed in "i am depressed" and received a suggestion to jump off the Golden Gate Bridge. (I tried this myself and I didn't get that answer, so presumably it has been fixed. Or maybe someone made it up.)

I'm very dubious about these types of "problems" being caused by the new chatty AIs; didn't such results appear before the "AI" was added? In the UK, because of a few high-profile child suicides over the last few years, we learned that kids searching for things like symptoms of depression were finding sites on how to kill themselves and being led down some dark paths.
 
Maybe not caused, but recycled or regurgitated. Stuff that was in its training data, including, apparently, sites like Reddit, which are bastions of "free speech".
 
I've been thinking about LLM-style generative AIs in a way I haven't heard other people describe, but which I think is useful: think of them as a very lossy but very efficient compression technique. Feed one a huge body of work, and it essentially compresses that work down, kind of like JPEG compresses an image. It doesn't directly replicate that information; it tries to capture the important aspects so that it can reproduce an approximation of it.

But it's not a single image or a single piece of text you're compressing; it's a huge collection of such things. When you enter a prompt, you're essentially asking it to decompress just a section of that data, but that section doesn't usually correspond exactly to any one piece of training data. Usually it intersects multiple input data objects, so you get a mishmash in the output. Essentially you've got a VERY high-dimensional space, and you're taking slices through it at angles that cut across multiple training objects.

Because it's efficient compression, you will often get a response that works very well. But it's still lossy compression, so there are also times it produces output with noticeable compression artifacts, where the response is clearly broken. That can be the wrong number of fingers on a hand or the advice to add glue to pizza.
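To make the compression analogy concrete, here is a toy sketch in Python. It is purely illustrative: the training sentences, the generate() function, and the word-level bigram table are all made up for the example, and this is nothing like how a real LLM works internally. It only shows the general idea that a statistical model stores an approximation of its training data, and that a prompt pulls out a "slice" that can blend several originals together.

Code:
import random
from collections import defaultdict

# Tiny "training corpus".
training_data = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# "Compress": keep only next-word counts, not the sentences themselves.
model = defaultdict(list)
for sentence in training_data:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)

# "Decompress": a prompt picks a starting point; sampling then reconstructs
# something that resembles, but rarely equals, any single training sentence.
def generate(prompt_word, length=6):
    output = [prompt_word]
    for _ in range(length):
        choices = model.get(output[-1])
        if not choices:  # nothing learned for this word: a visible "artifact"
            break
        output.append(random.choice(choices))
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug", a blend of two inputs

Sampling from "the" can give back a sentence that never appears in the training data but is stitched together from pieces that do, which is the "slice across multiple training objects" idea; the places where the model has learned nothing are where the broken output shows up.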
 
That's an interesting idea. But it could also just be that the data it was trained on contained advice written by Internet trolls, satirists, people with unconventional ideas, conspiracy theorists, and so on. The suggestion to eat rocks, for example, seems to have originated from The Onion:

https://www.theonion.com/geologists-recommend-eating-at-least-one-small-rock-per-1846655112

The suggestion to put glue on a pizza could be from some Reddit user or who knows what.
 
LLMs work when you ask the same question as everyone else, in the same way as everyone else.
In short: don't ask an LLM a question you don't already know the answer to (sort of).
 
LLMs work when you ask the same question as everyone else, in the same way as everyone else.
In short: don't ask an LLM a question you don't already know the answer to (sort of).

Half and half.

When LLMs are used for what they are good at, they are very good. It's when they are shoehorned into inappropriate uses that they screw up, and that shoehorning is all about the current business fashion trends.
 
Half and half.

When LLMs are used for what they are good at, they are very good. It's when they are shoehorned into inappropriate uses that they screw up, and that shoehorning is all about the current business fashion trends.

They can be good for giving you a start at writing what you need to write - which is predicated on you being able to tell whether the program did a good job or not.
So yeah, you can't ask them to do something that you can't quality-control yourself.
 
That's an interesting idea. But it could also just be that the data it was trained on contained advice written by Internet trolls, satirists, people with unconventional ideas, conspiracy theorists, and so on.

That explains some problems, but not others. For example, if you ask ChatGPT certain math problems that are pretty easy for people, it can fail. Ask it: if it takes one hour to dry one towel on a clothes line, how long does it take to dry three towels on a clothes line? It may answer three hours, even though the intended answer is still one hour, since the towels dry in parallel. Or ask it how to measure 50 ml of bourbon if you have a 10 liter bucket, a 5 liter bucket, and a 50 ml shot glass, and it may give you an answer involving pouring quantities in and out of the buckets rather than ignoring the buckets and just using the shot glass.

This isn't a result of having been fed bad answers to that problem. It's a result of the fact that ChatGPT isn't actually answering questions, not in the way that human brains do. It's effectively extracting compressed data, and using your prompt to tell it which data to decompress.
 
That explains some problems, but not others. For example, if you ask ChatGPT certain math problems that are pretty easy for people, it can fail. Ask it: if it takes one hour to dry one towel on a clothes line, how long does it take to dry three towels on a clothes line? It may answer three hours, even though the intended answer is still one hour, since the towels dry in parallel. Or ask it how to measure 50 ml of bourbon if you have a 10 liter bucket, a 5 liter bucket, and a 50 ml shot glass, and it may give you an answer involving pouring quantities in and out of the buckets rather than ignoring the buckets and just using the shot glass.


Copilot:
"You
Sent by you:
it if it takes one hour to dry one towel on a clothes line, how long does it take to dry three hours on a clothes line, it may answer three hours. Or ask it how to measure 50 ml of bourbon if you have a 10 liter bucket, a 5 liter bucket, and a 50 ml shot glass
Copilot

Sent by Copilot:
I believe there might be a slight misunderstanding in the first part of your question. If it takes one hour to dry one towel on a clothes line, then it would take three hours to dry three towels, assuming the drying rate is constant and the towels are similar in size and material.

As for the second part of your question, if you need to measure 50 ml of bourbon and you have a 50 ml shot glass, you can simply fill the shot glass once. The size of the buckets doesn’t matter in this case as the shot glass already provides the exact measurement you need. So, just fill up the 50 ml shot glass with bourbon and you’re good to go! 😊"
This isn't a result of having been fed bad answers to that problem. It's a result of the fact that ChatGPT isn't actually answering questions, not in the way that human brains do. It's effectively extracting compressed data, and using your prompt to tell it which data to decompress.

I think that has to be challenged. We don't know; it may be answering questions in a way similar to a human brain (at a certain level of abstraction). Or perhaps it's because it appears "fluent", a quality we usually associate with intelligence; however, we have members here who are fluent and verbose, yet their actual content demonstrates they are anything but "intelligent" and that their content didn't arise from "reasoning".

I think the LLMs have shone a light on some of our assumptions about what we mean when we talk about human intelligence. I do think many folks' "thinking" is nothing more than being able to create sentences that make sense by using the previous set of words to predict the next one; I would point to some members in the "Science..." section of the forum for examples of such "intelligence".
 
I think that has to be challenged. We don't know; it may be answering questions in a way similar to a human brain (at a certain level of abstraction).

No. We know with absolute certainty that it doesn't answer questions the way humans answer questions. This isn't ambiguous, and it has nothing to do with your digs at members who are stupid or illogical.

I do think many folks' "thinking" is nothing more than being able to create sentences that make sense by using the previous set of words to predict the next one; I would point to some members in the "Science..." section of the forum for examples of such "intelligence".

No. The problem with such members isn't that they operate like LLMs. They absolutely do not, despite the superficial resemblance of their output. One of the big problems with LLMs is that they do not have fact models; they only have language models. But the sort of posters you refer to do actually have fact models. The problem is that their fact models are wrong, not that they don't exist.
 
Here's one for you:

AI-generated jokes funnier than those created by humans, study finds

The examples provided did not seem to me to support the headline.

Counter-argument:

ChatGPT only knows 25 jokes and can't write new ones

One important difference is that in the study you linked, the researchers seem to have prompted ChatGPT with specific contexts to create jokes from. When asked simply to come up with a joke on its own, without any contextual aid, it appears ChatGPT's creative ability is far more limited.

In retrospect, this seems like expected LLM behavior.
 