That sounds like using the wrong AI.
Should expand on that: It sounds like they used AI trained on data from The Fat Duck taster menu.
Don’t think so. Our social behaviour is at least partly learned mimic behaviour: fake it until you can make it in reality. We know this is why abused children can have problems well into adulthood; indeed, it can be intergenerational. I still think one of the products that will do really well is a digital assistant we can chat with for companionship (and we are very close to this). I’ve said before how this could alleviate a major problem today, which is loneliness, and for those who think there is something wrong with non-sentient companions: you are going to have to prise pets out of a lot of dead hands!
Here's one for you:
AI-generated jokes funnier than those created by humans, study finds
The examples provided did not seem to me to support the headline.
Nearly 70% of the participants rated ChatGPT jokes as funnier than those written by regular people.
To conduct the study, both ChatGPT and humans were asked to write jokes based on a variety of prompts.
https://www.youtube.com/watch?v=UShsgCOzER4
Video about how AI is being used in search results. Just one example: Someone typed in "i am depressed" and received a suggestion to jump off the Golden Gate Bridge. (I tried this myself and I didn't get that answer so presumably it has been fixed. Or maybe someone made it up.)
LLMs work when you ask the same question as everyone else, in the same way as everyone else.
In short: don't ask an LLM a question you don't already know the answer to (sort of).
Half and half.
LLMs, when used for what they are good at, are very good. It's when they are shoehorned into inappropriate uses that they screw up, and that shoehorning is all about the current business fashion trends.
That's an interesting idea. But it could also just be that the data it was trained on contained advice written by Internet trolls, satirists, people with unconventional ideas, conspiracy theorists, and so on.
That explains some problems, but not others. For example, ChatGPT can fail at certain math problems that are pretty easy for people. Ask it: if it takes one hour to dry one towel on a clothes line, how long does it take to dry three towels on a clothes line? It may answer three hours, even though the towels dry simultaneously, so the answer is still one hour. Or ask it how to measure 50 ml of bourbon if you have a 10 liter bucket, a 5 liter bucket, and a 50 ml shot glass, and it may give you an answer involving pouring quantities in and out of the buckets rather than ignoring the buckets and just using the shot glass.
"You
Sent by you:
it if it takes one hour to dry one towel on a clothes line, how long does it take to dry three hours on a clothes line, it may answer three hours. Or ask it how to measure 50 ml of bourbon if you have a 10 liter bucket, a 5 liter bucket, and a 50 ml shot glass
Sent by Copilot:
I believe there might be a slight misunderstanding in the first part of your question. If it takes one hour to dry one towel on a clothes line, then it would take three hours to dry three towels, assuming the drying rate is constant and the towels are similar in size and material.
As for the second part of your question, if you need to measure 50 ml of bourbon and you have a 50 ml shot glass, you can simply fill the shot glass once. The size of the buckets doesn’t matter in this case as the shot glass already provides the exact measurement you need. So, just fill up the 50 ml shot glass with bourbon and you’re good to go!"
This isn't a result of having been fed bad answers to that problem. It's a result of the fact that ChatGPT isn't actually answering questions, not in the way that human brains do. It's effectively extracting compressed data, and using your prompt to tell it which data to decompress.
I think that has to be challenged - we don't know; it may be answering questions in a way similar to a human brain (at a certain level of abstraction).
I do think many folks' "thinking" is nothing more than being able to create sentences that make sense by using the previous set of words to predict the next one. I would point to some members in the "Science..." section of the forum for examples of such "intelligence".
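For what it's worth, the "predict the next word" mechanism (or "decompress the training data", if you prefer the framing above) is easy to demonstrate at a toy scale. Below is a minimal sketch in Python of a bigram Markov chain text generator. It's a drastic simplification of what an LLM does (real models use neural networks over tokens, not word-count tables, and the tiny corpus here is made up for illustration), but it shows how plausible-sounding sentences can emerge purely from "given the previous word, what usually comes next?".

```python
import random
from collections import defaultdict

# Toy "next word predictor": a bigram Markov chain.
# A drastic simplification of an LLM, but the same basic idea:
# given the previous word, sample a likely next word.

corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog and the dog saw the cat"
).split()

# Count which words follow which; duplicates in the list mean
# random.choice() samples in proportion to observed frequency.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=10):
    """Generate text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # dead end: this word was never followed by anything
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
# e.g. "the dog sat on the rug and the cat saw the"
```

The output is often grammatical without the program "knowing" anything, which is the point being argued either way above: fluency alone doesn't settle whether anything brain-like is going on.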