
Merged Artificial Intelligence

Indeed.

AI is very much in the same spot as VR, in that people like to play with it but don't want to pay very much for it, nor would they trust their lives to it if they have the choice: most people don't want autonomous terminators.
There's a huge range of practical applications, between basic pattern recognizers and fully autonomous terminators.

LLMs are already making it easier to draft documents and write basic code. This is only going to make it easier to employ semi-skilled and unskilled knowledge workers in areas like translation, documentation, and software development. This in turn means needing to hire fewer skilled knowledge workers to review and polish the output.

Meanwhile the military has a significant and growing need to automatically filter EM signal from noise in the face of heavy jamming. It's reported that the Ukrainians are already using AI to train FPV drone targeting algorithms, to increase the chance of a kill if the control signal gets jammed on final approach. Most people don't want autonomous terminators. But they do want tools that can get the most important jobs done, guaranteed.

I think there's a lot of serious and productive AI investment happening quietly, at the instigation of the DOD and other government agencies. I suppose there might be an LLM bubble, but I doubt it. Such tools have already broken into the porn imagery and hentai translation spaces.
 
I am seeing more and more examples of comically bad LLM failures that highlight the problems inherent in mimicking without understanding.

From a new article on Engadget, If AI is going to take over the world, why can't it solve the Spelling Bee?

I had spent a large chunk of a Saturday evening trying to shape the letters G, Y, A, L, P, O and N into as many words as possible.
...
I fired up ChatGPT, told it I was trying to win the Spelling Bee, gave it my letters and laid out the rules. “Give me a list of every word that can be formed with them,” I typed. “You’re allowed to repeat letters, but each word must contain 'N'.”

When I hit enter, this is what ChatGPT spat back:

Here is a list of words formed with the letters "G Y A L P O N" that must contain the letter "N":
GLNPAYO
YPNL
PGNOYL
ONAPYLG
LGNAPYO
ANLY
NAYGOPL
GYPON
AOGPLNY
PNAOGLY
This is just a sample of the list due to the extensive number of permutations possible.


“The words need to be in the dictionary,” I typed back. “Sure,” came the response. “I’ll provide a list of valid English words based on the given letters.” ANGLY, GALON, LANG.

The other AI chatbots tested on this problem also failed, each in its own way. All presented horribly wrong answers with great confidence.
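For contrast, the Spelling Bee task is trivial for a deterministic program: filter a word list for words built only from the allowed letters that also contain the mandatory one. A minimal Python sketch, using a tiny hardcoded sample list as a stand-in for a real dictionary file (the sample words are my own assumption, not from the puzzle):

```python
# Spelling Bee rules: every letter of the word must come from the allowed
# set, letters may repeat, and the word must contain the mandatory letter.
ALLOWED = set("gyalpon")
MANDATORY = "n"

def solves_bee(word: str) -> bool:
    """True if `word` uses only the allowed letters and contains 'n'."""
    w = word.lower()
    return MANDATORY in w and set(w) <= ALLOWED

# Tiny stand-in list; a real solver would read a dictionary file
# such as /usr/share/dict/words.
SAMPLE_WORDS = ["anyway", "gallon", "nylon", "pony", "apply", "along", "plan", "loop"]

valid = [w for w in SAMPLE_WORDS if solves_bee(w)]
print(valid)  # prints ['gallon', 'nylon', 'pony', 'along', 'plan']
```

A few lines of set arithmetic solve, exactly and every time, the problem the chatbots confidently botched.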


Another example is the increasing number of book covers generated using AI. These are produced by scammy author/publishers looking to make a quick buck on content produced using minimal effort. Here is one example available for purchase from Amazon to read on Kindle.

[IMGw=500]https://m.media-amazon.com/images/I/71WsSuI0D7L._SY466_.jpg[/IMGw]
 
Sorry, we couldn't find that page. But here's a cute dog instead.

It works for me. Perhaps it is an Amazon region thing. (Actually, it appears that Amazon has taken the original book down. Good for them.)

In any case here is the original cover along with another example of an AI generated cover from another book available from Amazon. (Hopefully copying the images to another site will work.)

[IMGw=500]https://i.imgur.com/rYEmQ0S.jpg[/IMGw]

[IMGw=500]https://i.imgur.com/CAkxQYQ.jpg[/IMGw]
 
He does seem to have five fingers, though, doesn't he? At least on that one hand that we can see. (The eyes are weird, and kind of mismatched, but I guess that detail probably fits in with the character.)
 
The latest generative image AIs have pretty much fixed hands and fingers, rendering them at least as well as a decent human artist (even many renowned artists have struggled with hands and feet), though they still mess up things like limbs and paws on animals. Even text rendering has gotten better. But they will still make weird "mistakes" that come, again, from not knowing/understanding what they are rendering.

LLMs have highlighted a known human bias, which is to mistake "fluency" for understanding and accuracy. We see it here, where some members who can write very good prose are given more credit because it is well written, even when what they write is factually wrong or the reasoning is a load of crap. It's not only AIs that hallucinate.
 
One area that does concern me is when organised scammers get AI good enough to replace most of their staff. At the moment they need call centre setups but, just as with legitimate call centres, once most of those people can be replaced by AI it will allow organised scammers to run call centres of thousands upon thousands of "callers" working around the clock.
 
The latest generative image AIs have pretty much fixed hands and fingers, rendering them at least as well as a decent human artist (even many renowned artists have struggled with hands and feet), though they still mess up things like limbs and paws on animals. Even text rendering has gotten better. But they will still make weird "mistakes" that come, again, from not knowing/understanding what they are rendering.

…snip…

Hmm I may have to retract my post: https://www.creativebloq.com/ai/ai-art/stable-diffusion-3-is-a-lynchian-nightmare-fuel-generator

Stable Diffusion 3 is a Lynchian nightmare fuel generator

Horror: https://www.reddit.com/r/StableDiffusion/comments/1df5ic5/im_sorry/
 
I don't think anyone's mentioned this scenario. Is it not conceivable AI interfaces could achieve a near human level of interaction but with consistently more patience, kindness and at least apparent empathy, to the point where people start modifying their own behavior, sort of by example?

There was a TV show -- Humans, maybe? -- where a subculture of young people were mimicking the idiosyncratic behavior of robots. I'm picturing something similar, but instead of copying robotically stunted emotions, we'll be copying a spectrum of unusually humane ones.

Wishful thinking? Probably.
 
I don't think anyone's mentioned this scenario. Is it not conceivable AI interfaces could achieve a near human level of interaction but with consistently more patience, kindness and at least apparent empathy, to the point where people start modifying their own behavior, sort of by example?

There was a TV show -- Humans, maybe? -- where a subculture of young people were mimicking the idiosyncratic behavior of robots. I'm picturing something similar, but instead of copying robotically stunted emotions, we'll be copying a spectrum of unusually humane ones.

Wishful thinking? Probably.

Don’t think so; our social behaviour is at least partly learned mimicry: fake it until you can make it in reality. We know this is why abused children can have problems well into adulthood; indeed, it can be intergenerational. I still think one of the products that will do really well is a digital assistant we can chat with for companionship (and we are very close to this). I’ve said before how this could alleviate a major problem today, which is loneliness. And for those who think there is something wrong with non-sentient companions: you are going to have to prise pets out of a lot of dead hands!
 
Don’t think so; our social behaviour is at least partly learned mimicry: fake it until you can make it in reality. We know this is why abused children can have problems well into adulthood; indeed, it can be intergenerational. I still think one of the products that will do really well is a digital assistant we can chat with for companionship (and we are very close to this). I’ve said before how this could alleviate a major problem today, which is loneliness. And for those who think there is something wrong with non-sentient companions: you are going to have to prise pets out of a lot of dead hands!

Right, so if compassionate robots were to become ubiquitous, isn't it possible people might start mimicking them?

(I know I'm misreading you, just don't know where the disconnect is.)
 
One area that does concern me is when organised scammers get AI good enough to replace most of their staff. At the moment they need call centre setups but, just as with legitimate call centres, once most of those people can be replaced by AI it will allow organised scammers to run call centres of thousands upon thousands of "callers" working around the clock.

Scammers can already make a massive number of calls:
a. They use a recorded message, and the target must select an option to talk to a human.
b. Many people around the world can be employed for very little money to make scam calls.

Future AI scammers might jam our phones with so many calls that telephones become useless.
 
The latest generative image AIs have pretty much fixed hands and fingers, rendering them at least as well as a decent human artist (even many renowned artists have struggled with hands and feet), though they still mess up things like limbs and paws on animals. Even text rendering has gotten better. But they will still make weird "mistakes" that come, again, from not knowing/understanding what they are rendering.

Oh, so they've now fixed the weird-fingers thing, have they? Didn't know that. ...In a weird kind of way, it's a pity, kind of, I mean it's like lobotomizing Picasso so he now starts producing perfect oil painting landscapes!

LLMs have highlighted a known human bias, which is to mistake "fluency" for understanding and accuracy. We see it here, where some members who can write very good prose are given more credit because it is well written, even when what they write is factually wrong or the reasoning is a load of crap. It's not only AIs that hallucinate.

Haha, how true! Agreed, we do have that bias, probably all of us.
 
https://www.bbc.co.uk/news/articles/c722gne7qngo

McDonald's is removing artificial intelligence (AI) powered ordering technology from its drive-through restaurants in the US, after customers shared its comical mishaps online.

A trial of the system, which was developed by IBM and uses voice recognition software to process orders, was announced in 2019.

It has not proved entirely reliable, however, resulting in viral videos of bizarre misinterpreted orders ranging from bacon-topped ice cream to hundreds of dollars worth of chicken nuggets.

McDonald's told franchisees it would remove the tech from the more than 100 restaurants it has been testing it in by the end of July, as first reported by trade publication Restaurant Business.

"After thoughtful review, McDonald’s has decided to end our current global partnership with IBM on AOT [Automated Order Taking] beyond this year," the restaurant chain said in a statement.

However, it added it remained confident the tech would still be "part of its restaurants’ future."
 
Way back in the day, I worked at a corporate-owned McDonald's. Because it wasn't franchised, corporate often used it to test new equipment and methods. Staging some ingredients in little steam cabinets, so that burgers could be assembled without waiting for patties to come off the grill, was one idea that I believe has since become standard practice.

Another was an automated soda dispenser that was pretty neat - when it worked.

It doesn't surprise me at all that McD's is still in the habit of prototyping stuff in a few stores, before pulling it out and reworking it.
 
