
Merged Artificial Intelligence

No backup?

They get what they deserve.

The linked article says nothing about backups. However, it links to an article at Tom's Hardware, and its headline reads:

AI coding platform goes rogue during code freeze and deletes entire company database — Replit CEO apologizes after AI engine says it 'made a catastrophic error in judgment' and 'destroyed all production data'

Just who is running IT at Replit? The good news is that this was a test run designed to uncover issues like this.

 
Even before the (inevitable?) meltdown he was saying he thought the chances of success were only 50/50.

The issue here is that we are in another financial bubble, the "AI bubble". What is pushing all this is people gambling with hundreds of millions of pounds of other people's money without understanding what they are betting on, not the actual utility of the new technology.

It really does not take much research to understand the fundamental weakness of LLMs: at heart they are "stochastic parrots". That said, there have been lots of advances, novel approaches, combinations with other "AI" techniques and so on, so they have moved well beyond the initial GPTs.

But as I keep coming back to: we developed these systems to mimic human behaviour, and to do that they were trained (originally) on data that humans had created, which had to include a large amount of lies or "counterfactual" information of various kinds. (Yes, most went through a human reinforcement training stage, but that was constrained by financial cost and time.) If you want to mimic human behaviour, including cognition, based on human-created content, you are going to repeatedly get AIs that lie, cheat, make things up and so on, because that is how humans behave. (HAL 9000's paranoia seems chillingly prophetic these days.)

As ever: computers only do what we tell them, not what we wanted.
 
What is so depressing is this is so blindingly obviously a bubble and yet governments are being sucked in to the hype rather than popping it before it gets too big.

We don't need to create very energy-inefficient intelligence. Evolution has created human intelligence and it needs only ~100W. Most people are probably capable of being much smarter than the graphics card in their computer.

The appeal of AI is, as always, for the owners of capital to remove as much bargaining power from labour as they can. In their quest to turn most of the population into wage slaves, they don't see that they are going to burn the world down in the process.
 
That, and the tech companies making it only want to use it to make social media more addictive than it already is. And the government is likely to make it so the president and his cronies can profit off the bubble while your 401(k) eats the losses.
 
We don't need to create very energy inefficient intelligence. Evolution has created human intelligence and it only needs ~100W. Most people are probably capable of being much smarter than the graphics card in their computer.
So we could plug people in and make them a battery.....
 
As an old programmer I have trouble with the idea of "Look, it's great! It's making the same mistakes people do!" That's what we used to call a "bug".

Old man shouts at Cloud.
But that means you'd make a great "prompt engineer": you can tell it what not to do... It might even listen to you.

ETA: Oh, and prompt engineer is now a very real "job" - not something I just made up as a throwaway joke (which is what it should be): https://www.coursera.org/articles/how-to-become-a-prompt-engineer

ETA 1: I've used a couple of the AIs for coding tasks, but come to think of it, I could only do that because I already understand programming and could work around the bugs they produced to get something working.
 
EODO OF GOVERNYaEB
E1REBAL RESERVE SIƎI∇A

That's my transcription of a fake seal contained within an AI-generated fake letter of resignation that pretends to be from Federal Reserve Board Chairman Jerome H Powell.

This obviously fake letter fooled US Senator Mike Lee (R - UT) and several other right-wingers.

The moral of this otherwise ordinary news item is that even low-quality AI can fool people who really and truly want to be fooled.
 
Something came up in another thread and I'd like to open a discussion here.


Here's what I am interested in:

[Age verification] could be by checking credit card details, checking ID or by using AI facial age estimation.

This is different to AI facial recognition; whereas facial recognition "recognises" a face by comparing it to an ID or to a database, facial age estimation doesn't attempt to identify the individual.

Instead, it judges faces based on the positions of their features and other ageing traits to estimate how old a user is.

"Facial age estimation is effectively taking a selfie in front of your mobile phone or your laptop, and we capture that image [...] on behalf of the business," Robin Tombs, the chief executive of age estimation company Yoti, told Sky News.

"[The AI] checks 'liveness' to ensure it isn't a photo of somebody older and then estimates the age from that selfie, and then returns an over-18 or under-18 message to the business.

"It then deletes the image."
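The workflow Tombs describes is simple enough to sketch in code. This is a minimal illustration only, with hypothetical `liveness_check` and `estimate_age` functions standing in for Yoti's actual (unpublished) models; the key points are that only a binary over/under-18 answer reaches the business and the image is discarded afterwards:

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class AgeCheckResult:
    over_18: bool  # the only datum returned to the business


def check_age(
    image: bytes,
    liveness_check: Callable[[bytes], bool],  # hypothetical: True for a live capture
    estimate_age: Callable[[bytes], float],   # hypothetical: estimated age in years
) -> Optional[AgeCheckResult]:
    """Liveness gate, then age estimate, then a binary answer.

    The selfie itself is never stored, identified, or forwarded.
    """
    try:
        if not liveness_check(image):
            # Reject e.g. a photo of somebody older held up to the camera.
            return None
        age = estimate_age(image)
        return AgeCheckResult(over_18=age >= 18.0)
    finally:
        # Drop our reference to the selfie; a real service would also
        # securely wipe it from any buffers and storage at this point.
        del image
```

Whether the real pipeline works this way internally is unknowable from the outside, of course; the sketch just makes explicit what "returns an over-18 or under-18 message and then deletes the image" implies about the data flow.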

Is this reasonable?
 
Yes.

The AI Revolution is Rotten to the Core
I like this video. I wonder how old it is. I have used chatbots, and I do notice that they forget old conversations, but if you say something really interesting they seem to remember it, and it even transfers to other chatbots.
 
Is this reasonable?
How can the AI tell the selfie is from the same person being verified? A 15 year old with a 19 year old cousin or sibling could pass this easily.
 
Is this reasonable?
Supermarkets and Yoti over here ran a government trial of "AI age verification" for buying alcohol at self-service checkouts a couple of years back. It was apparently 100% successful, in that no underage buyer got past the system, and only a very low percentage of legal purchasers failed. So it seems to be a comparatively well-developed technology, not reliant on the current "AI bubble" marketing nonsense.

ETA: Article https://www.grocerygazette.co.uk/2023/01/04/tesco-asda-digital-age/
 
How can the AI tell the selfie is from the same person being verified? A 15 year old with a 19 year old cousin or sibling could pass this easily.
Presumably the person taking the selfie would have to be present during the verification, barring some sort of wacky hijinks.
 
How can the AI tell the selfie is from the same person being verified? A 15 year old with a 19 year old cousin or sibling could pass this easily.
"Perfect is the enemy of the good." If it can reduce kids' access to dangerous and damaging websites like Facebook and TikTok by even 50%, I think it's worth the hassle to those over 16.
 
