
Worried about Artificial Intelligence?

If the results aren't the same there's no worry about "real humans" being replaced.

If and only if the people making the decisions are aware that the results won't be the same. You've never run across management or executives making bad decisions? In my experience a lot of people running things are easily swayed by a pretty graph and will pick the sexy AI results even if they're wrong because they simply don't know better. I can explain in paragraphs why X is correct and Y is wrong, but AI can spit out Y in three seconds accompanied by music and a graph that looks like flowers growing in a field. I'd estimate about a third of the execs I do work for would fall for it and pick Y.
 
People don't say false things? People understand everything?
LLMs indeed don't know that they don't know something. That is a very specific issue, though, and has nothing to do with understanding.

Listen to Sean Carroll's Mindscape Podcast on the issue.
He asked the program something like: "What's the likelihood that the product of two different numbers is a prime, and how does that change as the numbers get larger?"
The program said it was low, and getting lower with higher numbers.

If the program knew what a prime is, it wouldn't say such rubbish. It would understand that the chance is always zero. Because that is the definition of a prime.
But the program reads a definition the same way it reads a telenovela.
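
(If anyone wants to see the "always zero" part spelled out rather than taken from the definition, here's a quick throwaway sketch - Python, purely my own illustration, nothing the program produced - that samples pairs of distinct integers greater than 1 and checks whether any of the products comes out prime:)

import random

def is_prime(n):
    # simple trial division; fine for numbers this size
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

trials = 100_000
hits = 0
for _ in range(trials):
    a = random.randint(2, 10_000)
    b = random.randint(2, 10_000)
    while b == a:
        b = random.randint(2, 10_000)
    if is_prime(a * b):
        hits += 1

print(hits, "prime products out of", trials)  # always 0: a*b is divisible by a, with 1 < a < a*b

No matter how large you make the range, the count stays at zero - which is the whole point: the answer doesn't "change as the numbers get larger".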
 
Listen to Sean Carroll's Mindscape Podcast on the issue.
He asked the program something like: "What's the likelihood that the product of two different numbers is a prime, and how does that change as the numbers get larger?"
The program said it was low, and getting lower with higher numbers.

If the program knew what a prime is, it wouldn't say such rubbish. It would understand that the chance is always zero. Because that is the definition of a prime.
But the program reads a definition the same way it reads a telenovela.

That just means it doesn't understand what a prime is. Ask 100 people and tell me how many got this one right.
Also, this is how Bard responds (at the moment one of the worst AIs on the market):

The product of two prime numbers can never be prime. This is because a prime number is a number that is divisible only by two numbers: 1 and itself. The product of two prime numbers is divisible by both of the prime numbers and itself, so it is not prime.

For example, the product of the prime numbers 5 and 7 is 35. 35 is divisible by 1, 5, 7, and 35, so it is not prime.

This is a basic fact of number theory, and it can be proven using the definition of a prime number.
 
If and only if the people making the decisions are aware that the results won't be the same.
I think you're conflating two different issues.

You're thinking of the problems that can arise when someone assumes the results will be the same, replaces the human with an AI, and only then realizes they should have done their due diligence.

Joe is talking about the unlikelihood of replacing humans with AIs after observing that the AI results are not only dissimilar but also subpar.

Tesla is trying to replace human drivers with AIs, and running into serious problems because the results aren't the same. But there's no major push in society to replace human drivers with AIs, because we've noticed that so far the results aren't the same.
 
That just means it doesn't understand what a prime is. Ask 100 people and tell me how many got this one right.
Also, this is how Bard responds (at the moment one of the worst AIs on the market):

That's an example of a HUMAN telling the machine that it's wrong - most examples given no longer work because they are getting patched - by humans, not the program. The program still understands nothing.
And therefore it can't be relied on for anything that isn't basic and that you can't fact-check yourself.

It might still save you time, but it won't replace you.
Any lawyer relying on such a program to write their legal documents will be lucky to just go broke.
 
That just means it doesn't understand what a prime is. Ask 100 people and tell me how many got this one right.
Also, this is how Bard responds (at the moment one of the worst AIs on the market):

This is what Bing Chat replies with:


The likelihood that the product of two different numbers is a prime number is essentially zero. This is because a prime number is defined as a number that has exactly two distinct positive divisors: 1 and itself¹². Therefore, the product of two different numbers (both greater than 1) will always have at least four distinct positive divisors: 1, each of the numbers, and the product itself. This means it cannot be a prime number.

As for how this changes as the numbers get larger, it doesn't. Regardless of how large the two different numbers are, their product will never be a prime number if both numbers are greater than 1. This is a fundamental property of prime numbers and does not change based on the size of the numbers³.

However, if one of the numbers is 1, then the product will be a prime number if the other number is a prime number. But in this case, the two numbers are not different if we consider 1 not to be a prime number, which is the standard definition².

In terms of probability, the probability that two different integers are both simultaneously divisible by a prime p is $$\frac{1}{p^2}$$⁵. This means that the probability that two different integers are not simultaneously divisible by a prime p is $$1 - \frac{1}{p^2}$$⁵. This might give some insight into the distribution of prime numbers, but it doesn't directly answer your question because it's about divisibility, not about the product being prime.

I hope this helps! Let me know if you have any other questions..

Source: Conversation with Bing, 12/12/2023
....snip...

I've cut out the references it gives, and the formatting of the formulae doesn't carry over in a direct copy; you can use this link: https://sl.bing.net/b4JzX0AyXDw if you want to see the answer in full.
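
As an aside on the $$\frac{1}{p^2}$$ bit near the end: that particular claim is easy to sanity-check numerically. Again, just a throwaway Python sketch of mine, not part of Bing's answer:

import random

p = 7            # any prime works here
trials = 1_000_000
both = 0
for _ in range(trials):
    a = random.randint(1, 10**6)
    b = random.randint(1, 10**6)
    if a % p == 0 and b % p == 0:
        both += 1

print(both / trials)   # empirical frequency of both being divisible by p
print(1 / p**2)        # the claimed probability, 1/49 here

The two numbers come out close, as you'd expect - though, as Bing itself notes, this is about divisibility and doesn't bear on the product-being-prime question.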
 
I think you're conflating two different issues.

You're thinking of the problems that can arise when someone assumes the results will be the same, replaces the human with an AI, and only then realizes they should have done their due diligence.

Joe is talking about the unlikelihood of replacing humans with AIs after observing that the AI results are not only dissimilar but also subpar.

Tesla is trying to replace human drivers with AIs, and running into serious problems because the results aren't the same. But there's no major push in society to replace human drivers with AIs, because we've noticed that so far the results aren't the same.

Heh, you're right. I think both humans and AI are too stupid to be trusted to correctly assess AI usage.
 
I'm reminded of someone going "You can trick a self driving car by just writing the wrong number on the speed limit sign" as if A) that wouldn't also work for people and B) at least the self driving car would actually be paying attention to the speed limit sign.
 
Hey I get to reference an episode of Cracked: After Hours I haven't already referenced a billion times.

The gang are talking about how the "Robot Uprising/Singularity" would really happen in real life: would it be a Terminator/Skynet-style uprising, or a more subtle "we get too dependent on the machines" kinda way?

Dan suggests it's already happened with the internet. We're already too dependent on it.

The others in the group scoff at this, saying the internet is stupid and can only do what we tell it to and doesn't even do that all that well and it needs people for upkeep and maintenance.

Dan retorts

A) The best and brightest minds of the world are working to make the internet better.
B) If someone tried to shut down the internet, we would turn on them.

Same thing here. People are laughing at how crude AI is now, on the same intellectual footing as laughing at the canvas-and-wood-framed plane flying a few dozen feet over a sand dune in Kitty Hawk, NC, and going "Yeah, sure, that's gonna replace luxury passenger liners for travel between New York and Europe, pull the other one."

Yeah, AI will need upkeep for the foreseeable long-term future. But if it becomes useful and integrated, we as a society won't let it go unmaintained.
 
I tend towards Peter Watts's vision of the AI singularity: The AIs will promptly occupy themselves with AI thoughts that are inaccessible to human minds, and most people won't notice much of a difference, or any difference at all, after the transition to an AI-dominated society.
 
I'm reminded of someone going "You can trick a self driving car by just writing the wrong number on the speed limit sign" as if A) that wouldn't also work for people and B) at least the self driving car would actually be paying attention to the speed limit sign.

plenty of cars now read the speed signs - badly.
 
plenty of cars now read the speed signs - badly.

//Slight hijack//

My new car has that thing where it shuts off the engine at red lights to save gas, and I'm pretty sure it can tell when the light turns green, because it turns the engine back on before I have a chance to step on the gas.
 
You know what a HUMAN does when asked about something they don't understand?
They ask for an explanation or clarification.

But these LLMs don't understand that they might not understand something - because it's all the same to them.
 
That's an example of a HUMAN telling the machine that it's wrong - most examples given no longer work because they are getting patched - by humans, not the program. The program still understands nothing.
And therefore it can't be relied on for anything that isn't basic and that you can't fact-check yourself.

It might still save you time, but it won't replace you.
Any lawyer relying on such a program to write their legal documents will be lucky to just go broke.

Actually, that's exactly what LLMs can't do. They can't be easily patched by humans telling them what to do. This is just gradual improvement of the technology - about one year's worth of difference.
 
Then we've looped back to the start.

If AIs can't ever do it, what are we worried about?
 
Again nobody cares if the results are the same.

We're not having the "can a machine truly think" discussion.
 
Again nobody cares if the results are the same.

We're not having the "can a machine truly think" discussion.

I think how the results are arrived at also matters. If NASA carefully calculates the correct place for the spacecraft descent and Glumbo the Chimp throws a dart at the globe and they both hit on the same spot, the results are the same. Does it matter? It does to Glumbo's employment prospects, and likely also to the peace of mind of the returning astronauts and their insurance companies.
 
