
Worried about Artificial Intelligence?

Not even close.

What could HAL do that we cannot make a computer do today?

From the Wiki page:

HAL has been shown to be capable of speech synthesis, speech recognition, facial recognition, natural language processing, lip reading, art appreciation, interpreting emotional behaviours, automated reasoning, spacecraft piloting and computer chess.

Which of those functions are not yet possible? Lip reading? Nope, that has apparently been done already.
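Most of that list is a few lines of library code away now. A rough sketch, assuming the Hugging Face transformers library and its default checkpoints (the audio and image file names are just placeholders):

```python
# Rough sketch of a few "HAL" capabilities via off-the-shelf pipelines.
# Requires: pip install transformers torch
from transformers import pipeline

# Speech recognition: audio in, transcript out.
asr = pipeline("automatic-speech-recognition")
print(asr("mission_briefing.wav")["text"])       # placeholder file name

# A crude stand-in for "interpreting emotional behaviours" in text.
sentiment = pipeline("sentiment-analysis")
print(sentiment("I'm sorry, Dave. I'm afraid I can't do that."))

# Image classification, standing in for facial/art recognition.
vision = pipeline("image-classification")
print(vision("pod_bay_door.jpg"))                # placeholder file name
```

None of that makes it HAL, obviously, but the individual functions are commodity parts at this point.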
 
I bet that it picks Sheila! Candi is nowhere near as attractive no matter how many neural implants she says she's got. She doesn't seem to get that there's more to farting than just the sound.

Do you remember when we thought that AIs would break down if they were ever exposed to a serious case of cognitive dissonance?

I still think Clarke (mainly in the sequel novel) explored an area we may have to deal with: if we do manage to develop general AIs, they may indeed be subject to something akin to "mental illness" in humans.
 
What could HAL do that we cannot make a computer do today?
Be so reliable that it can claim to make no errors without making everyone around it burst into laughter.

I just asked ChatGPT which is more likely to be green—a purple hat or a blue smoothie. The answer: the blue smoothie.
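(For anyone who wants to repeat that kind of probe, here's a minimal sketch using the OpenAI Python client; the model name is just a placeholder for whatever you have access to, and your answer may well differ from mine.)

```python
# Minimal sketch of the probe described above.
# Requires: pip install openai, with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Which is more likely to be green: a purple hat or a blue smoothie?",
    }],
)
print(reply.choices[0].message.content)
```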

Current tools are bad at anything that requires them to understand language as representational, because they don’t.

The hard part isn’t understanding what is being said by reading lips, but understanding what it means.
 
I feel like people don't get that AI is like all other tech: it's going to get better at an exponential rate.

"Close" means something different in this context. If AIs are getting the broad, conceptual strokes of something now and 99% screwing up the practical application of it... that's actually pretty close.

Anything AI can do in a "Funny LOL I see what you were trying to do but look at how much you messed it up" way NOW, it's going to be doing very, very, very well 18 months, 36 months, 72 months down the road. We're not talking about AI perfecting this at some detached far-off point in the future.

Like a few months back, the big "tell" was that AI couldn't draw human hands.

1) Rob Liefeld couldn't draw feet, and he was the most successful comic artist of an entire decade.
2) Half of all cartoonists joke about how they can't draw hands.
3) Seen AI art in the last few weeks? That's not that much of a problem anymore.
 
"Close" means something different in this context. If AIs are getting the broad, conceptual strokes of something now and 99% screwing up the practical application of it... that's actually pretty close.
They don’t get the broad conceptual strokes of anything. They don’t have concepts at all. Hell, they don’t even have percepts. They don’t understand what you’re saying.
 
They don’t get the broad conceptual strokes of anything. They don’t have concepts at all. Hell, they don’t even have percepts. They don’t understand what you’re saying.

There's no way to go down this road without having a philosophical "Does it have a soul" discussion.

My point is, call it whatever you want: if it's bad at doing it now, it's going to be good at doing it soon.
 
There's no way to go down this road without having a philosophical "Does it have a soul" discussion.
Yes there is, as is evidenced by the many people in the relevant professions who manage to do exactly that. You don’t need to talk about souls to acknowledge the distinction between form and meaning.

My point is, call it whatever you want: if it's bad at doing it now, it's going to be good at doing it soon.
The problem isn’t that current AIs are bad at representational understanding, it’s that they don’t do it at all.
 
If at the end of the day the results are the same nobody is going to care.

If I tell an AI "Paint me a seascape" and it can do it on a FUNCTIONAL level, nobody is going to care about the behind-the-scenes process, much less what we call it.
 
If at the end of the day the results are the same nobody is going to care.

If I tell an AI "Paint me a seascape" and it can do it on a FUNCTIONAL level, nobody is going to care about the behind-the-scenes process, much less what we call it.

Surely it depends on the task. Some can be accomplished without understanding at all, others require some understanding, and some require a great deal of understanding.
 
Surely it depends on the task. Some can be accomplished without understanding at all, others require some understanding, and some require a great deal of understanding.

And if at the end of the day the results are the same, nobody is going to care how much understanding is required.*

---
*Actually, I suppose a lot of people will start to question their assumptions about how much understanding a task actually requires. And a lot of people will loudly proclaim that such-and-such a task clearly proves that AIs understand things.

But as long as the results are the same, I doubt many people will care that an AI did it instead of a human.
 
And if at the end of the day the results are the same, nobody is going to care how much understanding is required.*

---
*Actually, I suppose a lot of people will start to question their assumptions about how much understanding a task actually requires. And a lot of people will loudly proclaim that such-and-such a task clearly proves that AIs understand things.

But as long as the results are the same, I doubt many people will care that an AI did it instead of a human.

For some tasks the results won't be the same, though. Those that require actual understanding will yield different results if the agent performing them doesn't understand them. The danger is when the user assumes the task requires no understanding, or falsely believes the agent does have that understanding, and accepts the false results as true.

I see it pretty much daily in my work: we have complex sets of data, and some self-service reporting tools, and a team of professional reporters. Some people use the self-service tools and get incorrect answers because they don't know how to ask the right questions, and the self-service tool dumbly accepts their input and dumbly outputs based on that. Any computer or AI can add numbers, but some sets of numbers require understanding so the agent knows which numbers should be added, and which ones shouldn't, and under what circumstances, and oh look at that other thing over there that completely changes the meaning of this and renders the entire thing pointless.
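A toy illustration of what I mean, with made-up numbers (pandas just for convenience): the self-service total below is arithmetically correct and completely meaningless, because the rows aren't even in the same currency.

```python
# Toy example: a numerically correct but meaningless "self-service" total.
# Requires: pip install pandas
import pandas as pd

orders = pd.DataFrame({
    "region":   ["EU",   "EU",   "US",   "US"],
    "currency": ["EUR",  "EUR",  "USD",  "USD"],
    "amount":   [1200.0, 800.0,  950.0,  1100.0],
})

# What the naive self-service query does: add everything it's given.
print("Naive total:", orders["amount"].sum())   # 4050.0 -- in what currency?

# What a sensible question looks like: totals per currency.
print(orders.groupby("currency")["amount"].sum())
```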

I don't think we're at the stage yet where AI can tell you you're asking it the wrong question. It's going to try to answer questions even when they're the wrong ones, and the answers will be worse than useless even though they're 100% correct for the question asked.
 
If the results aren't the same there's no worry about "real humans" being replaced.

If the results aren't exactly the same but close enough and people are okay with "close enough" being done quicker/cheaper what problem are we trying to fix?

If the results are the same what problem are we trying to fix?
 
They don't
It's trivially easy to make them say false things.
And if they understood what a Quote/Reference IS, they would know not to make one up on the spot.

People don't say false things? People understand everything?
LLMs indeed don't know that they don't know something. That is a very specific issue though, and has nothing to do with understanding.
 