mumblethrax
Species traitor
Joined: Apr 5, 2004
Messages: 4,991
> Are not all of the elements achievable now?

Not even close.
> From the Wiki page: "HAL has been shown to be capable of speech synthesis, speech recognition, facial recognition, natural language processing, lip reading, art appreciation, interpreting emotional behaviours, automated reasoning, spacecraft piloting and computer chess."
>
> Which of those functions are not yet possible? Lip reading? Nope, that has apparently been done already.
I bet that it picks Sheila! Candi is nowhere near as attractive no matter how many neural implants she says she's got. She doesn't seem to get that there's more to farting than just the sound.
Do you remember when we thought that AIs would break down if they were ever exposed to a serious case of cognitive dissonance?
> What could HAL do that we cannot make a computer do today?

Be so reliable that it can claim to make no errors without making everyone around it burst into laughter.
> "Close" means something different in this context. If AIs are getting the broad, conceptual strokes of something now and 99% screwing up the practical application of it... that's actually pretty close.

They don’t get the broad conceptual strokes of anything. They don’t have concepts at all. Hell, they don’t even have percepts. They don’t understand what you’re saying.
> There's no way to go down this road without having a philosophical "Does it have a soul" discussion.

Yes there is, as is evidenced by the many people in the relevant professions who manage to do exactly that. You don’t need to talk about souls to acknowledge the distinction between form and meaning.
> My point: call it whatever you want; if it's bad at doing it now, it's going to be good at doing it soon.

The problem isn’t that current AIs are bad at representational understanding, it’s that they don’t do it at all.
> If at the end of the day the results are the same nobody is going to care. If I tell an AI "Paint me a seascape" and it can do it on a FUNCTIONAL level, nobody is going to care about the behind-the-scenes process, much less what we call it.

Is there some reason to suppose that the results are going to be the same?
Surely it depends on the task. Some can be accomplished without understanding at all, others require some understanding, and some require a great deal of understanding.
> They don’t get the broad conceptual strokes of anything. They don’t have concepts at all. Hell, they don’t even have percepts. They don’t understand what you’re saying.

Of course they understand and have concepts. Why wouldn't they?
And if at the end of the day the results are the same, nobody is going to care how much understanding is required.*
---
*Actually, I suppose a lot of people will start to question their assumptions about how much understanding a task actually requires. And a lot of people will loudly proclaim that such-and-such a task clearly proves that AIs understand things.
But as long as the results are the same, I doubt many people will care that an AI did it instead of a human.
> Of course they understand and have concepts. Why wouldn't they?

They don't. It's trivially easy to make them say false things. And if they understood what a Quote/Reference IS, they would know not to make one up on the spot.