> We don't have computers that can make good jokes.
Do we have computers that make bad ones? I'm not being facetious, here.
> We don't have computers that can feel pain, or pleasure...
These are technically possible now though, right? Negative and positive feedback, and all that?
> Do we have computers that make bad ones? I'm not being facetious, here.
> These are technically possible now though, right? Negative and positive feedback, and all that?
It almost sounds like you are saying "computers are capable of all the same types of learning and behaviour as humans, including self-awareness."
> 1) present tense
Well, the past tense would be confusing.
> 2) all?
All.
> 3) "same types"--you mean functionally? or by same process?
Functionally.
> 4) self-awareness?
Absolutely. Self-aware computer systems are the rule rather than the exception.
> The strong AI people have been saying that various amazing things will be forthcoming soon. They have been saying this for a long time.
AI is whatever hasn't been done yet.
> We don't have computers that can dependably cross a street. That's a hard problem, but not for people.
Real-world problem. And it's a problem for cats and dogs and gorillas... and dolphins, for that matter.
> We don't have computers that can make good jokes.
A limitation they share with most people.
> We don't have computers that can feel pain, or pleasure, or love.
Ah. And you can prove that, can you?
> I'm perfectly willing to concede that such computers might be possible in hundreds (not thousands) of years, but then they won't really be computers anymore.
I disagree completely. The human brain is nothing but a squishy, unreliable computer.
> Look, I'm aware that these are deep issues. You've got people like Dennett on one side, and people like Searle on the other.
Searle is a clown. Unless he has recanted his "Chinese Room" recently?
> Dennett seems to want to do away with consciousness by sleight-of-hand.
Dennett seems to be pretty much right. I'm not sure I agree with him entirely, but he's onto something. Consciousness is not magic. It's merely the ability to examine one's own thought processes. We discussed this a while back, and while I don't necessarily agree with Dennett's position that a thermostat is conscious, I figure that a computer that supports all reasonable requirements for consciousness - sense, memory, decision and introspection - could be constructed using fewer than one hundred transistors.
> Searle doesn't seem to be able to imagine what computers might be capable of in the future.
Or in 1950, for that matter.
> But I couldn't let what you said stand--at least as read literally.
Well, sorry, but I meant exactly what I said.
> Do we have computers that make bad ones? I'm not being facetious, here.
Oh, sure. Bad jokes are easy. Example.
> These are technically possible now though, right? Negative and positive feedback, and all that?
Possible and done. Pain and pleasure are, as you say, negative and positive feedback signals. In humans, the psychology of our responses to these signals is complex. In simpler organisms, less so.
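If pain and pleasure really are just negative and positive feedback, a toy version fits in a few lines. Here is a minimal sketch (every name and number in it is my own invention, not anything from the thread): an agent that comes to avoid actions that produced a negative signal and repeat ones that produced a positive signal.

```python
import random

class FeedbackAgent:
    def __init__(self, actions):
        # Start indifferent: every action has a neutral expected signal.
        self.value = {a: 0.0 for a in actions}

    def choose(self):
        # Mostly pick the action remembered as most "pleasant";
        # explore at random one time in ten.
        if random.random() < 0.1:
            return random.choice(list(self.value))
        return max(self.value, key=self.value.get)

    def feel(self, action, signal):
        # "Pain" (signal < 0) suppresses an action; "pleasure"
        # (signal > 0) reinforces it.
        self.value[action] += 0.5 * (signal - self.value[action])

agent = FeedbackAgent(["touch_stove", "eat_snack"])
for _ in range(50):
    act = agent.choose()
    agent.feel(act, -1.0 if act == "touch_stove" else 1.0)
print(agent.value)  # touch_stove ends up negative, eat_snack near 1.0
```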
> Ah. And you can prove that, can you?
> Well, sorry, but I meant exactly what I said.
> You're consistent.
I try.
> But, inconsistent person that I am, I imagine I feel pain when my child feels pain.
That's not an unreasonable assertion. Given that pain is a negative feedback signal, and that injury to your children puts your genetic propagation at risk, it is reasonable that you would feel the same (or similar) signals in that situation.
> On the other hand, my computer is much more useful to me than my child. I would feel nothing but irritation should someone destroy my computer.
And how, exactly, is that relevant to the discussion?
> It would be a more interesting conversation if some of you who believe computers are currently capable of anything would admit that it hurts when you stub your toe.
Of course it hurts when I stub my toe. And a robot can experience pain when it breaks a wheel.
> It's not too interesting if you just say that Searle is a clown, and that Dennett is basically right.
I explained (briefly) why Dennett is basically right: The requirements for consciousness are sense, memory, decision and introspection. You can simplify this further if you wish, but to remove all reasonable objections, I posited a device that has two inputs with multiple states, two memory cells again with multiple states, and the logical ability to compare inputs and memories to each other in any combination and adjust the memory depending on the results. As I said, a hundred transistors suffices.
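For the curious, here is one way to read that description in code. This is only my construal of the posited device, and a hypothetical illustration rather than a proof; the hundred-transistor figure is the poster's claim.

```python
class MinimalDevice:
    """Sense, memory, decision, introspection -- nothing else."""

    def __init__(self):
        self.memory = [0, 0]  # two multi-state memory cells

    def step(self, a, b):
        # Sense: two multi-state inputs arrive.
        # Decision: if the inputs agree, commit their value to memory.
        if a == b:
            self.memory[0] = a
        # Introspection: compare the new input against the device's
        # own stored state and record whether they matched.
        self.memory[1] = int(a == self.memory[0])
        return list(self.memory)

d = MinimalDevice()
print(d.step(3, 3))  # [3, 1]: inputs agreed, memory updated
print(d.step(1, 2))  # [3, 0]: inputs disagreed, old memory retained
```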
> Show me some examples of computers you believe currently experience pain or pleasure.
Well, I'm playing Baldur's Gate II right now. My characters scream when they get hit.
> Remember, there are people at either end of the conversation who are acting as if they feel pleasure.
Or profound irritation.
> Remember, also, that not everything that is undefinable (or very hard to define) is therefore non-existent.
What is hard to define?
> Also, WETWARE! SQUISHY! YUCKERS! the horror! the horror!
Is your brain not squishy? If I club it, do you not ouch?
> squishy and inconsistent and stupid is nice.
Squishy is purely physical, and irrelevant. If you like inconsistency and stupidity, well, your life must be an unending sea of bliss.
But that is not imagination. That is simply the computer doing what it is designed to do. It measures the characteristics of vehicles and determines what the vehicle is and who is driving it. If it comes to the conclusion that an apparently hostile vehicle is actually friendly, then that is determined entirely by past experience, not by any kind of guessing or imagination. Imagination would be if it suddenly decided the vehicle was driven by a herd of pink elephants. Although this would probably be cause for maintenance rather than celebrating the birth of AI.
Independently of specific pre-programming. Let's say the military programs a machine to observe an area and identify vehicles by their general shape, engine sound, and speed. The machine makes several thousand observations of hostile, friendly, and civilian vehicles. One day, it draws on these observations to determine that a particular vehicle, despite fitting several characteristics of a hostile vehicle, is instead being operated by friendlies, perhaps because they drive it differently. Could such a thing be possible, or at least plausible, even if this was not a characteristic the designers programmed or even planned for?
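Something like this falls out of even very simple learning methods. A hedged sketch, with every feature name and number invented for illustration: a nearest-neighbour classifier whose stored observations happen to include calmly driven friendly vehicles, so driving style can outvote shape and engine sound without anyone having programmed that rule.

```python
import math

# (shape_class, engine_hz, speed_kmh, driving_erraticness) -> label
observations = [
    ((1, 220.0, 60.0, 0.90), "hostile"),
    ((1, 225.0, 55.0, 0.80), "hostile"),
    ((1, 218.0, 40.0, 0.10), "friendly"),  # same shape, driven calmly
    ((1, 230.0, 45.0, 0.20), "friendly"),
    ((2, 120.0, 30.0, 0.20), "civilian"),
]

def classify(vehicle, k=3):
    # Plain k-nearest-neighbour vote. Nothing here encodes "driving
    # style matters"; the stored observations alone decide which
    # neighbours are closest.
    dists = sorted((math.dist(vehicle, obs), label)
                   for obs, label in observations)
    nearest = [label for _, label in dists[:k]]
    return max(set(nearest), key=nearest.count)

# Shape and engine sound fit the hostile examples, but the calm
# driving pulls the vehicle toward the friendly ones.
print(classify((1, 221.0, 42.0, 0.15)))  # -> "friendly"
```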
But the human brain works the same way. There is nothing in all imagination that wasn't assimilated from past experiences.
If the machine suddenly decided the vehicle was driven by pink elephants, that means it had some experience of elephants, the color pink, etc.
The machine might be in an environment where trees and rocks exist; so it could, in theory, imagine that green rocks were driving some vehicle. The machine's designers would undoubtedly see this as some form of processing error, but it could also very well be simple imagination.
I think a lot of us forget that everything we think or imagine is based entirely on our past experiences; that our brains came as blank as can be, and were programmed over the course of our lifetimes with a vast array of experiences, cross-linked via trial and error.
So if we were to create some vastly complex thinking machine, and gave it a lifetime of experiences and the means to cross-index those experiences in any way it desired, then, yes, it would imagine quite a bit.
> And there's the rub:
> A machine "desires" nothing. It hasn't got a desire-a-mabobby.
My antivirus-a-mabobby is being rather insistent that I renew my subscription. Soon.
> You could feed a bazillion facts into a database, and give it the capability to cross-index items. It would never go and do the indexing, though, without being "told" to. If you tell it to do so, you're going to have to provide rules and goals because it won't come up with any on its own.
Isn't it possible to create rules and goals unintentionally, especially in complex systems? As I recall, quite a lot of "I, Robot" was about just such problems.
> We've got imagination. We have built-in goals (food, safety, sleep) and we set ourselves other goals in attaining those primary goals. We have limitations that restrict our data combining - physical limits that prevent carrying out some actions, mental limits on how much information we can process at once.
Aren't "mental limits" just another form of physical limitation?
> You could provide such a program with a way to determine its own goals. A "learning" algorithm, so to speak. But, you must still set it an objective of some kind and give it some kind of limits.
Well, it seems to me that the limits you mentioned already apply, so it inherently has "some kind of limit". It is funny that you mentioned physical requirements, because so far the prime motivators for my robotic protagonist's actions have been precisely security, then power.
> If you don't, you get a tremendous mess. Just do a join of all tables in a database, without a WHERE clause. That gets you imagination, in spades. The database will combine all of the elements in all of the tables in all ways possible. Won't do you much good, because there's no way of sorting something useful out of the crap. Wouldn't do the machine much good, either. It'd crunch and grind and spit out phracking long lists of gibberish, but it wouldn't be any closer to imagination.
This seems contradictory: "combining all tables gets you imagination, but it isn't useful data, so it isn't imagination"?
And there's the rub:
A machine "desires" nothing. It hasn't got a desire-a-mabobby. You could feed a bazillion facts into a database, and give it the capability to cross-index items. It would never go and do the indexing, though, without being "told" to. If you tell it to do so, you're going to have to provide rules and goals because it won't come up with any on its own.
You could provide such a program with a way to determine its own goals. A "learning" algorithm, so to speak. But, you must still set it an objective of some kind and give it some kind of limits. If you don't, you get a tremendous mess. Just do a join of all tables in a database, without a WHERE clause. That gets you imagination, in spades. The database will combine all of the elements in all of the tables in all ways possible. Won't do you much good, because there's no way of sorting something useful out of the crap. Wouldn't do the machine much good, either. It'd crunch and grind and spit out phracking long lists of gibberish, but it wouldn't be any closer to imagination.
We've got imagination. We have built-in goals (food, safety, sleep) and we set ourselves other goals in attaining those primary goals. We have limitations that restrict our data combining - physical limits that prevent carrying out some actions, mental limits on how much information we can process at once.
Unless you provide your program with some sort of goals and limits, it won't "imagine," it'll either do nothing or else spew endless garbage.
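The join-without-a-WHERE-clause point is easy to demonstrate. Here is a sketch in Python rather than SQL (the lists are made up): the unconstrained cross product enumerates everything, and a goal is the filter that makes any of it usable.

```python
from itertools import product

colors   = ["pink", "green", "olive"]
subjects = ["elephant", "rock", "soldier"]
actions  = ["driving a truck", "singing", "patrolling"]

# The unconstrained "join": every element combined with every other.
everything = list(product(colors, subjects, actions))
print(len(everything))  # 27 combinations, nearly all gibberish

# A goal acts like a WHERE clause: it sorts the useful out of the crap.
threats = [c for c in everything
           if c[1] == "soldier" and c[2] == "patrolling"]
print(threats)  # 3 combinations worth looking at
```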
Define "imagine".
When two existing perceptions are combined within the mind, the result is a third perception, referred to as their synthesis, and on occasion a fourth, called the antithesis. This new perception, which at that point exists only in the imagination, can often become the inspiration for a new invention or technique.
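Read that way, "imagine" gets an almost mechanical gloss. A very loose sketch, entirely my own construal of the definition above:

```python
# Two remembered perceptions...
perception_a = {"kind": "elephant"}
perception_b = {"color": "pink"}

# ...their synthesis: a third perception combining both...
synthesis = {**perception_a, **perception_b}  # a pink elephant

# ...and, on occasion, an antithesis: a fourth built by negation.
antithesis = {key: f"not {value}" for key, value in synthesis.items()}

print(synthesis)   # exists only in the "imagination" until acted upon
print(antithesis)
```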