Piscivore
This is probably a dumb question, but do computers or like machines of any caliber have the capability to make correlations between apparently unrelated information in such a way as this?
This.

What has the "sex_robot_book" tag got to do with this!?
I think it will happen eventually. I don't believe that there is any "special" quality about the human brain that makes duplicating its functions impossible. It's ultimately just a machine, and any machine can be reverse engineered.
But they cannot do such things yet, right?

IMHO, a complex enough computer will perfectly duplicate the actions of a human.
But how about coming up with their own, independently?

A computer will do anything it is programmed to do.
You want imagination? Program the algorithm which you think best suits the definition of imagination (it will probably be YOUR imagination), or in other words, tell the computer what you want it to make of two perceptions (according to the link you provided) in order to come up with a third one.
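The point above - that any "imagination" algorithm would encode the programmer's own choices - can be shown with a toy sketch. Everything here is invented for illustration: two hypothetical "perceptions" as feature maps, and one arbitrary, programmer-chosen rule (feature-wise maximum) for combining them into a third.

```python
# Two hypothetical "perceptions", each reduced to a few made-up features.
horse = {"legs": 4, "wings": 0, "size": 5}
bird = {"legs": 2, "wings": 2, "size": 1}

def imagine(a, b):
    # One arbitrary combination rule, chosen by the programmer:
    # take the maximum of each feature.
    return {k: max(a[k], b[k]) for k in a}

# Combining the two perceptions yields a third, novel one:
print(imagine(horse, bird))  # {'legs': 4, 'wings': 2, 'size': 5} - a "pegasus"
```

Swap `max` for `min` or an average and the machine "imagines" something different - the creativity lives entirely in the rule the programmer picked.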
Eh, the usual human stuff. He laughs, he learns, he loves.

Such a computer will not be programmed in its every detail. We will provide the basis and it will grow into intelligence, as children do. The question is, what will we (and such intelligences) DO?
http://www.internationalskeptics.com/forums/showthread.php?p=2627135#post2627135

But how about coming up with their own, independently?
Independently of what, though?

Independently of specific pre-programming. Let's say the military programs a machine to observe an area and identify vehicles by their general shape, engine sound, and speed. The machine makes several thousand observations of hostile, friendly, and civilian vehicles. One day, it draws on these observations to determine that a particular vehicle, despite fitting several characteristics of a hostile vehicle, is instead being operated by friendlies - perhaps because they drive it differently. Could such a thing be possible, or at least plausible, even if this was not a characteristic the designers programmed or even planned for?
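For what it's worth, the scenario above is essentially what a statistical classifier does. A minimal sketch, with entirely invented feature names and numbers: a nearest-neighbour learner labels a new vehicle by its most similar past observation, so a cue like driving style can sway the verdict even though no explicit rule about it was ever written.

```python
import math

# Hypothetical labelled observations: features are (shape, engine_sound, speed,
# driving_style), all invented values on a 0-1 scale. Driving style is never
# mentioned in any rule; it is just another column in the observed data.
observations = [
    ((0.9, 0.8, 0.7, 0.1), "hostile"),
    ((0.8, 0.9, 0.6, 0.2), "hostile"),
    ((0.2, 0.3, 0.4, 0.8), "friendly"),
    ((0.8, 0.8, 0.6, 0.9), "friendly"),  # captured hostile truck, friendly driver
    ((0.1, 0.1, 0.3, 0.5), "civilian"),
]

def classify(features):
    """1-nearest-neighbour: label a vehicle by its closest past observation."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(observations, key=lambda obs: dist(obs[0], features))[1]

# Looks and sounds hostile, but is driven like a friendly vehicle:
print(classify((0.85, 0.85, 0.65, 0.85)))  # friendly
```

The correlation between driving style and allegiance is never programmed in; it exists only in the examples the machine has accumulated, which is what lets it override the "hostile-looking" features.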
Originally Posted by yairhol
A computer will do anything it is programmed to do.
You want imagination? Program the algorithm which you think best suits the definition of imagination (it will probably be YOUR imagination), or in other words, tell the computer what you want it to make of two perceptions (according to the link you provided) in order to come up with a third one.
But how about coming up with their own, independently?
I knew that, but that doesn't preclude the possibility that it might do something that is not random but is unexpected, does it?

I don't think that will ever happen. Even asking the computer to generate a random number is not really random but subject to certain rules and algorithms.
Regards,
Yair
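The point about random numbers can be seen directly: a pseudo-random generator is an ordinary deterministic algorithm, so seeding it twice with the same value reproduces the identical "random" sequence. A minimal sketch using Python's standard library:

```python
import random

# Two generators seeded with the same value are fully deterministic:
# they produce exactly the same "random" sequence.
rng1 = random.Random(42)
rng2 = random.Random(42)

seq1 = [rng1.randint(0, 99) for _ in range(5)]
seq2 = [rng2.randint(0, 99) for _ in range(5)]

print(seq1 == seq2)  # True - nothing random about it, given the seed
```

Whether deterministic machinery can still produce genuinely *unexpected* behaviour is exactly the open question in this thread; determinism rules out randomness, not surprise.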
@OP: You imply that people can imagine. I demand proof for this statement. Also, a proper definition of what imagination means.

I linked to the definition that was pertinent to the discussion I was looking for. This definition is describing behaviour that has been observed. I'm not sure what sort of "proof" you think you want beyond that.
But that is not imagination. That is simply the computer doing what it is designed to do. It measures the characteristics of vehicles and determines what the vehicle is and who is driving it.

What if the "who is driving it" wasn't a characteristic the designers intended to program for?
You seem to be asking about heuristics and genetic algorithms, but this has nothing to do with imagination.

I probably am. Thanks, and to you too Pixy and Thabiguy, for pointing me in the right direction.
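Since heuristics and genetic algorithms were mentioned: a genetic algorithm is a good example of a program arriving at solutions its author never wrote down. A toy sketch of the classic "OneMax" problem (maximise the number of 1-bits in a string); population size, string length, and generation count are all arbitrary choices here.

```python
import random

rng = random.Random(0)  # fixed seed for reproducibility

def fitness(bits):
    return sum(bits)  # toy goal: as many 1s as possible

def evolve(pop_size=20, length=16, generations=40):
    # Start from a random population of bit strings.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # selection: top half survives
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)     # single-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(length)          # point mutation: flip one bit
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to 16: nobody wrote the winning string by hand
```

The designer only specifies the scoring function and the variation operators; the actual high-fitness solutions emerge from the search.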
Yes, definitely.
Computers can alter their programming based on data received - or to put it another way, they can learn from observation. Usually this is set up so the computer's operation will remain within certain bounds, because we expect computers to behave predictably, unlike people.
Computers are capable of all the same types of learning and behaviour as humans, including self-awareness, but are much simpler and less sophisticated, so they don't fare as well on complex problems. Then again, it takes decades of training for a human to competently handle the situation you describe.
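The "within certain bounds" idea can be made concrete. A minimal sketch with made-up numbers: a detector adapts a decision threshold to whatever data it observes (learning), but the update rule clamps the threshold to designer-chosen limits (predictability).

```python
# Designer-set limits the learned threshold may never leave (hypothetical values).
LOW, HIGH = 0.2, 0.8

def update(threshold, observation, rate=0.1):
    """Exponential moving average: adapt toward the data, then clamp to bounds."""
    threshold += rate * (observation - threshold)
    return max(LOW, min(HIGH, threshold))

t = 0.5
# A run of unusually high readings drags the threshold upward...
for obs in [0.9, 0.95, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]:
    t = update(t, obs)
print(t)  # 0.8 - learning stopped exactly at the designer's bound
```

The system genuinely changes its behaviour in response to observation, yet it can never drift outside the envelope its designers decided on in advance.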