Can computers "imagine"?

A computer will do anything it is programmed to do.
You want imagination? Program the algorithm which you think best suits the definition of imagination (it will probably be YOUR imagination), or in other words tell the computer what you want it to make of two perceptions (according to the link you provided) in order to come up with a third one.
Computers are dumb, but capable of doing as much as their programmers are able to provide.

Regards,
Yair
 
What has the "sex_robot_book" tag got to do with this!?
 
I think it will happen eventually. I don't believe that there is any "special" quality about the human brain that makes duplicating its functions impossible. It's ultimately just a machine, and any machine can be reverse engineered.
 
IMHO, a complex enough computer will perfectly duplicate the actions of a human.
 
Such a computer will not be programmed in its every detail. We will provide the basis and it will grow into intelligence, as children do. The question is, what will we (and such intelligences) DO?
 
What has the "sex_robot_book" tag got to do with this!?
This.
I think it will happen eventually. I don't believe that there is any "special" quality about the human brain that makes duplicating its functions impossible. It's ultimately just a machine, and any machine can be reverse engineered.
IMHO, a complex enough computer will perfectly duplicate the actions of a human.
But they cannot do such things yet, right?
A computer will do anything it is programmed to do.
You want imagination? Program the algorithm which you think best suits the definition of imagination (it will probably be YOUR imagination), or in other words tell the computer what you want it to make of two perceptions (according to the link you provided) in order to come up with a third one.
But how about coming up with their own, independently?
Such a computer will not be programmed in its every detail. We will provide the basis and it will grow into intelligence, as children do. The question is, what will we (and such intelligences) DO?
Eh, the usual human stuff. He laughs, he learns, he loves.
 
This is probably a dumb question, but do computers, or machines of any caliber, have the capability to make correlations between apparently unrelated information in such a way as this?

Sure. That's easy; anything Turing-complete can do that.
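
For what it's worth, here is a minimal sketch of that kind of correlation-finding in Python; all the numbers and variable names are invented for illustration:

```python
# Minimal sketch: measuring how strongly two apparently unrelated
# series of observations move together. All numbers are invented.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two hypothetical, seemingly unrelated measurements.
engine_noise = [2.1, 3.4, 2.9, 4.0, 3.1, 4.4]
driver_errors = [0.3, 0.9, 0.6, 1.2, 0.7, 1.3]

r = pearson(engine_noise, driver_errors)
print(f"correlation: {r:.2f}")  # close to 1.0, i.e. strongly related
```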


Independently of what, though? Our thoughts are dependent on our genetic heritage and our environment. A computer has the same restrictions as a human; it's not going to come up with a new program without some sort of input.
 
Independently of what, though?
Independently of specific pre-programming. Let's say the military programs a machine to observe an area and identify vehicles by their general shape, engine sound, and speed. The machine makes several thousand observations of hostile, friendly, and civilian vehicles. One day, it draws on these observations to determine that a particular vehicle, despite fitting several characteristics of a hostile vehicle, is instead being operated by friendlies - perhaps because they drive it differently. Could such a thing be possible, or at least plausible, even if this was not a characteristic the designers programmed or even planned for?
 
Yes. The brain IS a computer and it does so.

IXP
 
Independently of specific pre-programming. Let's say the military programs a machine to observe an area and identify vehicles by their general shape, engine sound, and speed. The machine makes several thousand observations of hostile, friendly, and civilian vehicles. One day, it draws on these observations to determine that a particular vehicle, despite fitting several characteristics of a hostile vehicle, is instead being operated by friendlies - perhaps because they drive it differently. Could such a thing be possible, or at least plausible, even if this was not a characteristic the designers programmed or even planned for?
Yes, definitely.

Computers can alter their programming based on data received - or to put it another way, they can learn from observation. Usually this is set up so the computer's operation will remain within certain bounds, because we expect computers to behave predictably, unlike people.

Computers are capable of all the same types of learning and behaviour as humans, including self-awareness, but are much simpler and less sophisticated, so they don't fare as well on complex problems. Then again, it takes decades of training for a human to competently handle the situation you describe.
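
As a minimal sketch of that "learning within bounds" idea (the numbers and limits here are invented):

```python
# Minimal sketch: a learner that adjusts itself from each observation,
# but whose state is clamped to designer-chosen bounds so its behaviour
# stays predictable. All values are invented.

LOWER, UPPER = 0.0, 10.0   # the range the designers consider safe
LEARNING_RATE = 0.2

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

estimate = 5.0  # initial guess
for observation in [6.1, 7.3, 9.8, 40.0, 6.6]:   # 40.0 is an outlier
    estimate += LEARNING_RATE * (observation - estimate)
    estimate = clamp(estimate, LOWER, UPPER)      # stay within bounds
    print(f"observed {observation:4.1f} -> estimate {estimate:.2f}")
```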
 
Originally Posted by yairhol
A computer will do anything it is programmed to do.
You want imagination? Program the algorithm which you think best suits the definition of imagination (it will probably be YOUR imagination), or in other words tell the computer what you want it to make of two perceptions (according to the link you provided) in order to come up with a third one.

But how about coming up with their own, independently?

I don't think that will ever happen. Even when you ask the computer to generate a random number, the result is not really random but subject to certain rules and algorithms.
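
To illustrate: seed a pseudorandom generator the same way twice and it replays exactly the same "random" sequence.

```python
# "Random" numbers from a computer follow deterministic rules:
# the same seed always yields the same sequence.
import random

a = random.Random(42)
b = random.Random(42)

print([a.randint(0, 99) for _ in range(5)])
print([b.randint(0, 99) for _ in range(5)])  # identical to the line above
```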

Regards,
Yair
 
Let's say the military programs a machine to observe an area and identify vehicles by their general shape, engine sound, and speed. The machine makes several thousand observations of hostile, friendly, and civilian vehicles. One day, it draws on these observations to determine that a particular vehicle, despite fitting several characteristics of a hostile vehicle, is instead being operated by friendlies - perhaps because they drive it differently. Could such a thing be possible, or at least plausible, even if this was not a characteristic the designers programmed or even planned for?

Yes. What you have described is a typical example of a classification problem, solved by means of supervised learning - this means that the machine learns to identify vehicles from training data pre-classified as hostile/friendly/civilian, as opposed to the machine making up categories of its own.

In this kind of machine learning, the classifying function is not programmed by the designers; instead, it is inferred by the machine from observed data and characteristics. The process is often implemented by methods such as artificial neural networks, which infer very complicated classifying functions that are not easily analyzed and explained in terms of simple characteristic-decision relationships. The designers usually do not seek to understand why the machine has decided one way or another; their usual concern is how often the decision is correct and how to improve that.
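
To make that concrete, here is a minimal sketch of supervised classification - a simple nearest-neighbour rule rather than a neural network, with invented feature values for (shape, engine_sound, speed):

```python
# Minimal sketch of supervised classification: learn from labelled
# observations, then classify a new vehicle. All features and labels
# are invented; each observation is (shape, engine_sound, speed).
import math

training_data = [
    ((0.9, 0.8, 60.0), "hostile"),
    ((0.8, 0.9, 55.0), "hostile"),
    ((0.2, 0.3, 40.0), "friendly"),
    ((0.3, 0.2, 45.0), "friendly"),
    ((0.5, 0.4, 30.0), "civilian"),
    ((0.4, 0.5, 25.0), "civilian"),
]

def classify(features):
    """1-nearest-neighbour: return the label of the closest example."""
    _, label = min(training_data,
                   key=lambda item: math.dist(item[0], features))
    return label

# A new observation: hostile-looking shape and sound, but driven at a
# speed seen only from friendly vehicles. Because the features are not
# rescaled, speed dominates the distance, so the classifier calls it
# friendly - a decision rule the designers never spelled out explicitly.
print(classify((0.8, 0.8, 44.0)))
```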

In your scenario, it is entirely possible - and it also frequently happens in the real world - that the machine will settle on a classification that is contrary to some "obvious" characteristics, because of other characteristics that are less obvious to a human observer. Sometimes such hidden characteristics will be bogus - for example, the classifier may learn to "lock on" to some spurious artifact of the sample data - and the decision will be erroneous. Sometimes the decision may be right, and the classifier may actually be seeing real patterns that humans do not see.

The problem is that, without independent means of verification, there is no way to tell whether a particular counter-intuitive decision means the machine is being smarter than a human or simply off course. Unless the trained machine has already been proved to outperform human observers in the accuracy of its decisions, it is unlikely that people would rely on its judgement; rather, they would treat the decision as "advisory".
 
Independently of specific pre-programming. Let's say the military programs a machine to observe an area and identify vehicles by their general shape, engine sound, and speed. The machine makes several thousand observations of hostile, friendly, and civilian vehicles. One day, it draws on these observations to determine that a particular vehicle, despite fitting several characteristics of a hostile vehicle, is instead being operated by friendlies - perhaps because they drive it differently. Could such a thing be possible, or at least plausible, even if this was not a characteristic the designers programmed or even planned for?

But that is not imagination. That is simply the computer doing what it is designed to do. It measures the characteristics of vehicles and determines what the vehicle is and who is driving it. If it comes to the conclusion that an apparently hostile vehicle is actually friendly, then that is determined entirely by past experience, not by any kind of guessing or imagination. Imagination would be if it suddenly decided the vehicle was driven by a herd of pink elephants. Although this would probably be cause for maintenance rather than celebrating the birth of AI.

I'm not quite sure what you mean by "not a characteristic the designers programmed". If they did not program it to give a hostile/friendly output, then it will not suddenly decide to do so, and your scenario does not make sense. If they did program for this output, then it is simply doing what it is programmed to do. You seem to be asking about heuristics and genetic algorithms, but these have nothing to do with imagination.
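
For reference, a minimal sketch of the genetic-algorithm idea mentioned here - a stripped-down, single-parent variant with mutation only, and an invented target string:

```python
# Minimal sketch of evolutionary search: candidates are mutated at
# random and kept only if a fitness function scores them no worse.
# Nothing is "imagined"; the rules fully determine the process.
import random

TARGET = "FRIENDLY"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    """Number of characters that already match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    """Replace one randomly chosen character."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

best = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while fitness(best) < len(TARGET):
    child = mutate(best)
    if fitness(child) >= fitness(best):  # selection step
        best = child
    generations += 1

print(f"reached {best!r} after {generations} generations")
```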
 
@OP: You imply that people can imagine. I demand proof for this statement. Also, a proper definition of what imagination means.
 
I don't think that will ever happen. Even when you ask the computer to generate a random number, the result is not really random but subject to certain rules and algorithms.

Regards,
Yair
I knew that, but that doesn't preclude the possibility that it might do something that is not random but is unexpected, does it?

@OP: You imply that people can imagine. I demand proof for this statement. Also, a proper definition of what imagination means.
I linked to the definition that was pertinent to the discussion I was looking for. This definition is describing behaviour that has been observed. I'm not sure what sort of "proof" you think you want beyond that.

But that is not imagination. That is simply the computer doing what it is designed to do. It measures the characteristics of vehicles and determines what the vehicle is and who is driving it.
What if the "who is driving it" wasn't a characteristic the designers intended to program for?

You seem to be asking about heuristics and genetic algorithms, but this has nothing to do with imagination.
I probably am. Thanks, and to you too Pixy and Thabiguy, for pointing me in the right direction.
 
Yes, definitely.

Computers can alter their programming based on data received - or to put it another way, they can learn from observation. Usually this is set up so the computer's operation will remain within certain bounds, because we expect computers to behave predictably, unlike people.

Computers are capable of all the same types of learning and behaviour as humans, including self-awareness, but are much simpler and less sophisticated, so they don't fare as well on complex problems. Then again, it takes decades of training for a human to competently handle the situation you describe.

my bolding


It almost sounds like you are saying "computers are capable of all the same types of learning and behaviour as humans, including self-awareness."

I've bolded some of the words that raise important issues.

If you changed that to: "computers might some day be capable of many of the same kinds of behaviour as humans, without self-awareness", few would disagree.

The problem with the statement as it stands:

1) present tense
2) all?
3) "same types"--you mean functionally? or by same process?
4) self-awareness?

The strong AI people have been saying that various amazing things will be forthcoming soon. They have been saying this for a long time.

We have computers that can play chess, because chess is a closed, logical problem.

We don't have computers that can dependably cross a street. That's a hard problem, but not for people.

We don't have computers that can make good jokes.

We don't have computers that can feel pain, or pleasure, or love.

I'm perfectly willing to concede that such computers might be possible in hundreds (not thousands) of years, but then they won't really be computers anymore.

Look, I'm aware that these are deep issues. You've got people like Dennett on one side, and people like Searle on the other.

Dennett seems to want to do away with consciousness by sleight-of-hand.

Searle doesn't seem to be able to imagine what computers might be capable of in the future.

So I'm not coming down on either side, nor do I feel like I can argue with the big boys.

But I couldn't let what you said stand--at least as read literally.
 
