
Can computers "imagine"?

Self-Awareness is the new Soul

A semi-unmeasurable attribute meant somehow to distinguish man from beast and machine.
 
I get your point.

But, right now, I'm hitting my head against the desk.

It hurts.

I know this.
 
or the old joke about behaviorism:

two behaviorists are having sex. when it's over, one says:

"That was good for you. Was it good for me?"

get it?
 
Do we have computers that make bad ones? I'm not being facetious, here.


These are technically possible now though, right? Negative and positive feedback, and all that?

technically possible? or just conceivable?

G. Edelman has something like a theory of how the brain produces consciousness that involves feedback loops, it's true.
 
It almost sounds like you are saying "computers are capable of all the same types of learning and behaviour as humans, including self-awareness."

Yep. That's exactly right.

1) present tense
Well, the past tense would be confusing.

2) "all"?
All.

3) "same types"--you mean functionally? or by same process?
Functionally.

4) self-awareness?
Absolutely. Self-aware computer systems are the rule rather than the exception.

That self-awareness is rather limited, when compared to human consciousness. But it is very real. Computers can deliver all sorts of information about what they are doing, and why, and what they have done, and what they will do.

The strong AI people have been saying that various amazing things will be forthcoming soon. They have been saying this for a long time.
AI is whatever hasn't been done yet.

We don't have computers that can dependably cross a street. That's a hard problem, but not for people.
Real-world problem. And it's a problem for cats and dogs and gorillas... and dolphins, for that matter.

We don't have computers that can make good jokes.
A limitation they share with most people.

We don't have computers that can feel pain, or pleasure, or love.
Ah. And you can prove that, can you?

I'm perfectly willing to concede that such computers might be possible in hundreds (not thousands) of years, but then they won't really be computers anymore.
I disagree completely. The human brain is nothing but a squishy, unreliable computer.

Look, I'm aware that these are deep issues. You've got people like Dennett on one side, and people like Searle on the other.
Searle is a clown. Unless he has recanted his "Chinese Room" recently?

Dennett seems to want to do away with consciousness by sleight-of-hand.
Dennett seems to be pretty much right. I'm not sure I agree with him entirely, but he's onto something. Consciousness is not magic. It's merely the ability to examine one's own thought processes. We discussed this a while back, and while I don't necessarily agree with Dennett's position that a thermostat is conscious, I figure that a computer that supports all reasonable requirements for consciousness - sense, memory, decision and introspection - could be constructed using fewer than one hundred transistors.

Modern microprocessors commonly exceed one hundred million transistors.

Searle doesn't seem to be able to imagine what computers might be capable of in the future.
Or in 1950, for that matter.

But I couldn't let what you said stand--at least as read literally.
Well, sorry, but I meant exactly what I said.
 
Do we have computers that make bad ones? I'm not being facetious, here.
Oh, sure. Bad jokes are easy. Example
These are technically possible now though, right? Negative and positive feedback, and all that?
Possible and done. Pain and pleasure are, as you say, negative and positive feedback signals. In humans, the psychology of our responses to these signals is complex. In simpler organisms, less so.

You can define pleasure and pain to preclude what computers already do, but that definition would be arbitrary.
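
To make that concrete, here is a rough sketch in Python (every name and number in it is invented for illustration, and it is a toy, not a claim about how minds or any real system work): an agent whose only notion of "pain" and "pleasure" is a signed feedback signal that adjusts what it does next.

```python
# Toy illustration of "pleasure" and "pain" as positive and negative feedback
# signals that adjust future behaviour. All names here are made up.
import random

class FeedbackAgent:
    def __init__(self, actions):
        # Start with no preference among the available actions.
        self.value = {a: 0.0 for a in actions}

    def choose(self):
        # Mostly pick the action with the best learned value, sometimes explore.
        if random.random() < 0.1:
            return random.choice(list(self.value))
        return max(self.value, key=self.value.get)

    def feel(self, action, signal):
        # signal > 0 plays the role of "pleasure", signal < 0 the role of "pain".
        self.value[action] += 0.5 * (signal - self.value[action])

agent = FeedbackAgent(["touch_stove", "eat_food"])
for _ in range(20):
    a = agent.choose()
    agent.feel(a, -1.0 if a == "touch_stove" else +1.0)

print(agent.value)  # the "painful" action ends up avoided, the "pleasant" one preferred
```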
 
Ah. And you can prove that, can you?


Well, sorry, but I meant exactly what I said.

You're consistent.

But, inconsistent person that I am, I imagine I feel pain when my child feels pain. On the other hand, my computer is much more useful to me than my child. I would feel nothing but irritation should someone destroy my computer.

I've got to go for today.

It would be a more interesting conversation if some of you who believe computers are currently capable of anything would admit that it hurts when you stub your toe.

It's not too interesting if you just say that Searle is a clown, and that Dennett is basically right.

Show me some examples of computers you believe currently experience pain or pleasure.

Remember, there are people at either end of the conversation who are acting as if they feel pleasure.

Remember, also, that not everything that is undefinable (or very hard to define) is therefore non-existent.

Also, WETWARE! SQUISHY! YUCKERS! the horror! the horror!

squishy and inconsistent and stupid is nice.

my last words, before i was assimilated, for today.
 
You're consistent.
I try.

But, inconsistent person that I am, I imagine I feel pain when my child feels pain.
That's not an unreasonable assertion. Given that pain is a negative feedback signal, and that injury to your children puts your genetic propagation at risk, it is reasonable that you would feel the same (or similar) signals in that situation.

On the other hand, my computer is much more useful to me than my child. I would feel nothing but irritation should someone destroy my computer.
And how, exactly, is that relevant to the discussion?

It would be a more interesting conversation if some of you who believe computers are currently capable of anything would admit that it hurts when you stub your toe.
Of course it hurts when I stub my toe. And a robot can experience pain when it breaks a wheel.

It's not too interesting if you just say that Searle is a clown, and that Dennett is basically right.
I explained (briefly) why Dennett is basically right: The requirements for consciousness are sense, memory, decision and introspection. You can simplify this further if you wish, but to remove all reasonable objections, I posited a device that has two inputs with multiple states, two memory cells again with multiple states, and the logical ability to compare inputs and memories to each other in any combination and adjust the memory depending on the results. As I said, a hundred transistors suffices.
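
If it helps, here is a toy sketch of such a device in Python rather than transistors (the class and the update rule are my own invention, a free reading of "sense, memory, decision, introspection", not a literal schematic): two inputs, two memory cells, every pairwise comparison, and memory updated from the results.

```python
# A minimal sketch of a "sense, memory, decision, introspection" device.
from itertools import combinations

class TinyIntrospector:
    def __init__(self):
        self.memory = [0, 0]  # two memory cells, each with multiple possible states

    def step(self, input_a, input_b):
        signals = {"in_a": input_a, "in_b": input_b,
                   "mem_0": self.memory[0], "mem_1": self.memory[1]}
        # "Decision": compare every signal (inputs and memories) against every other.
        comparisons = {(x, y): signals[x] == signals[y]
                       for x, y in combinations(signals, 2)}
        # "Introspection": the next state depends on the device's own stored state,
        # not just on the inputs.
        if comparisons[("in_a", "mem_0")]:
            self.memory[1] = input_b  # remember the companion of a familiar input
        self.memory[0] = input_a      # always remember the latest input
        return comparisons

device = TinyIntrospector()
print(device.step(3, 7))
print(device.step(3, 9))  # now in_a matches mem_0, so memory cell 1 gets updated
```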

The reason Searle is a clown is that he tears systems apart looking for the consciousness box, and when he doesn't find it, posits instead that consciousness is magical. This is nonsense, because consciousness is a property of the system he defined, not a component.

Show me some examples of computers you believe currently experience pain or pleasure.
Well, I'm playing Baldur's Gate II right now. My characters scream when they get hit.

How and why is that not pain?

Remember, there are people at either end of the conversation who are acting as if they feel pleasure.
Or profound irritation.

Remember, also, that not everything that is undefinable (or very hard to define) is therefore non-existent.
What is hard to define?

Also, WETWARE! SQUISHY! YUCKERS! the horror! the horror!
Is your brain not squishy? If I club it, do you not ouch?

squishy and inconsistent and stupid is nice.
Squishy is purely physical, and irrelevant. If you like inconsistency and stupidity, well, your life must be an unending sea of bliss.
 
But that is not imagination. That is simply the computer doing what it is designed to do. It measures the characteristics of vehicles and determines what the vehicle is and who is driving it. If it comes to the conclusion that an apparently hostile vehicle is actually friendly, then that is determined entirely by past experience, not by any kind of guessing or imagination. Imagination would be if it suddenly decided the vehicle was driven by a herd of pink elephants. Although this would probably be cause for maintenance rather than celebrating the birth of AI.

But the human brain works the same way. There is nothing in all imagination that wasn't assimilated from past experiences.

If the machine suddenly decided the vehicle was driven by pink elephants, that means it had some experience of elephants, the color pink, etc.

The machine might be in an environment where trees and rocks exist; so it could, in theory, imagine that green rocks were driving some vehicle. The machine's designers would undoubtedly see this as some form of processing error, but it could also very well be simple imagination.

I think a lot of us forget that everything we think or imagine is based entirely on our past experiences; that our brains came as blank as can be, and were programmed over the course of our lifetimes with a vast array of experiences, cross-linked via trial and error.

So if we were to create some vastly complex thinking machine, and gave it a lifetime of experiences and the means to cross-index those experiences in any way it desired, then, yes, it would imagine quite a bit.
 
Independently of specific pre-programming. Let's say the military programs a machine to observe an area and identify vehicles by their general shape, engine sound, and speed. The machine makes several thousand observations of hostile, friendly, and civilian vehicles. One day, it draws on these observations to determine that a particular vehicle, despite fitting several characteristics of a hostile vehicle, is instead being operated by friendlies - perhaps because they drive it differently. Could such a thing be possible, or at least plausible, even if this was not a characteristic the designers programmed or even planned for?

There was a project I read about once along those lines. I can't find a link to it, so I don't know for sure whether the project was real or just a story. At any rate:

A computer was programmed to detect pictures that showed tanks. The idea being to develop a general recognition algorithm, then train using pictures with tanks and pictures without tanks. A human operator would then correct the program's picks, and the program would reanalyse and improve its detection. After a long period of training, it got to the point where it would recognize all of the test pictures correctly.

At this point, they were to demonstrate it and were given a picture that was not one of the samples. The program failed. It couldn't tell if there was a tank in the picture or not.

Now, you might think the program had simply "memorized" the sample set and couldn't tell anything about a new photo. You'd be wrong. It could detect some tanks in other new pictures that were given to it, but not reliably.

It turns out that the sample pictures were all taken such that the tank pictures all showed a stretch of blue sky, and that the pictures without tanks didn't. The difference being (so I remember reading) that the tank pictures were shot in the winter and the non-tank pictures were all shot in the summer.

The program "learned" to distinguish pictures of winter versus pictures of summer rather than pictures with tanks versus pictures without.

A surprising (and unexpected) result, but not imagination.
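
Whether or not the story is apocryphal, the failure mode is real and trivially easy to reproduce in miniature. A sketch, with a made-up dataset and a deliberately dumb "classifier" (nothing here is from the original project):

```python
# Every training photo with a tank also happens to have a bright sky, so the
# cheapest rule that fits the training data keys on the sky, not the tank,
# and falls over on a bright-sky photo with no tank in it.
training = [
    # (has_tank, sky_brightness)
    (True, 0.9), (True, 0.8), (True, 0.85),
    (False, 0.3), (False, 0.2), (False, 0.25),
]

def fit_threshold(data):
    # "Training": find a brightness cutoff that separates the two classes perfectly.
    tanks = [b for has, b in data if has]
    no_tanks = [b for has, b in data if not has]
    return (min(tanks) + max(no_tanks)) / 2

threshold = fit_threshold(training)

def predict(sky_brightness):
    return sky_brightness > threshold  # "tank detected" if the sky is bright enough

print(all(predict(b) == has for has, b in training))  # True: perfect on the training set
print(predict(0.9))  # True -- but this is a bright-sky photo with no tank in it
```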
 
But the human brain works the same way. There is nothing in all imagination that wasn't assimilated from past experiences.

If the machine suddenly decided the vehicle was driven by pink elephants, that means it had some experience of elephants, the color pink, etc.

The machine might be in an environment where trees and rocks exist; so it could, in theory, imagine that green rocks were driving some vehicle. The machine's designers would undoubtedly see this as some form of processing error, but it could also very well be simple imagination.

I think a lot of us forget that everything we think or imagine is based entirely on our past experiences; that our brains came as blank as can be, and were programmed over the course of our lifetimes with a vast array of experiences, cross-linked via trial and error.

So if we were to create some vastly complex thinking machine, and gave it a lifetime of experiences and the means to cross-index those experiences in any way it desired, then, yes, it would imagine quite a bit.

And there's the rub:
A machine "desires" nothing. It hasn't got a desire-a-mabobby. You could feed a bazillion facts into a database, and give it the capability to cross index items. It would never go and do the indexing, though, without being "told" to. If you tell it to do so, you're going to have to provide rules and goal because it won't come up with any on its own.

You could provide such a program with a way to determine its own goals. A "learning" algorithm, so to speak. But, you must still set it an objective of some kind and give it some kind of limits. If you don't, you get a tremendous mess. Just do a join of all tables in a database, without a WHERE clause. That gets you imagination, in spades. The database will combine all of the elements in all of the tables in all ways possible. Won't do you much good, because there's no way of sorting something useful out of the crap. Wouldn't do the machine much good, either. It'd crunch and grind and spit out phracking long lists of gibberish, but it wouldn't be any closer to imagination.
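
The same point can be sketched outside SQL. A few lines of Python with made-up tables (the names and the "relevance" rule are invented for the example) show that the unfiltered combining is the trivial part, and the goal that filters it is the part you have to supply:

```python
# Combining everything with everything is cheap; the hard part is the goal
# that says which combinations are worth keeping.
from itertools import product

animals = ["elephant", "rock", "tree"]
colours = ["pink", "green"]
roles = ["driving the vehicle", "guarding the base"]

# "Join with no WHERE clause": every combination, no goal, no filter.
everything = list(product(colours, animals, roles))
print(len(everything))  # 12 combinations, almost all of them useless

# Add a "goal" (an arbitrary made-up rule standing in for relevance):
useful = [(c, a, r) for c, a, r in everything
          if a != "rock" and r == "driving the vehicle"]
print(useful)  # the filtered handful that might matter
```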


We've got imagination. We have built in goals (food, safety, sleep) and we set ourselves other goals in attaining those primary goals. We have limitations that restrict our data combining - physical limits that prevent carrying out some actions, mental limits on how much information we can process at once.

Unless you provide your program with some sort of goals and limits, it won't "imagine," it'll either do nothing or else spew endless garbage.
 
My laptop has an imagination. Sometimes when I try to get it to work for me it's off in la la land daydreaming via a bunch of useless processes I can't get rid of, eating 99% of the available CPU...
 
And there's the rub:
A machine "desires" nothing. It hasn't got a desire-a-mabobby.
My antivirus-a-mabobby is being rather insistent that I renew my subscription. Soon.

You could feed a bazillion facts into a database, and give it the capability to cross-index items. It would never go and do the indexing, though, without being "told" to. If you tell it to do so, you're going to have to provide rules and goals, because it won't come up with any on its own.
Isn't it possible to create rules and goals unintentionally, especially in complex systems? As I recall quite a lot of "I, Robot" was about just such problems.

We've got imagination. We have built in goals (food, safety, sleep) and we set ourselves other goals in attaining those primary goals. We have limitations that restrict our data combining - physical limits that prevent carrying out some actions, mental limits on how much information we can process at once.
Aren't "mental limits" just another form of a physical limitation?

Replace "food" with "power" and "sleep" with "compiling time"- don't then all the same parameters apply to a machine, especially a complex one?

You could provide such a program with a way to determine its own goals. A "learning" algorithm, so to speak. But, you must still set it an objective of some kind and give it some kind of limits.
Well, it seems to me that the limits you mentioned already apply, so it has inherently "some kind of limit". It is funny that you mentioned physical requirements, because so far the prime motivators for my robotic protagonist's actions have been precisely security, then power.

If you don't, you get a tremendous mess. Just do a join of all tables in a database, without a WHERE clause. That gets you imagination, in spades. The database will combine all of the elements in all of the tables in all ways possible. Won't do you much good, because there's no way of sorting something useful out of the crap. Wouldn't do the machine much good, either. It'd crunch and grind and spit out phracking long lists of gibberish, but it wouldn't be any closer to imagination.
This seems contradictory - "combining all tables gets you imagination, but it isn't useful data, so it isn't imagination"?

And since when was coherence a prerequisite of imagination?
 
And there's the rub:
A machine "desires" nothing. It hasn't got a desire-a-mabobby. You could feed a bazillion facts into a database, and give it the capability to cross index items. It would never go and do the indexing, though, without being "told" to. If you tell it to do so, you're going to have to provide rules and goal because it won't come up with any on its own.

You could provide such a program with a way to determine its own goals. A "learning" algorithm, so to speak. But, you must still set it an objective of some kind and give it some kind of limits. If you don't, you get a tremendous mess. Just do a join of all tables in a database, without a WHERE clause. That gets you imagination, in spades. The database will combine all of the elements in all of the tables in all ways possible. Won't do you much good, because there's no way of sorting something useful out of the crap. Wouldn't do the machine much good, either. It'd crunch and grind and spit out phracking long lists of gibberish, but it wouldn't be any closer to imagination.


We've got imagination. We have built in goals (food, safety, sleep) and we set ourselves other goals in attaining those primary goals. We have limitations that restrict our data combining - physical limits that prevent carrying out some actions, mental limits on how much information we can process at once.

Unless you provide your program with some sort of goals and limits, it won't "imagine," it'll either do nothing or else spew endless garbage.

And you're saying you can't provide a machine with the exact same requirements as a human?

In fact, all I can see that you've said here is that, in order to make a machine with human-like behavior, it needs all the associated human-like behaviors as well.

OF COURSE IT DOES!

Conversely, if you take a newly formed human brain and hook it into some system that keeps it continually fed, protected, and isolated from anything except general inputs (no hunger or threats or anything), it'll do the same thing as your computer above: generate gobbledygook.

If we put a thinking machine together that requires periodic recharging (such as, say, a robot vacuum), then it will set a goal to recharge when it gets low; and subordinate to that, set a goal to come up with an optimal navigation plan to reach its docking station. The ones we have now are simple and linear in nature, of course, but if we give one a complex enough neural-like system of computation and the ability to process inputs and randomly index things, there's no reason why the machine can't, through trial and error, come up with multiple plans to reach its docking station when hungry - plans dealing with obstacles or a relocated station, etc.
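
Something like this, in toy form (none of this is how any real robot vacuum is implemented; the thresholds, routes, and names are invented for the sketch):

```python
# When the battery drops below a limit, the machine sets itself the goal of
# reaching the dock and picks among candidate routes, occasionally trying a
# new one - which is how better plans get discovered when the station moves.
import random

class ToyVacuum:
    def __init__(self):
        self.battery = 1.0
        self.known_routes = {"hallway": 0.3, "under_sofa": 0.5, "kitchen_detour": 0.4}

    def tick(self):
        self.battery -= 0.2
        if self.battery < 0.4:
            return self.plan_recharge()
        return "keep cleaning"

    def plan_recharge(self):
        if random.random() < 0.2:
            route = random.choice(list(self.known_routes))   # explore a new plan
        else:
            route = min(self.known_routes, key=self.known_routes.get)  # cheapest known route
        return f"dock via {route}"

robot = ToyVacuum()
for _ in range(5):
    print(robot.tick())
```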

None of what you mentioned is magical or totally unique to humankind; all of them (except, possibly, sleep) could be and probably will be built into the machines of the future.

As for our imagination, I see absolutely no functional difference between a human's imagination and your table generator above. Humans can imagine all sorts of useless gobbledygook - and often do.

But you are right - humans have needs that must be fulfilled, and can sort out the imagined data combinations to come up with useful solutions to fulfill those needs. However, that doesn't make them unique.

Some machines already have needs, and do what we do, albeit in a very limited fashion.

So I'm not exactly sure what your post is on about...
 
For what it's worth, genetic algorithms already come up with unexpected solutions to problems. I can't seem to find it on the net, but I read an article a few years ago about a simple genetic algorithm application to compute the fastest route from point A in the solar system to point B. The algorithm finally spit out a hugely complex solution involving multiple slingshots around inner planets and a course correction which involved flying between a planet and one of its moons.

The point of the article wasn't just that genetic algorithms work, it was that they work in ways that even the programmers don't anticipate. It's not as simple as, say, using numerical methods to find the best option out of a set of trivial solutions; it actually gives the impression of innovation and lateral thinking.

So it looks to me like the "hard" part of imagination isn't making the actual leaps between apparently unrelated concepts, but rather understanding enough about the concepts and how they do relate. The real difficulty seems to lie in explaining to your genetic algorithm how to determine relative success, and (most of all) what it means to combine two different approaches to the same problem. It turns out that this is actually prohibitively difficult a lot of the time.
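
For anyone who hasn't seen one, the basic loop is only a few lines. A bare-bones sketch with a toy objective and arbitrary parameters - nothing like the trajectory solver in the article, just the shape of the idea:

```python
# Genetic algorithm skeleton: score candidates, keep the better ones,
# combine and mutate them, repeat.
import random

def fitness(x):
    # Made-up objective: the closer x is to 3.7, the better.
    return -(x - 3.7) ** 2

def crossover(a, b):
    # "Combining two approaches" is the genuinely hard part in real problems;
    # here it is trivially an average, because the genome is a single number.
    return (a + b) / 2

def mutate(x):
    return x + random.gauss(0, 0.5)

population = [random.uniform(-10, 10) for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # selection: keep the fittest third
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

print(round(max(population, key=fitness), 2))  # ends up near 3.7
```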
 
Define "imagine".

For my purposes the portion of the Wiki page to which I linked suffices:
When two existing perceptions are combined within the mind, the resultant third perception, referred to as its synthesis (and on occasion a fourth, called the antithesis), which at that point only exists as part of the imagination, can often become the inspiration for a new invention or technique.

Obviously, I'm less than thrilled with the "within the mind" bit, but it's Wiki.
 
