Did Godel disprove the idea of artificial intelligence?

Part of what gives people (most people) their self awareness is the physical form they are in. Everyone can perceive that they are distinct entities, separate from everything else.

How exactly does a machine come to see itself as an independent "thing"?
 
Interesting Ian said:
I told you. People just see the truth. There is no process involved. They just know by an immediate intuitive understanding. Maybe their minds simply see the Platonic forms.
Intuition can be wrong so it does not qualify as a method of knowing the truth.

But you are still not seeing the nature of the objection.

Lucas' entire argument critically hinges on this premise that a human can know something that a machine can't. And it comes down to a bald unsupported statement - "People just see truth".

Do you not think it is possible for someone to "just know" something and be wrong? Didn't Aristotle just know that the natural state of a body was at rest? Didn't a lot of people just know that the sun went round the earth? What about someone who says "I just know I will win the lottery this time"?

How do you distinguish between someone who believes that they see the truth and are wrong, and somebody who believes that they see the truth and do see the truth?

Lucas uses the example of someone capable of understanding Godel's theorems and using them to come to a logical conclusion:
We now construct a Gödelian formula in this formal system. This formula cannot be proved-in-the-system. Therefore the machine cannot produce the corresponding formula as being true. But we can see that the Gödelian formula is true: any rational being could follow Gödel's argument, and convince himself that the Gödelian formula, although unprovable-in-the-system, was nonetheless (in fact, for that very reason) true.
Note he says "... follow Godel's argument..." ie follow some logically valid process. Then your own admission:
I've read 2 of his articles. He explicitly states he does not mean this. After all, if it were a process, then an algorithm could simulate it. Which indeed you proceed to point out. Do you really think that Lucas is so dim as to not understand this?
So I have shown you where he explicitly states that the human knows the truth of the formula through a logically valid process. You have not shown me where he states anything else, explicitly or otherwise. So by your own argument above, Lucas' contention is refuted.
 
jay gw said:
Part of what gives people (most people) their self awareness is the physical form they are in. Everyone can perceive that they are distinct entities, separate from everything else.

How exactly does a machine come to see itself as an independent "thing"?
Easy - all our sense organs have artificial equivalents - cameras, microphones, machines for detecting odors, flavours - you could easily mimic touch. Then these are sent as signals through to the computer and it can work out the nature of reality as well as any human can.

But even if you could do all that and have an artificial intelligence that convinced people that it was conscious - if it was algorithmic it would not be conscious.

Suppose you take the machine code of your artificially intelligent machine and print it out, then get someone to run the algorithm manually - as you can with any algorithm. The results would be the same (albeit vastly slowed down). In this case can anybody still assert that a consciousness exists?

If not then how could you assert the same when the algorithm is run on some machine?
 
Robin said:
Easy - all our sense organs have artificial equivalents - cameras, microphones, machines for detecting odors, flavours - you could easily mimic touch. Then these are sent as signals through to the computer and it can work out the nature of reality as well as any human can.

Actually, human sensory perception is really non-trivial to emulate. One big advantage of the human brain is that it blocks out tons of unnecessary input that computers can't weed out so easily, even with fancy algorithms. Working out the nature of reality as well as any human can is not for today...
 
Jorghnassen said:
Actually, human sensory perception is really non-trivial to emulate. One big advantage of the human brain is that it blocks out tons of unnecessary input that computers can't weed out so easily, even with fancy algorithms. Working out the nature of reality as well as any human can is not for today...

But the question I was answering did not say "as well as any human" did it? An artificially intelligent machine would not need to process information in exactly the same way as a human to get self-awareness.

A computer can do a great deal of what a human mind does with sense data and what it does is probably enough to become aware of itself.
 
A computer can do a great deal of what a human mind does with sense data and what it does is probably enough to become aware of itself.

It's possible that given the power of computers in the future, the amount of data coming in wouldn't be a problem.

AI has one definite advantage over humans - the lack of emotions to 'confuse' or distort its judgment! Emotions may work to help thinking, but they can also really create false ideas/perceptions.

A machine will never invent 'god' to explain anything!
 
jay gw said:
It's possible that given the power of computers in the future, the amount of data coming in wouldn't be a problem.

AI has one definite advantage over humans - the lack of emotions to 'confuse' or distort its judgment! Emotions may work to help thinking, but they can also really create false ideas/perceptions.

A machine will never invent 'god' to explain anything!

An artificially intelligent machine might well have emotions, especially if it works using processes similar to those in animal brains. And it might well decide to believe in God!
 
jay gw said:
It's possible that given the power of computers in the future, the amount of data coming in wouldn't be a problem.

AI has one definite advantage over humans - the lack of emotions to 'confuse' or distort its judgment! Emotions may work to help thinking, but they can also really create false ideas/perceptions.

A machine will never invent 'god' to explain anything!

If emotions and religion weren't useful in some way, they wouldn't exist (one can argue the former is much more useful than the latter, but anyway). Now who is to say emotions aren't required for self-awareness?
 
jay gw said:

A machine will never invent 'god' to explain anything!

Unless it starts to think about where it came from, where its designers came from, or what the ideal computer is. ;)
 
A machine will never invent 'god' to explain anything!

In fact, if you think about it, an artificial intelligence will be able to see its gods (gods, plural, given that there will probably have been a design team) and have a chat with them. The AI may well then ask, "so tell me, who designed and built you guys?"
 
Robin said:
Intuition can be wrong so it does not qualify as a method of knowing the truth.

But you are still not seeing the nature of the objection.

Lucas' entire argument critically hinges on this premise that a human can know something that a machine can't.

This is true. Obviously if people deny this, then the Goedelian argument fails. Argue about this with Lucas then. He says that there is something we can see to be true which a machine cannot, i.e. a Goedelian sentence. Why don't you and the other people on here email him and argue it out? I don't know if this is true or not. I simply say that any other argument that people have come up with is irrelevant. They have either not read Lucas and my contributions to this thread, or they have not understood them.

But as I say, if you simply baldly assert that there are no Goedelian sentences which computers cannot see to be true, or that insofar as they truly cannot see that some sentence is true, neither can we (and everyone agrees with this apart from Lucas, Penrose, and almost certainly Goedel), then what is the purpose of this thread?


And it comes down to a bald unsupported statement - "People just see truth".

Yes of course they can!

Do you not think it is possible for someone to "just know" something and be wrong?

They didn't know it then did they?? :rolleyes:

Didn't Aristotle just know that the natural state of a body was at rest? Didn't a lot of people just know that the sun went round the earth? What about someone who says "I just know I will win the lottery this time"?

Yeah, mystical experiences are like when some fool says he knows he will win the lottery. :rolleyes:

Look, I've had enough of the inane arguments on this thread. I'm going.

Note he says "... follow Godel's argument..." ie follow some logically valid process. Then your own admission:

So I have shown you where he explicitly states that the human knows the truth of the formula through a logically valid process. You have not shown me where he states anything else, explicitly or otherwise. So by your own argument above, Lucas' contention is refuted.

No! "Follow" does *not* necessarily mean a logically valid process. He simply means that the commuincator has effectively communicated what he wanted to convey!

That's it. I've had enough.
 
Interesting Ian said:
This is true. Obviously if people deny this, then the Goedelian argument fails. Argue about this with Lucas then. He says that there is something we can see to be true which a machine cannot i.e a Goedelian sentence.
A Goedelian sentence is one which is so constructed as to be true if and only if the machine in question cannot prove it. Or, less precisely but perhaps more understandably, it is one which says, "the machine cannot prove me." Therefore, if the machine can "prove" it, it is necessarily false. And so, if we assume that the machine cannot prove any false statements, the Goedelian sentence is necessarily unprovable by the machine and therefore true.
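The construction just described can be put in symbols. A sketch, taking F to be the machine's formal system and Prov_F its provability predicate (standard notation, not anything specific to Lucas's papers):

```latex
% The diagonal lemma yields a sentence G_F such that
G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)
% If F is consistent, F cannot prove G_F, and so G_F is true.
% Conversely, if F proved G_F, then G_F would be false,
% meaning F had proved a falsehood, i.e. F is inconsistent.
```

Note that the truth of G_F is conditional on the consistency of F, which is exactly the loophole exploited in the next two paragraphs.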

Lucas is trying to demonstrate that he is different from the machine. Naturally, in the course of his demonstration, he may not assume that he is different from it; that would be circular reasoning. But if he allows for the possibility that he is identical to it, how can he correctly claim to see unconditionally that the Goedelian sentence is true? If he "sees" that it's true, and he happens to be identical to the machine, then the machine also can "see" that it's true. But that would make it false!

If he does not wish to "see" anything as true which might turn out actually to be false, and if he does not simply assume that he is different from the machine, then he cannot claim to see that the Goedelian sentence is true. But then his entire argument falls apart.
 
If emotions and religion weren't useful in some way, they wouldn't exist (one can argue the former is much more useful than the latter, but anyway). Now who is to say emotions aren't required for self-awareness?

But emotions can't be given to machines.

The idea of god may hit the AI at a certain point, but it's my opinion that culture is responsible for 90 percent of religious beliefs.

That being the case, machines don't have a church or a god that looks like C-3PO.
 
jay gw said:
But emotions can't be given to machines.

So you assert. I have yet to see a demonstration one way or another.

And I have seen a lot of emulations of emotions in an effort to get realistic "human-like" behavior. Oddly enough, the usual effect of these emulations is to make the machine's behavior less intelligent, by an objective criterion, than before. Which, of course, fits in with our observation of humanity -- when in the grips of strong emotions, people make mistakes they otherwise wouldn't.
 
Interesting Ian
But as I say, if you simply baldly assert that there are no Goedelian sentences which computers cannot see to be true, or that insofar as they truly cannot see that some sentence is true, neither can we (and everyone agrees with this apart from Lucas, Penrose, and almost certainly Goedel), then what is the purpose of this thread?

But I didn't baldly assert it, did I? I made a logical refutation of Lucas' argument, which you agreed with. You just didn't agree with my understanding of his argument.

"Follow" does *not* necessarily mean a logically valid process
But "argument" does. Particularly "Godel's argument". If Godel had only clearly communicated his ideas then nobody would remember him.

I don't think you can co-opt Godel into the debate - he was a convinced dualist, yet apart from a single statement he did not use his theorems to support the notion. He probably realised the futility.
 
