
Did Godel disprove the idea of artificial intelligence?

Originally posted by MESchlum
I stated that (due to Godel among others) a "being that knows all mathematical truths" is not possible.

The impossibility is, I will grant, my opinion. Even if such a thing exists, I am quite certain that it would not be anything close to human (probably a lot more like a machine, with infinite memory, and the dogged persistence to run down each and every implication of each and every outcome).
A Turing machine has infinite memory.

And an equal amount of persistence. :D

But that's still not good enough.

Now if it were infinitely fast, too, then we'd be getting somewhere. But that is a much less realistic assumption than infinite memory. Because it would actually have to use its infinite speed. On the other hand, for a standard finite-speed machine, any computation that terminates uses a finite amount of memory; the only reason the machine needs an infinite amount is to be absolutely sure that it won't run out of memory in the middle, because it doesn't know beforehand how much it will need.
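To make that concrete, here is a minimal sketch (in Python, with a made-up three-state machine as the example) of a Turing machine whose tape is allocated lazily: the tape is conceptually infinite, but any run that halts will only ever have touched finitely many cells.

[code]
from collections import defaultdict

def run_tm(transitions, accept_states, max_steps=10_000):
    """Simulate a Turing machine with a conceptually infinite tape.

    transitions maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right). The tape is a defaultdict,
    so cells spring into existence (as blanks) only when visited:
    a halting run therefore touches only finitely many cells.
    """
    tape = defaultdict(lambda: '_')   # '_' is the blank symbol
    state, head = 'start', 0
    for _ in range(max_steps):
        if state in accept_states:
            return state, dict(tape)  # finitely many cells were ever used
        state, tape[head], move = transitions[(state, tape[head])]
        head += move
    raise RuntimeError("no halt within the step budget")

# Example: a toy machine that writes three 1s and halts.
trans = {
    ('start', '_'): ('two',   '1', +1),
    ('two',   '_'): ('three', '1', +1),
    ('three', '_'): ('done',  '1', +1),
}
print(run_tm(trans, accept_states={'done'}))
[/code]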

I wouldn't say that a "being that knows all mathematical truths" is impossible, exactly. But even if it were standing in front of us, spouting mathematical truths, we'd have no way to check that they really were truths, unless they were among the truths that we could have proved for ourselves to begin with. So what good is the being, after all? We have no way to verify that it is what it claims to be.
 
Interesting Ian
Huh?? But here you are denying that the human mind is simply the execution of algorithms! If the human mind amounts to no more than the execution of algorithms, then it proceeds via logic. So how come you are saying the human mind is not a system of logic?
Maybe because that is the position I have maintained in this forum for a number of months now.
Of course there are several definitions of algorithm, but at least in the TM sense an algorithm is:
1. a finite number of exact instructions (each instruction being expressed by means of a finite number of symbols);
2. something that, if carried out without error, will produce the desired result in a finite number of steps.
By which definition it would be ludicrous to describe any mind as an algorithm. Neither is the mind a system of logic (which is something different from an algorithm).
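For contrast, here is the canonical sort of thing that does satisfy that definition: Euclid's algorithm, a finite list of exact instructions guaranteed to deliver the result in finitely many steps (a minimal sketch in Python).

[code]
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite set of exact instructions that,
    carried out without error, yields the answer in finitely many
    steps (the remainder strictly decreases, so the loop terminates)."""
    while b != 0:
        a, b = b, a % b
    return a

assert gcd(1071, 462) == 21
[/code]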

Clearly the human mind is capable of devising and running algorithms and similarly of devising and using systems of logic. It may even be possible to model the human mind using an algorithm. But it is not an algorithm.

Not every physical deterministic system is an algorithm. If a system is not algorithmic, you might still be able to understand it. But if anybody is trying to find the instruction set for the mind, they are barking up the wrong tree.

In terms of consciousness, I do not reject the idea that an artificial machine could be conscious - I don't know one way or the other. But I reject the idea that it is conscious just because it appears to be conscious.

The crude argument is that if I have a computer simulation of a washing machine, no matter how well it simulates the physics involved it will never get my washing clean. A computer model of a mind is no more a mind than the computer simulation of a washing machine is a washing machine.

I've read 2 of his articles. He explicitly states he does not mean this. After all, if it were a process, then an algorithm could simulate it. Which indeed you proceed to point out. Do you really think that Lucas is so dim as to not understand this?

Well then how do we know it? If he can't state this then how does he know that we know? And precisely what is it that we can know that we can't logically prove? Can you at least point me to the other article?

Godel's propositions, after all, have logically valid proofs. I don't think that Lucas is dim but he has left a great deal unsaid. Smart people are capable of saying stupid things. Kepler, for example, spent part of his life with rigorous science and part with nonsense (to give him credit he abandoned his weird theories when the evidence mounted against them).
 
Re: Did Godel disprove the idea of artificial intelligence?

Originally posted by Robin
(Anybody who is about to jump in with the "Church-Turing thesis", please note that Church and Turing did not claim that a Turing machine can do anything that any physical system can)
Do you mean, like washing clothes? Or do you mean, even information-processing-wise (i.e., tasks for which a simulation is as good as the real thing)?
 
Originally posted by Jorghnassen
It takes more than a map to know how the brain works (besides, brain mapping has its own problems), and to build an artificial brain. Can an omelet be mapped? Can number theory make an artificial omelet (or rather, how does one use number theory to make an artificial omelet)? You see, there is a huge difference between description (math can be used to describe reality, but in a limited way) and implementation.

/stealing and misusing material from a lecture on A.I. I once attended...
It depends what you're trying to implement. There is not a huge difference between a computer program and a description of a computer program. A precise description of a computer program is a perfectly good computer program itself, albeit in a different programming language.

Of course, a physical computer is needed to run programs. But we already have physical computers.

If you already have a kitchen and the right ingredients, all you need to make an omelet is the recipe.
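A toy illustration of the point (a made-up mini-language, with Python as the host): a sufficiently precise "description" of a computation, written as plain data, is itself executable once an interpreter for it exists; the description is a program in another language.

[code]
# A "description" of a computation, as plain data. Each step names an
# operation and its arguments; this is a program in a made-up language.
recipe = [
    ("set", "x", 6),
    ("set", "y", 7),
    ("mul", "z", "x", "y"),   # z = x * y
    ("show", "z"),
]

def interpret(program):
    """A precise description is a perfectly good program: this
    interpreter turns the data above into an actual computation."""
    env = {}
    for op, *args in program:
        if op == "set":
            name, value = args
            env[name] = value
        elif op == "mul":
            dest, a, b = args
            env[dest] = env[a] * env[b]
        elif op == "show":
            print(env[args[0]])

interpret(recipe)   # prints 42
[/code]

The recipe is data in one language and a program in another; the interpreter is what cashes out the equivalence.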
 
Originally posted by Robin
Not every physical deterministic system is an algorithm.
Is any?

I do not think I understand what you mean by "algorithm". Or, possibly, what you mean by "is".
Godel's propositions, after all, have logically valid proofs.
Apparently. And yet, if a computer applied the very same proof to the axiomatic system underlying its own operation, it would end up "proving" something that's false!

And if it were firmly convinced (so to speak) that the proof was logically valid---as we are convinced when we carry out the proof---it would thereby convince itself that it wasn't a Turing machine either, although of course it is one.

So how do we know we aren't one too?

Tricky business, this whole Goedel thing ... :D
 
69dodge said:

Of course, a physical computer is needed to run programs. But we already have physical computers.

If you already have a kitchen and the right ingredients, all you need to make an omelet is the recipe.

My point was that, to make an A.I. that is equivalent to a human brain (essentially rewording the poll question), the "right ingredients" might have to be exactly the same "ingredients" as the ones for a real human brain. Computers as we know them today (or more advanced versions of the same kind of building blocks) will not be able to think as humans do. You can't make an omelet out of Cadbury eggs, even with the right recipe...
 
69dodge
Do you mean, like washing clothes? Or do you mean, even information-processing-wise (i.e., tasks for which a simulation is as good as the real thing)?
You are jumping back and forth a bit with my posts here, but it has not even been proved that the TM is capable of every information-processing task.
Godel's propositions, after all, have logically valid proofs.

Apparently. And yet, if a computer applied the very same proof to the axiomatic system underlying its own operation, it would end up "proving" something that's false!
But there is no axiomatic system underlying its operation. A computer programming language (or instruction set) is not an axiomatic system of the type considered by Godel. This is one major flaw in Lucas' argument.
And if it were firmly convinced (so to speak) that the proof was logically valid---as we are convinced when we carry out the proof---it would thereby convince itself that it wasn't a Turing machine either, although of course it is one.
Firstly, the computer would not necessarily operate by the principles of a Turing Machine and secondly even if it does, Godel's theorem does not apply to a Turing Machine.

So my main point still stands - Lucas is saying that there is at least one thing that we can know, but which a given machine cannot know. How exactly does he say that we know this thing, if not through a valid logical process?
 
My point was that, to make an A.I. that is equivalent to a human brain (essentially rewording the poll question), the "right ingredients" might have to be exactly the same "ingredients" as the ones for a real human brain.

/agree Jorghnassen

Much like a quantum state machine. To be a complete model of the universe, the machine would have to be a carbon copy of the universe it is modeling, down to the last photon. We can mimic certain behaviors, but to be 100% accurate means to be all inclusive. So on the topic question, the answer is no, we cannot have AI that "thinks" like a human. Even identical twins don't have identical thoughts and reactions all of the time, and they are biological machines, not mimics.

That does not mean, however, that a good computer model on a fast machine could not pass the Turing test. In fact certain "expert systems" do as well as any human, with the understanding that they are generally confined to a single discipline, e.g. Deep Blue, medical diagnosis programs, etc.
 
TillEulenspiegel,
So on the topic question, the answer is no, we cannot have AI that "thinks" like a human. Even identical twins don't have identical thoughts and reactions all of the time, and they are biological machines, not mimics.

And what do you mean by that? That one of the twins does not think like a human???
You are contradicting yourself. That is precisely the reason you should seriously consider the possibility of machines thinking. No brain is equal to any other, and for a start you don't know how much a brain can differ before it stops producing human intelligence. How about substituting all neurones with electrical equivalents, for example?
 
BTW, there is an interesting page that makes a good argument against Penrose's position, and gives a much more credible analysis of the consequences of Godel's theorem.

http://psyche.cs.monash.edu.au/v2/psyche-2-04-mccullough.html

Here is the author's conclusion:
8.1 Penrose's arguments that our reasoning can't be formalized is in some sense correct. There is no way to formalize our own reasoning and be absolutely certain that the resulting theory is sound and consistent. However, this turns out not to be a limitation on what computers or formal systems can accomplish relative to humans. Instead, it is an intrinsic limitation in our abilities to reason about our own reasoning process. To the extent that we understand our own reasoning, we can't be certain that it is sound, and to the extent that we know we are sound, we don't understand our reasoning well enough to formalize it. This limitation is not due to lack of intelligence on our part, but is inherent in any reasoning system that is capable of reasoning about itself.
 
Peskanov said:
TillEulenspiegel,


And what do you mean by that? That one of the twins does not think like a human???
You are contradicting yourself. That is precisely the reason you should seriously consider the possibility of machines thinking. No brain is equal to any other, and for a start you don't know how much a brain can differ before it stops producing human intelligence. How about substituting all neurones with electrical equivalents, for example?

Perhaps I wasn't clear.

The acme, the idealized form of a thinking human, is a thinking human. When we look for an example of standardized "sameness", it would be (IMO) a set of identical twins who had the same biological attributes, upbringing, environment, etc.

The fact that they still show divergence is an example of the problems of modeling "human" thinking: a demonstration of the fuzziness or gray areas of the hidden processes that make a human intelligence what it is, a sort of deus ex machina.
If it's not possible to standardize behavioral and thought processes in the closest example we can have (now) of the mechanism of thinking, how can we think that we could manufacture a successful counterfeit?

As I stated "Expert Programs do some things better then humans, but they do one thing only while You can argue with Your spouse on your cell phone,(Reasoning and speech skills, learned manipulation of a technical gadget) while driving, ( motor activity ) ,around a curve ( trig on the fly) and eating a doughnut ( -8- (l) um mm doughnut. It is definitely not a simplistic , linear process that some people here try to equate it with I.E.an "algorithm". We can't even quntify the basics or define the concepts, how could we possibly construct an effective model?


Good read here: http://www.abc.net.au/science/bigquestions/s460741.htm
 
The fact that they still show divergence is an example of the problems of modeling "human" thinking: a demonstration of the fuzziness or gray areas of the hidden processes that make a human intelligence what it is, a sort of deus ex machina.
If it's not possible to standardize behavioral and thought processes in the closest example we can have (now) of the mechanism of thinking, how can we think that we could manufacture a successful counterfeit?
How does a twin reckon his brother is intelligent? Interaction, plus physical resemblance (he also looks human), makes it reasonable to think so.
The Turing test only addresses the first step, interaction. If the machine talks with you like a human, you have half of the picture.
How can you complete the picture and know the machine thinks like you? Only if you can identify its physical processes with yours, i.e. if the computer is modelled on a human brain (e.g. neural nets arranged in a human-brain-like configuration).
If the machine were designed like a human brain, and behaved like one, my intuition would tell me the machine is intelligent.

As I stated "Expert Programs do some things better then humans, but they do one thing only while You can...
But that's a misconception of yours. The quantity of tasks (expert systems or not) a computer can carry out simultaneously is unbounded. We can go into practical details if you want, but you should know that, theoretically, the number of decisions a computer can make at the same time has no limit.
Usual expert systems are strictly interactive (they only "think" when requested) and one-user-only. Others are multiuser: they share their time between several questions posed by remote users. But those are implementation details; they are not designed to develop the kind of intelligence humans have. Other AI systems are more complex and show the attributes of decision and volition.

A good analogy for this question could be weather modelling. Modern weather modelling systems simulate the whole atmosphere, a highly interactive system in which everything happens "all at the same time": every portion of air exchanges properties with the neighbouring portions.
For practical purposes, parallel computing is used: hundreds of computers share the problem, each taking a piece of it and solving it locally.
However, the same computation can be solved by just one computer, much more slowly.

The point is that any task solved by many computers or Turing machines can also be solved, more slowly, by just one. And that's the reason philosophers don't have to deal with problems like the one you present: answering the general question is enough.
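A minimal sketch of that claim (Python generators standing in for the parallel nodes): one sequential machine can interleave arbitrarily many tasks in a round-robin and reach the same results a parallel machine would, only more slowly.

[code]
def worker(name, n):
    """One 'node' of the parallel computation: sums 1..n in small slices."""
    total = 0
    for i in range(1, n + 1):
        total += i
        yield  # hand control back to the scheduler after each step
    print(name, "->", total)

# Round-robin scheduler: a single sequential machine stepping many
# tasks in turn. Same answers as running them in parallel, just slower.
tasks = [worker("A", 5), worker("B", 7), worker("C", 3)]
while tasks:
    task = tasks.pop(0)
    try:
        next(task)
        tasks.append(task)   # not finished: back of the queue
    except StopIteration:
        pass                 # finished: drop it
[/code]

This is just time-slicing, the same trick a single-CPU operating system uses to run many programs at once.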
 
Me:
Godel's theorem does not apply to a Turing Machine
I must amend this. I was using the fact that there is no concept of 'provable' within an algorithm. Moreover there is no concept of true and false other than values defined by the programmer.

But Penrose has recast Godel's theorem using the concept of 'halting'. So while the Lucas argument merely assumes that a TM is such a system, Penrose explains why it might be.

It does not affect the main objections to Lucas' argument as put here.
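For readers who have not seen the halting version, here is the standard diagonal argument sketched as Python pseudocode (the function would_halt is hypothetical; the whole point of the argument is that it cannot exist):

[code]
# Suppose, for contradiction, that some program could decide halting:
def would_halt(program_source: str, input_data: str) -> bool:
    """Hypothetical: True iff program_source halts on input_data.
    No such total, always-correct function can exist."""
    ...

# The diagonal trick: feed a program its own source and do the
# opposite of whatever the supposed decider predicts.
def contrary(program_source: str) -> None:
    if would_halt(program_source, program_source):
        while True:   # predicted to halt? then loop forever
            pass
    # predicted to loop forever? then halt immediately

# Running contrary on its own source makes would_halt wrong either
# way: if it answers "halts", contrary loops; if "loops", it halts.
[/code]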
 
Peskanov said:
How does a twin reckon his brother is intelligent? Interaction, plus physical resemblance (he also looks human), makes it reasonable to think so.
The Turing test only addresses the first step, interaction. If the machine talks with you like a human, you have half of the picture.
How can you complete the picture and know the machine thinks like you? Only if you can identify its physical processes with yours, i.e. if the computer is modelled on a human brain (e.g. neural nets arranged in a human-brain-like configuration).
If the machine were designed like a human brain, and behaved like one, my intuition would tell me the machine is intelligent.


But that's a misconception of yours. The quantity of tasks (expert systems or not) a computer can carry out simultaneously is unbounded. We can go into practical details if you want, but you should know that, theoretically, the number of decisions a computer can make at the same time has no limit.
Usual expert systems are strictly interactive (they only "think" when requested) and one-user-only. Others are multiuser: they share their time between several questions posed by remote users. But those are implementation details; they are not designed to develop the kind of intelligence humans have. Other AI systems are more complex and show the attributes of decision and volition.

A good analogy for this question could be weather modelling. Modern weather modelling systems simulate the whole atmosphere, a highly interactive system in which everything happens "all at the same time": every portion of air exchanges properties with the neighbouring portions.
For practical purposes, parallel computing is used: hundreds of computers share the problem, each taking a piece of it and solving it locally.
However, the same computation can be solved by just one computer, much more slowly.

The point is that any task solved by many computers or Turing machines can also be solved, more slowly, by just one. And that's the reason philosophers don't have to deal with problems like the one you present: answering the general question is enough.

A few things. All computers are finite machines and can only deal with finitely many tasks at one point in time. Second, massive parallelism is no indication of intelligence, neither is crunching numbers. It's not too hard to devise algorithms that involve doing some straightforward decision making (like playing chess). But some very easy pattern recognition in the presence of interference suddenly isn't so trivial, even when using very "smart" algorithms. And that's not even going into less defined areas such as "creativity" and "imagination".

Now if one could make a machine, that didn't know anything at first and just had sensor devices to see, hear, touch, smell and taste, and some motor devices to move, and some sound emitting device, and that machine then learned to interact with one lifeform, be it cockroaches, squirrels, parrots or humans, now that would be truly artificial intelligence. Until then, it's just a fancy abacus.
 
A few things. All computers are finite machines and can only deal with finitely many tasks at one point in time.
Wrong; read again, I said a theoretical computer. A physical computer is only limited by the quantity of matter and energy available in the universe to build it. There is no theoretical limit to the quantity of nodes a parallel computer can have. In practical terms, the size and power of supercomputers grows every year, limited only by budgets.
Mind that the current model of the brain, used by neuroscientists, is a giant parallel computer in which the neurons are the computational units.
Recently, supercomputers reached 100 teraflops, which compares well with some estimates of the computational power of the brain.
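For what it's worth, one common back-of-envelope version of that estimate runs as follows (the figures are rough assumptions of the kind found in the literature, not measurements):

[code]
# Rough assumed figures, of the kind used in such estimates:
neurons = 1e11              # ~100 billion neurons
synapses_per_neuron = 1e3   # low-end count of connections per neuron
avg_firing_rate_hz = 1.0    # average spikes per second per neuron

ops_per_second = neurons * synapses_per_neuron * avg_firing_rate_hz
print(f"{ops_per_second:.0e} synaptic events/s")  # 1e+14, ~100 tera-ops
[/code]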
Second, massive parallelism is no indication of intelligence, neither is crunching numbers.
Nobody said so. I only argue that it's necessary to replicate what brains do on similar timescales.
Aside from speed, huge memory capacity is a necessity. You can calculate your way one bit at a time if needed, but the data must be there, and in the case of human intelligence that is a big quantity of data.
It's not too hard to devise algorithms that involve doing some straightforward decision making (like playing chess). Some very easy pattern recognition in the presence of interference suddenly isn't so trivial, even when using very "smart" algorithms. And that's not even going into less defined areas such as "creativity" and "imagination".
So? Nobody here is offering reasons against the possible systematization of those properties of the brain (aside from intuitive arguments). Ian says that the default position should be "impossible", but his reasons seem weak to me. Neuroscientists say the most sensible position now, with all the information about the brain available, should be "possible"; and nearly all of them think "sure, it's just another process of the brain, like visual recognition".
Now if one could make a machine, that didn't know anything at first and just had sensor devices to see, hear, touch, smell and taste, and some motor devices to move, and some sound emitting device, and that machine then learned to interact with one lifeform, be it cockroaches, squirrels, parrots or humans, now that would be truly artificial intelligence. Until then, it's just a fancy abacus.
I don't get it. This kind of thing has been done several times, with bees, fish, and even big mammals like cows. Most animals have a very low capability of interaction and it's easy to fool them; that does not prove anything...
 
TillEulenspiegel, about your link:
http://www.abc.net.au/science/bigquestions/s460741.htm
After reading the article, I can only say it's a nice example of this disaster area called "theory of the mind". They totally fail to explain why human intelligence is not / cannot be the fruit of a deterministic system (a.k.a. a machine). Take a look at the following exchange:
Now, how can thoughts do that? How can the desire ‘I would like to raise my arm’ be turned into the physical activity of the arm moving? Well, we can trace back a sort of chain of command, can’t we? We know that there are nerve impulses in my arm that cause the muscles to contract, and these nerve impulses have travelled down my nerve fibres from my brain, so the signals originate in electrical activity in my brain. But what is it that just triggers all that, that chain of command? What starts those electric currents off in the first place? How is it that a thought can be translated into electrons moving down nerves, and so on?
An incredibly fuzzy question. Why should a thought "be translated" into electrons if you don't even know the nature of thought? How do you know it is not already composed of electrons?
Thoughts can’t move electrons or arms or whatever, because there are no thoughts – at least, there are no thoughts that are things with physical efficacy; there are only electrons and other matter frolicking about in accordance with physical laws.
Same here. I find it funny, because in fact most idealists and dualists, who defend a non-material nature of the mind (as this article does), will argue that thoughts can, in fact, move electrons or arms.
Phillip: Let’s get back to the question that you raised earlier. What role does consciousness play? What is its advantage?

Paul: This is the mystery if we try to define it away, for then why do we possess it (or imagine we possess it) at all? In other words, if a cleverly programmed automaton, or a zombie, that had evolved to perform a lot of complicated functions, can get by in the world without being conscious of its own existence, what is the purpose of us having this consciousness or this self-consciousness, this self-awareness? It does seem to be a mystery if it doesn’t fulfil any useful role in nature. So I think we have to take consciousness seriously, in spite of the fact that many scientists would like to do away with it.
Here they take the stance that consciousness is awareness of our functions, but not the functions themselves, which could be produced by a machine.
But then:
Let us accept that consciousness is real. Can you deduce some practical evolutionary purpose for our degree of it?

Paul: It’s easy to use consciousness-type language to see an advantage. For example, it is clearly advantageous to be able to predict the future to a limited extent – to plan
Nice one; after all, consciousness can perform functions: e.g. guessing the future.
This function can be, and is, performed by machines. So? Next:
You know, it’s a very curious thing about the self, that it is a paradoxical mixture of something which is unchanged with time and something that changes with time. If you ask, ‘Are you the same person you were at the age of ten?’ well, in one sense you are; there’s a continuity of memory, certain personality traits remain unchanged, and so on. On the other hand, you are clearly not exactly the same person. Not only has your body changed but your mind has changed as well. So there is something that we like to call the ‘self’ which is preserved intact through time, and yet something in there is changing, too. So I don’t think we are ever going to understand what we mean by the self without understanding the psychology of temporality and the puzzle of the sensation of the flux of time
This property is shared by all complex systems, from weather to economic markets to computer programs, and nobody has problems identifying those as entities. No mystery here. Next:
Paul: I’ve made it very clear that I think that consciousness is something associated with complexity, and therefore that I wouldn’t expect to find a rock to be conscious, or for that matter a star or a planet. Consciousness seems to be something that emerges over time as complexity advances.
Then, where is the problem in accepting that the brain is a machine, and another complex machine could be conscious?

Conclusion: where is the argument against a thinking machine, or machines with self-awareness? I am unable to find it.
And, sadly, this is the common trend in these discussions: an argument from ignorance. "I don't know, so what you say is impossible".
 
Look, I'm not saying it's impossible, I'm saying two things:
-first our understanding of the brain is still too rudimentary to reverse-engineer it completely;
-computers, as we know them today (now, I'm not talking speed and memory size, which can increase a whole lot and still not make a difference; I'm talking more basic architecture, building blocks), are likely not going to be good enough to emulate a brain.

Now you are going to say that my second position is purely hypothetical and I have no proof that computers are insufficient, but the other position is just as hypothetical, and has no evidence for it other than wishful thinking. Call me skeptical, but until someone comes up with the software for sufficient human brain emulation, I'm going to stick to the position that thinking machines are just as possible as cold fusion.
 
Jorghnassen said:
I'm going to stick to the position that thinking machines are just as possible as cold fusion.

I have no problem with either intelligent or thinking machines, so long as people are not suggesting that they would be conscious. How can the execution of an algorithm lead to consciousness?
 
Jorghnassen, I am not trying to create an argument from authority, but I have the feeling you are uninformed about neuroscience.
first our understanding of the brain is still too rudimentary to reverse-engineer it completely
That's true and I think undisputed.
computers, as we know them today (now, I'm not talking speed and memory size, which can increase a whole lot and still not make a difference; I'm talking more basic architecture, building blocks), are likely not going to be good enough to emulate a brain.
The problem here is that you are denying the validity of current neuroscience; I mean that you are being over-skeptical.
Neuroscience has a basic model of the brain (a functional, mathematical model of the neuron as its base). This model is known to be incomplete, as some of the "slow" properties of the neuron have an unknown behaviour; for example, the way in which neurones (slowly) wire among themselves. This process seems computationally inexpensive, but it's a necessary property of the brain.
Scientists can use the current model to imitate most of the expensive, complex processes known in the brain, like the recognition of sounds for example. However, our artificial neurons still lack the plasticity of the real ones.
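For the unfamiliar, the basic functional model being referred to is very simple; here is a minimal sketch of a single artificial neuron (the weights and bias are arbitrary example values):

[code]
import math

def neuron(inputs, weights, bias):
    """The basic functional model of a neuron: a weighted sum of its
    inputs pushed through a nonlinearity (here a sigmoid 'firing rate')."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Arbitrary example: two inputs, fixed weights.
print(neuron([0.5, 0.9], weights=[1.2, -0.7], bias=0.1))
[/code]

Networks of such units, wired together with adjustable weights, are what the emulations mentioned below are built from.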

Now, you claim that modern computers are not ready to emulate a brain, and neuroscience claims the reverse; and they have the numbers. They can't produce a working brain because they lack the general map of the brain and details like this neuronal rewiring; however, the computational cost IS reasonably estimated, and it is within the reach of current hardware!

In this forum we have discussed several times the advances in artificial neural nets. Take a look at one of the most powerful examples of neuroscience, the computerised emulation of a well-known section of the brain:
http://www.newscientist.com/article.ns?id=dn3488
 
Ian,
I also don't have a problem with non-interactive dualism, where consciousness sits there "feeling" passively while the machine works its way through the world. However, I don't see any logical reason to support that idea.
And I certainly have a problem with the other two popular philosophical options, classic dualism and idealism, because both require consciousness to modify the brain's processes to accommodate their concept of free will.
The concept of the brain being manipulated to the degree at which it can choose to pick a cookie, for example, seems huge to me.
Maybe that seems a simple decision; however, my view as a programmer is that the quantity of elements involved is enormous. This kind of sophisticated soul/body interaction seems extremely implausible from the neuroscientist's chair.
 
