barehl
There are show-stopping flaws in the chain of logic you've presented here
No, I would like to see this first. You might be right and this might falsify my entire theory.
There are show-stopping flaws in the chain of logic you've presented here
Thank you; I'm always open to a correction and many people on this forum have made corrections to my posts in the past.
I'm correcting the following error because this is the subforum devoted to Science, Mathematics, Medicine, and Technology.
Questions are finite strings of symbols taken from a fixed finite alphabet (which happens to be Chinese in your example, but that doesn't matter). There is only a countable infinity of finite strings of symbols over a fixed finite alphabet.
I suspect this is just another example of your habit of padding your posts with extraneous details, and getting those details wrong. If I'm wrong about that, please explain how the distinction between countably and uncountably infinite cardinalities affects your argument.
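As an aside, the countability claim in that theorem can be made concrete: the finite strings over a finite alphabet can be listed exhaustively in shortlex order (shortest first, dictionary order within each length), so every string gets a finite index. A minimal sketch in Python; the two-letter alphabet is just a stand-in for any finite symbol set, Chinese characters included.
Code:
from itertools import count, product

def shortlex(alphabet):
    # Enumerate every finite string over `alphabet`: length 0, then 1, then 2, ...
    # with dictionary order inside each length. Every string appears exactly once
    # at some finite position, which is all that "countable" requires.
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

strings = shortlex("ab")  # any finite alphabet works the same way
for index, s in zip(range(8), strings):
    print(index, repr(s))  # 0 '', 1 'a', 2 'b', 3 'aa', 4 'ab', 5 'ba', 6 'bb', 7 'aaa'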
So, your way of showing that I'm wrong is to start with different premises, make a different argument, and thereby come to a different conclusion? That one did make me laugh. If your theory of philosophy were true, then only one argument could exist at a time. The actual way to show that I'm wrong is either to show that one of my premises is wrong or to show a logical flaw in my argument.
A way to see that the assertions in the OP are wrong is to forget about Chinese. We can use a language that consists of only 10 questions, each with a symbol, e.g. A - J, and an answer, e.g. 1 - 10. The facilitator has a book mapping the 10 symbols to answers. We get the same results - a person and a computer will do the same mapping without understanding the language, and neither would be considered to be thinking.
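The ten-question language just described is easy to make concrete. Below is a minimal sketch of the facilitator's "book" as a lookup table, with the answers simply numbered 1-10 as in the post; neither a person nor a program doing this lookup needs to understand what the symbols mean.
Code:
# Hypothetical answer book for a language of exactly ten questions, A through J.
ANSWER_BOOK = {
    "A": "1", "B": "2", "C": "3", "D": "4", "E": "5",
    "F": "6", "G": "7", "H": "8", "I": "9", "J": "10",
}

def facilitator(question_symbol):
    # Blind lookup: return the scripted answer, or admit the book has nothing.
    return ANSWER_BOOK.get(question_symbol, "no answer in the book")

print(facilitator("C"))  # -> 3
print(facilitator("K"))  # -> no answer in the book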
I know this is fairly standard, but on what grounds do we say it isn't thinking?
Finally, this seems like a useless exercise anyway, a quibble, really. If the machine acts for all practical purposes like it's a "thinking" machine, who cares if it "really" thinks or not?
You again are missing the point. The Room I've described so far is not even theoretically capable of doing the same function. If you can describe what it would take to make the room functional, you are welcome to do so.
Apologies to Searle, but the idea of the Chinese Room as an argument against machine intelligence is stupid: the fact that the parts don't understand what they are doing doesn't mean that the system as a whole doesn't understand.
If you want us to try to falsify your theory, it helps to say what it actually is.
No, I would like to see this first. You might be right and this might falsify my entire theory.
Finally, a halfway decent response.
Why can't the system based on pattern matching, logic, and set theory answer why a helium balloon will rise if you let go of the string?
Yes, we could do that with a set.
Helium near standard temperatures and pressures is less dense than air.
Yes, a helium balloon as a whole weighs less than the volume of air it displaces.
The volume of a helium balloon is mostly filled with helium.
This is where you are running into a problem. I can't see how we could describe viscosity, turbulence, and kinetic energy in terms of set theory.
Air is a fluid medium.
We could probably describe this.
Positive buoyancy in a fluid medium occurs when an object displaces more mass of fluid than its own mass.
I'm not seeing any way to describe force in terms of set theory.
Positively buoyant objects in a fluid medium experience an upward force.
Again, I don't think we can describe forces, acceleration, or movement with set theory.
A tether is a tensile member providing a downward force to counteract an upward force.
Most small helium balloons are tethered by a string that is held.
Objects accelerate in the direction of the net forces acting on them.
etc.
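To make the disagreement above a bit more concrete, here is one possible, deliberately crude way to encode a few of those premises as sets plus a simple chaining rule (the relation names are my own, not anything proposed in the thread). It answers the balloon question by lookup and chaining; the objections raised above about forces, viscosity, and acceleration are precisely the parts it does not model.
Code:
# Crude set/relation encoding of a few of the premises above (illustrative only).
LESS_DENSE_THAN_AIR = {"helium", "hydrogen", "hot air"}
FLUID_MEDIA = {"air", "water"}
MOSTLY_FILLED_WITH = {"helium balloon": "helium"}

def why_does_it_rise(obj, medium="air"):
    # Chain the facts: mostly filled with something less dense than the medium,
    # and the medium is a fluid -> positively buoyant -> upward force -> rises.
    filling = MOSTLY_FILLED_WITH.get(obj)
    if medium in FLUID_MEDIA and filling in LESS_DENSE_THAN_AIR:
        return (obj + " is mostly " + filling + ", which is less dense than " + medium
                + ", so it is positively buoyant and experiences an upward force")
    return "these sets have nothing to say about " + obj

print(why_does_it_rise("helium balloon"))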
They don't have to be technical. You could ask what you would do to stay warm if it was cold outside. Someone who is an educator would probably recognize these types of questions right away. For example, I could ask how many plates you would need to set on the table if there are four of you eating dinner. This is easily answered by set theory. I could say that you have iced tea, lemonade, and cherry soda. You get some cherry soda but your guest doesn't like cherry soda. What do you do? This again can be answered by set theory by excluding the drink that they don't like. Many questions like this can be answered but not all of them. Some questions that can be answered by children are beyond set theory. This generally involves anything with an internal process or anything that requires modeling. For example, we had a pencil sharpener that would get full of shavings and you would have to dump them out in the trash can. Children can easily understand this but I don't think it can be described with set theory. These examples seem to indicate where we would have to go next with our Room.
Must a "Chinese Room" demonstrate omniscience in order to be evaluated as capable of understanding or of intelligence?
I don't think it was meant to. The Chinese Room is designed to tackle one question: "does it understand Chinese?" That answer has to be "yes," at least from a black box perspective. If the room doesn't convincingly appear to understand Chinese, then the gotcha when the curtain is whisked away to reveal some shmuck with a lookup table won't have any effect.
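For what it's worth, the dinner-table examples a couple of posts up really are one-liners in set terms. A minimal sketch; the names and the menu are made up.
Code:
# Four people eating dinner -> four plates (the cardinality of the set of diners).
diners = {"you", "guest", "parent", "sibling"}
plates_needed = len(diners)

# Drinks on offer minus whatever the guest dislikes (set difference).
drinks = {"iced tea", "lemonade", "cherry soda"}
guest_dislikes = {"cherry soda"}
offer_to_guest = drinks - guest_dislikes

print(plates_needed)   # -> 4
print(offer_to_guest)  # -> {'iced tea', 'lemonade'} (order may vary)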
If you want us to try to falsify your theory, it helps to say what it actually is.
I... don't think you read my post right.
Under the weak standard, the thought experiment breaks down. I don't even need a room. I can put my written question into an envelope, seal it, then open the envelope again to reveal - Hey Presto! - correctly written Chinese. Is the understanding then to be found in the pen, or the paper, or the writer/reader?
Yeah, I'll drink to that. It's like wrestling a falsifiable test out of an MDC applicant.
We weren't talking about my theory. This thread to me is remedial, covering the low-level basis of cognitive theory. This is below what I would consider foundational. You claimed to find a show-stopping error in my thought experiment about a version of Searle's Chinese Room. Do I think you've actually found an error? No.
I'm still trying to decide how low level I have to go to explain the concepts of my theory, but this thread has not exactly been encouraging. I've seen repeated evidence in the various threads that cognitive theory is much more difficult for people to understand than I had expected.
Now, whether that argument is sound or not, I'm trying not to weigh in on, since I want to avoid being sidetracked from hearing what barehl has to say. But that is the argument. It's not a Turing Test. The Room doesn't have to convince someone that it is strongly intelligent, merely that it understands Chinese.
No, barehl: My way of showing that the assertions in the OP are wrong is to
So, your way of showing that I'm wrong is to start with different premises, make a different argument, and thereby come to a different conclusion?
Anything else is beside the point. The Chinese Room is not meant to be a model for a full AI,
(snipped just a bit out)
Real World Fact 2.) It is possible to create a working "Chinese" room based on pattern matching, set theory, and logic.
The extension to a language such as Chinese also invalidates your assertion - the number of questions that can be asked in any language is finite.
That can't be right.
"What number comes after 1?"
"What number comes after 2?"
"What number comes after 3?"
...
Wouldn't that be an infinite set of questions?
You are right. But the Turing test has a finite number of questions. And a pattern-matching program would answer most or even all of those "What number comes after n" questions by recognizing the pattern.
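A minimal sketch of the kind of pattern recognition being described: one regular expression covers the entire "What number comes after n?" family, however many of those questions the judge asks (assuming the number is written in digits).
Code:
import re

SUCCESSOR_PATTERN = re.compile(r"What number comes after (\d+)\?")

def reply(question):
    # Recognize the pattern and compute the successor; no understanding required.
    match = SUCCESSOR_PATTERN.fullmatch(question.strip())
    if match:
        return str(int(match.group(1)) + 1)
    return "I don't recognize that question."

print(reply("What number comes after 3?"))   # -> 4
print(reply("What number comes after 41?"))  # -> 42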
ELIZA would not - the assumption of the Chinese room is that the AI can pass the Turing test. The best result so far looks like convincing 33% of the judges that it was human. Turing's original criterion for a pass was "70% of the time after five minutes of conversation".
Is ELIZA a good example of a "Chinese room"? In that, it can fool some people for a while.
Agreed.
This thread to me is remedial, covering the low-level basis of cognitive theory. This is below what I would consider foundational.
I was happy to have pleased you by correcting your error.
Thank you; I'm always open to a correction and many people on this forum have made corrections to my posts in the past.
I'm correcting the following error because this is the subforum devoted to Science, Mathematics, Medicine, and Technology.
You are quite wrong. The spoiler contains a straightforward proof of the following theorem.
Questions are finite strings of symbols taken from a fixed finite alphabet (which happens to be Chinese in your example, but that doesn't matter). There is only a countable infinity of finite strings of symbols over a fixed finite alphabet.
I suspect this is just another example of your habit of padding your posts with extraneous details, and getting those details wrong. If I'm wrong about that, please explain how the distinction between countably and uncountably infinite cardinalities affects your argument.
You do cling to those ad hominems like a child hugging a teddy bear in the dark. But, you have an obvious flaw in your reasoning. A question is of arbitrary length. It could be ten characters or one thousand or one million or one septillion. There is no defined stopping point for a question. But, in terms of pattern matching we can illustrate this fairly simply.
Instances of dogs are countable in the same way that integers are countable. So, while at any given point in time we can only deal with a finite set of dogs in the same way that we can only deal with a finite set of integers, there is no stopping point. We can always have more or larger integers and we could always have more or larger instances of dogs. Admittedly, you will eventually run out of places to put these dogs but you would also run out of storage to write down integers.
But, for every integer there are an infinite number of reals, such as the reals between zero and one. And, for any instance of dog there are an infinite number of questions. This is why they are both uncountable.
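For readers keeping score on the cardinality point: giving each of countably many dogs countably many questions still yields only countably many dog-question pairs, since a countable union of countable sets is countable. Cantor's pairing function makes this explicit; a minimal sketch, with the dog and question indices purely illustrative.
Code:
def cantor_pair(k1, k2):
    # Bijection from pairs of naturals to naturals: every (dog index, question index)
    # pair gets its own unique natural number, so the pairs remain countable.
    return (k1 + k2) * (k1 + k2 + 1) // 2 + k2

# No two of these pairs share a code.
print({(d, q): cantor_pair(d, q) for d in range(3) for q in range(3)})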
Out with your theory, and we shall remedialize it, and bang rocks together, and drool upon it, and gnaw upon its corners, and gradually develop the proto-fundamental understanding you seem to think we lack. At which point your words will still be here, and then we can necro this thread to partake once more, this time treating you like the genius I am certain you will demonstrate yourself to be.