jmercer said:
I figured I'd address this single post since one of the few things I can claim to be is a "computer expert". And please don't take this as mocking - it's not. It's a technical explanation.
While a computer can recall information and/or data, it cannot interpret or assign significance to that data. So citing a computer's ability to retrieve information, data - even images - as "thinking" is clearly false.
[...]
So, while the process of recall in a computer resembles thinking in human beings, that's only because human beings crafted computers to provide the desired results. It's not because the computer itself "decides" what value to assign to any particular piece of data.
While true, there is a meta-issue that you're glossing over, which is the ability of computers to self-program; to decide "for themselves" what value to assign to a particular datum in accordance with a broader meta-program written by a human. This kind of self-modification is a key aspect of the various disciplines of artificial intelligence, machine learning, data mining, pattern recognition, and so forth.
A classic example of this is in the way that connectionist networks work -- the system is presented with a set of (in some cases unlabelled) data, and modifies itself, usually by adjusting a set of numeric "weights," in order to produce a "better" fit to the data. The results of such a calculation are typically very good, but they are specifically not the result of a human being hand-crafting computers (more accurately, programs) to provide the results. Instead, they come from humans hand-crafting programs to machine-craft programs to produce the results. There's also no reason to stop the meta-issues here; Nakisa, for example, has had quite good results with "genetic connectionism," where the network design itself was the output of yet another optimization program -- in other words, a human hand-crafted a program to craft a program to craft a program to get results. And there's still no reason to stop: lather, rinse, repeat.
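To make the weight-adjustment idea concrete, here's a minimal sketch (my own toy example, not Nakisa's system): a single perceptron that learns the logical AND function. The human writes only the learning rule; the final weights -- the part of the program that actually determines its behavior -- are set by the program itself.

```python
import random

def train_perceptron(data, epochs=100, lr=0.1, seed=0):
    """Train a single perceptron by the classic error-correction rule."""
    rng = random.Random(seed)
    # Start from random weights: the human did not choose these values.
    w = [rng.uniform(-1, 1) for _ in range(2)]
    b = rng.uniform(-1, 1)
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # The "self-modification": the program adjusts its own
            # weights in response to the data it is shown.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The learned weights now implement AND, though no human wrote them down.
assert all(predict(*x) == t for x, t in AND)
```

The point isn't the triviality of AND; it's that the human specified a meta-rule ("nudge the weights toward lower error"), and the values that make the program work were machine-crafted.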
There are, however, other examples -- one of my favorites is a computer program designed to look for interesting graph theory problems. Basically, it generates random conjectures of the form "[Graph property 1] >= [Graph property 2]," and generates graphs at random until it finds a counterexample. If no counterexample is found in a zillion random trials, it puts the conjecture into an automated theorem prover and tries to prove it. Obviously, a human had to design and write this program -- but that human may never have "thought" about the particular new theorem that the system finds.
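The conjecture-hunting loop can be sketched in a few lines. This is a hedged toy version of the idea, not the actual program: it proposes an inequality between two graph properties and hammers it with random graphs, keeping only conjectures that survive (the real system would then hand survivors to a theorem prover).

```python
import random

def random_graph(n, p, rng):
    """Erdos-Renyi-style random graph, returned as (n, edge set)."""
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                edges.add((i, j))
    return n, edges

def max_degree(n, edges):
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return max(deg) if n else 0

PROPERTIES = {
    "vertices": lambda n, edges: n,
    "edges": lambda n, edges: len(edges),
    "max_degree": max_degree,
}

def survives(prop_a, prop_b, trials=1000, seed=0):
    """Does the conjecture 'prop_a(G) >= prop_b(G)' survive random testing?"""
    rng = random.Random(seed)
    fa, fb = PROPERTIES[prop_a], PROPERTIES[prop_b]
    for _ in range(trials):
        g = random_graph(rng.randint(1, 8), rng.random(), rng)
        if fa(*g) < fb(*g):
            return False  # counterexample found: discard the conjecture
    return True  # candidate to send to the theorem prover

# "vertices >= max_degree" is a genuine theorem (degree <= n - 1), so it
# survives; "max_degree >= edges" dies quickly (e.g. on a triangle).
assert survives("vertices", "max_degree")
assert not survives("max_degree", "edges")
```

Note the division of labor: the human wrote the search loop and the property list, but which particular inequalities survive -- the candidate "theorems" -- is something the human never had to think about in advance.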
The real trouble, though, is that this meta-problem mimics a lot of what we know about human intelligence as well. Humans get most of their information by observing the world, but they are also well-fitted (by design or by evolution) to observe it and to collect that information. How is this different from a neural network that is designed to observe a problem space, to collect information about it, and then to respond?