Originally posted by TillEulenspiegel:
The rest of the quote seems a little dated tho, as there are many expert systems and autonomous agents in use.
Yes, that occurred to me as well (assuming you were referring to the rest of what I quoted). Upon reflection, though, I think it still applies.
While genetic algorithms (for example) can be a very efficient means of optimization, they still operate only within very narrow constraints. Such a program might become very good at (say) face recognition -- much better than anything a human would come up with in a reasonable length of time by hand-coding -- but it isn't going to spontaneously decide to start analyzing stock market trends, even if it has access to the pertinent data.
One reason for that has to do with granularity: you might have a large number of ways to approach a problem, and each approach can be further broken down into chunks. These chunks can be traded around randomly between members of a population to explore a huge space of combinations, but you reach a point where you can't permit flexibility in the details that make up a chunk without risking reducing the whole assembly to junk that doesn't do anything.
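To make the granularity point concrete, here's a toy sketch (Python; the task, the chunk size, and the fitness measure are all stand-ins I invented for illustration, not anyone's real system): crossover trades whole chunks between members of the population, mutation tinkers with the bits inside them, and the definition of success is fixed in advance by whoever wrote the program.

```python
import random

# Toy setup: each individual is a bit string built from fixed-size "chunks".
# The task (match TARGET) and the fitness function are hard-coded by the
# programmer -- the program can get very good at this one thing and nothing else.
CHUNK = 8
TARGET = [random.randint(0, 1) for _ in range(8 * CHUNK)]

def fitness(ind):
    # Explicit, designer-defined measure of success: bits matching the target.
    return sum(a == b for a, b in zip(ind, TARGET))

def crossover(a, b):
    # Trade whole chunks between two parents -- coarse-grained recombination.
    child = []
    for i in range(0, len(a), CHUNK):
        donor = a if random.random() < 0.5 else b
        child.extend(donor[i:i + CHUNK])
    return child

def mutate(ind, rate=0.01):
    # Fine-grained tinkering inside chunks; too high a rate wrecks the assembly.
    return [bit ^ 1 if random.random() < rate else bit for bit in ind]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # explicit, designer-chosen selection
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]
print(fitness(max(population, key=fitness)), "of", len(TARGET), "bits matched")
```

Nothing inside that loop can redefine fitness(); however good it gets at matching TARGET, it has no path to deciding that some other problem is worth solving.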
This, in turn, has a lot to do with the rigidity of the machinery on which the whole process takes place; below some minimum level of adherence to form, there is a risk that the machine will crash, bringing the entire process to an abrupt halt. The process of evolution that produced (that property which must not be named) in the human brain is not so constrained; even the most malformed mutant will not cause the whole environment to lock up.
Even if this problem is solved, the most flexible learning program will still not venture outside the bounds defined for it by its designer, because it would have no reason to. In fact, to do so would almost certainly violate the parameters defined for it -- part of that design being a method of evaluating the program's success in terms explicitly defined by the programmer. Attempting to overcome this by making a reward for violating the parameters itself one of the acceptable parameters amounts to nothing more than an expansion of the same rigid, explicitly defined bounds.
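That last point shows up plainly in the toy sketch above: a hypothetical "novelty bonus" meant to reward stepping outside the task is just one more explicitly defined term in the same objective (the distance measure and the weights here are, again, my own inventions, and this reuses fitness() from the earlier sketch).

```python
def novelty(ind, archive):
    # Programmer-chosen proxy for "doing something different": distance to the
    # most similar individual already seen (0 if nothing has been seen yet).
    if not archive:
        return 0
    return min(sum(a != b for a, b in zip(ind, other)) for other in archive)

def combined_fitness(ind, archive, w_task=1.0, w_novel=0.3):
    # The reward for "violating the parameters" is itself a parameter; it widens
    # the explicitly defined bounds, it doesn't remove them.
    return w_task * fitness(ind) + w_novel * novelty(ind, archive)
```

Swap in any other bonus you like; the designer still decides what counts as novel and how much it is worth.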
It seems to me that truly flexible, creative, self-modifying programs would require full access to design space, and an implicit rather than explicit selection mechanism -- the problems they face should somehow be natural consequences of their interaction with their environment, rather than contrived constraints we try to import from the very different world in which we live.
The undefinable property of the human brain that we refer to by the dubious term 'intelligence' (or at least some of its components) lies along a design path taken by only one species among all those on earth. Even if such an artificial whatever had full access to a similarly rich design space, there is no particular reason to expect it to exhibit a preference for a path that would lead to a corresponding unnameable artificial property.