I've put this in this forum because it is largely concerned with an arbitrary (but, I believe, defensible) definition of "intelligence."
I've been looking for a reasonable definition of intelligence that does not depend on demonstration of the ability to communicate with humans. This is what I've arrived at:
Intelligence (operational definition) as a characteristic of a process: the demonstration of the process's ability to act in favor of self-preservation by counteracting, with novel responses, novel external threats to the continuation of the process.
The crux of the definition is the idea of effective novel responses to novel external signals from (that is, changes in) the process's environment. The emphasis on self-preservation and "threats" is to distinguish "effective" novel behavior from, say, random behavior. In order to speak of the effectiveness of a reaction to an external event, we must posit some goal on the part of the system doing the reacting. This presents difficulties. We cannot say for certain that anything (except our individual selves) experiences goals as conscious intentions or desires, nor that such experience is causally significant even when it occurs. Nor does the observation of behavior consistently producing a certain effect (such as orbiting around another body in space) imply that it's reasonable to describe the effect produced as a goal.
Self-preservation, though, is at least a likely goal, at least some of the time, of any process or being we would describe as intelligent. If we attempt to imagine a system that's intelligent (by traditional meanings of the word) but exhibits no self-preserving behavior, we seem to run into contradictions. For instance, we can imagine taking a supposedly intelligent being, such as a person, and putting it into an unfamiliar environment containing mortal hazards (let's say, pits). For various reasons, the being might move in a straight line until it falls into a pit, or blunder around at random until it falls into a pit. Perhaps it panics, or it is unable to sense the pits, or it is unable to understand that the pits are hazardous. That does not prove the being is not intelligent, but it does mean that it has not demonstrated its intelligence in this particular case. A being that never exhibited self-preserving behavior could never demonstrate its intelligence in any such test, and that would call into serious question the notion that it was intelligent in the first place.
Therefore, while self-preservation might not be appropriate for any fully satisfying philosophical definition of intelligence, it might be the best means available to distinguish intelligence for an operational definition.
Demonstrating self-preservation doesn't require that intelligent agents always be observed to act in favor of self-preservation, only that they demonstrate the capability of doing so. Nor does it require the attribution of conscious desire or intention. In other words, when we see, among a system's novel responses to novel external signals, an overall trend of responses favoring the system's self-preservation, that is sufficient to surmise "intelligence" by this definition.
Now consider evolution as a candidate. Evolution:
- Generates novel responses to novel signals from the earth's environment, in the form of new genomes and creatures.
- Responds to novel situations in a manner that promotes the continuation of the process, by diversifying and disseminating the living populations in which the process occurs.
But on a nuts and bolts level, is it reasonable to consider evolution as a potentially intelligent system? Does evolution have the types of "components" we might expect to see in a system capable of intelligence? The answer, I propose, is a definite yes.
The human nervous system is our only definitive example of an intelligent system, but it's generally accepted that computer systems are theoretically capable of behavior that is intelligent by common definitions. Computing theory then tells us that a great number of other systems, whose behaviors embody certain sets of rules all proven computationally equivalent to computers, must share this same theoretical capability.
What do all these systems have in common?
1. Memory: the ability to preserve symbols or patterns for periods of time, and the ability of that stored information to influence the system's subsequent behavior.
2. The capability of propagating stable patterns from one part of the system to another.
3. Nonlinear amplification, by which the effect of a component on the system (the component's outputs) can exhibit a threshold response in relation to the effects of other parts of the system on that component (the component's inputs).
In evolution, #1 is accomplished by genomes; #2 is accomplished by protein machinery at all levels including whole organisms; and #3 is accomplished by selection, the threshold output being whether an organism succeeds in reproducing.
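To make that mapping concrete, here is a minimal toy sketch of an evolutionary loop exhibiting all three components. It's an illustration under invented assumptions, not a model of real biology: the bit-string genome, the single-bit mutation rule, the fixed "environment" fitness function, and the median-based selection threshold are all choices made for the example.

```python
# Toy evolutionary loop illustrating the three components from the text.
# All specifics (GENOME_LEN, fitness, mutation rule, threshold) are
# illustrative assumptions, not claims about real evolution.
import random

random.seed(0)

GENOME_LEN = 8
TARGET = [1] * GENOME_LEN  # a stand-in "environment" the population adapts to
POP_SIZE = 20

def fitness(genome):
    # How well a genome matches the current environment.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Component 2, propagation: copying a stable pattern to a new
    # carrier, with occasional variation (here, one flipped bit).
    child = list(genome)
    i = random.randrange(GENOME_LEN)
    child[i] = 1 - child[i]
    return child

def select(population):
    # Component 3, nonlinear amplification: a threshold on fitness
    # decides all-or-nothing whether an organism reproduces.
    threshold = sorted(fitness(g) for g in population)[len(population) // 2]
    return [g for g in population if fitness(g) >= threshold]

# Component 1, memory: the population of genomes carries information
# forward across generations.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(50):
    survivors = select(population)
    population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]

best = max(fitness(g) for g in population)
print("best fitness after 50 generations:", best)
```

Even this crude loop tends to drive the population toward the environment's "target" despite no component of it having any representation of a goal, which is the sense in which selection plays the thresholding role that, say, neuron firing plays in a nervous system.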
Skeptical criticism, please.
Respectfully,
Myriad