Sorry, I hadn't realized this discussion had re-opened.
JAK said:
As I recall, neurons, in general, abide by the "all-or-none" law. Either they fire or they don't. However, "frequency of axonic transmission" is a variable.
I contend that this design is sufficient for a basis of all brain processes - including "logic and emotion."
Well, the first statement, in general, is a tautology. It's certainly the case that a neuron is either firing or it isn't, and it's similarly true that a car is either running or it isn't, that a bag is either empty or it isn't, et cetera. The key question is whether or not two states that can be broadly categorized as "firing" can be further subdistinguished.
And, of course, they can, as you yourself point out; two neurons can both be "firing" but at different pulse frequencies. The McCulloch-Pitts (Mc-P) neural model explicitly ignores this factor and treats all "firing" neurons as mathematically and computationally identical. If you assume that neurons can fire at different rates and that these different rates can be distinguished, you've more or less re-invented the McClelland/Rumelhart PDP neuron. But the McClelland/Rumelhart neuron, in turn, assumes that the only subcategorization that exists is frequency.
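To make the contrast concrete, here's a minimal sketch of the two abstractions; the function names and weights are mine, chosen purely for illustration. The Mc-P unit outputs only 0 or 1, so it conflates any two input patterns that both cross threshold, while a PDP-style rate-coded unit distinguishes them:

```python
import math

def mcculloch_pitts(inputs, weights, threshold):
    """Mc-P unit: output is binary, 1 ('firing') or 0 ('not firing')."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def pdp_neuron(inputs, weights, bias=0.0):
    """PDP-style unit: output is a real number, loosely a firing rate."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # logistic squashing function

# Two input patterns the binary unit cannot tell apart...
a = mcculloch_pitts([1, 1], [0.6, 0.6], threshold=0.5)  # fires -> 1
b = mcculloch_pitts([1, 0], [0.6, 0.6], threshold=0.5)  # fires -> 1
# ...but the rate-coded unit assigns them different activations:
ra = pdp_neuron([1, 1], [0.6, 0.6])
rb = pdp_neuron([1, 0], [0.6, 0.6])
```

Both models, of course, still collapse everything about a spike train into at most one number per unit, which is exactly the limitation at issue below.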
This is an oversimplification. In general, neural pulses are all of the same amplitude, but they can vary not only in frequency, but also in phase and pulse shape. Ask any signal engineer about the amount of information that can be carried in the "phase" of a signal, and you'll see how crucial this oversimplification may be.
So the problem is that the "activation" of a real neuron cannot be described by a single real-valued number (such as the firing rate); using a real-valued activation function is a computational simplification. I could get a more accurate model by assuming a complex-valued (frequency and phase) activation function, but this would be computationally more expensive, and would still leave out all the information carried in pulse shape. If I were much smarter than I am, I might be able to come up with a numeric encoding of pulse shape (perhaps in terms of Fourier coefficients), but the resulting neural model would be a nightmare to evaluate. And even then, I'd be focusing only on the electrical aspects of signal transmission and not on the chemical... you see how the creation of a perfect model quickly turns into a hole with no bottom.
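A complex-valued activation of the kind described above might look like the following sketch; the encoding (magnitude = rate, argument = phase) is my own illustration, not a standard neuroscience model. The point is just that two spike trains with identical firing rates but different phases, which any purely real-valued activation would conflate, are distinguishable here:

```python
import cmath

def complex_activation(rate, phase):
    """Encode a spike train's mean firing rate and phase as one complex
    number: magnitude carries the rate, argument carries the phase.
    (Illustrative encoding only.)"""
    return rate * cmath.exp(1j * phase)

# Two trains at the same 40 Hz rate, a quarter-cycle out of phase:
z1 = complex_activation(40.0, 0.0)
z2 = complex_activation(40.0, cmath.pi / 2)
same_rate = abs(abs(z1) - abs(z2)) < 1e-9             # magnitudes match
distinct = abs(cmath.phase(z1) - cmath.phase(z2)) > 1.0  # phases differ
```

Adding pulse shape would mean replacing the single complex number with, say, a vector of Fourier coefficients per unit, which is where the evaluation cost starts to explode.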
However, only by using this more detailed model would we be able to construct artificial neural models that are able to mimic the full behavior of real neurons.
But what does this have to do with logic vs. emotion? If a human modeller develops a mathematical abstraction, and then shows that this abstraction is in theory capable of performing a particular behavior, this says little or nothing about how such behavior is actually realized in the human brain. I can, in theory, build a computer running Windows XP out of water pipes and valves, or out of the abstract "cells" of John Conway's "Game of Life." The real computer on someone's desk, however, is made out of doped silicon. The fundamental reasoning is flawed:
1) The human brain does X
2) A model based on Y does X
3) A model based on Y does Z
Therefore,
4) The human brain is based on Y
5) The human brain does Z
Neither conclusion can be supported within this argument. Frankly, I don't think it matters if your framework "is sufficient for a basis of all brain processes." What matters is if your framework is an accurate description of the basis that the brain uses, something that only the neurologists can tell us.