
Logic vs emotion

BillHoyt said:
Nonsense. The key here is the one you keep PoMo dancing around. Postmodernism might imply deconstruction, but deconstruction does not imply postmodernism. You committed the fallacy of affirming the consequent.

How many more fallacies are you going to treat us to?

Bill, how many nominal fallacies of crying fallacy where there are none, are you going to continue to commit? Your claim of fallacy has been soundly defeated. Your further whining only goes to show you don't know a fallacy from a hole in the floor.
 
Suggestologist said:
Bill, how many nominal fallacies of crying fallacy where there are none, are you going to continue to commit? Your claim of fallacy has been soundly defeated. Your further whining only goes to show you don't know a fallacy from a hole in the floor.
I specified the fallacy. It is up to you to demonstrate how the argument was not fallacious.

deconstruction is categorized as postmodernistic; that doesn't preclude it falling under other categories as well.
If I write something that strikes you as "deconstructionist," that does not mean it is. Furthermore, you cannot jump from thinking you see something "deconstructionist" to concluding it is postmodern. Work out the syllogism on your own to see it is the fallacy of affirming the consequent.
 
BillHoyt said:
I specified the fallacy. It is up to you to demonstrate how the argument was not fallacious.


If I write something that strikes you as "deconstructionist," that does not mean it is. Furthermore, you cannot jump from thinking you see something "deconstructionist" to concluding it is postmodern. Work out the syllogism on your own to see it is the fallacy of affirming the consequent.

I've already delineated why you're mistaken.
 
drkitten said:
No, the situation is described by the logical syllogism that you have presented, but there's no evidence at all to support the idea that that's how the child actually reasons.

Nor does claiming that logic is important imply being good at it.
 
BillHoyt said:
"The most logical" was your insertion, not mine. I made no such claim.

These were your words: "No, you did the most logical thing: you went to your fridge." It's too late for you to edit them.

I cannot think of a way of responding meaningfully to this assertion, but it seems to me that the thing I would describe if I could makes further discussion impossible. In any event, I am impressed with your certitude. Have a nice day.
 
epepke said:
These were your words: "No, you did the most logical thing: you went to your fridge." It's too late for you to edit them.

I cannot think of a way of responding meaningfully to this assertion, but it seems to me that the thing I would describe if I could makes further discussion impossible. In any event, I am impressed with your certitude. Have a nice day.

I have neither reason nor intent to edit my words. I wrote them in the context of that particular decision. You elevated that to the universal with regard to truth tables.

I have used that phrase twice so far. The one you just alluded to, in full context:
You wanted milk. So, you did what? Take a flight to a Wisconsin dairy? Ran out to the car dealership to buy a car? Go to a dairy cattle auction and buy a heifer? No, you did the most logical thing: you went to your fridge.
In the context of this example, the fridge is the most logical thing.

I also used it here:

No, you learned, over time, what seemed the most logical way to get your needs met.
I would have thought the qualification makes it clear that I am not asserting that we always arrive at the most logical conclusions.
 
People are inherently logical in the same sense that our brains are inherently arithmetical.

Each and every neuron is a tiny computer dedicated to arithmatic operations, and combined they form an immensely powerful computational system capable of performing intricate and subtle operations.

But if you provide a person with a list of two-digit numbers, he'll probably have great difficulty adding them up in his head.
 
Wrath of the Swarm said:

Each and every neuron is a tiny computer dedicated to arithmatic operations, and combined they form an immensely powerful computational system capable of performing intricate and subtle operations.

Um,... no.

I suggest presenting this statement in the neurology department of your local hospital. Preferably right after the chief of neurology has just taken a large swallow of very hot coffee. The results should be amusing.

We've certainly got models of neurons (the McCulloch-Pitts 1943 model is one of the best-known and most influential) that use numbers to represent the behavior of real neurons. But these are models, and oversimplified ones at that. In one of the best books on artificial neural networks, Hertz et al. (1991) identified many, many problems with real neurons that are not incorporated into this "tiny computer" metaphor. From p. 4 (so it's not exactly a subtle issue hidden in a footnote):

  • Real neurons are often not even approximately threshold devices as described by McCulloch-Pitts
  • Real neurons perform non-linear and non-arithmetic functions on their inputs
  • A real neuron produces a sequence of pulses, not a numeric output value; interpreting the firing sequence as an arithmetic quantity is problematic
  • Neurons do not have the synchrony usually associated with "tiny computers"
  • The performance of a neuron can vary wildly and stochastically based on the chemical environment in which it finds itself

I could add to this list that not only are the outputs from neurons non-numeric (the firing pulses often cannot be interpreted as real numbers), but similarly the inputs are non-numeric for the exact same reason. The description of a neuron as a tiny little arithmetic computer is useful but untrue -- rather like the "atom as a tiny little solar system with planets" metaphor, or the "electricity like water flowing through a wire" metaphor. You can get useful results, but they don't really describe the reality with accuracy worth a half cup of warm spit.
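For readers who want a concrete handle on exactly what's being criticized, the McCulloch-Pitts abstraction itself is tiny. Below is an illustrative Python sketch (the weights and threshold are arbitrary choices for the example, not from any cited source): the unit reduces all of its inputs to one weighted sum and one yes/no comparison, which is precisely the simplification at issue.

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: fire (1) iff the weighted input sum
    reaches the threshold; everything else about the signal is discarded."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With weights [1, 1] and threshold 2, the unit computes logical AND:
print(mcp_neuron([1, 1], [1, 1], 2))  # prints 1: both inputs on
print(mcp_neuron([1, 0], [1, 1], 2))  # prints 0: sum falls below threshold
```

Everything a downstream unit can ever learn about this one is that single 0-or-1 output; that is the whole model.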
 
Originally posted by drkitten

Real neurons are often not even approximately threshold devices as described by McCulloch-Pitts
I find that disturbing, because it completely undermines what little understanding I thought I might have about what neurons do, leaving me without any conceptual handle whatsoever. Is there an offer in there somewhere to replace it with something?
Real neurons perform non-linear and non-arithmetic functions on their inputs.
I've seen that suggested various places, but it is usually delivered with some disclaimers regarding its speculative nature, at least with regard to individual neurons versus groups of neurons.
A real neuron produces a sequence of pulses, not a numeric output value; interpreting the firing sequence as an arithmetic quantity is problematic
I don't see any fundamental reason to consider such an approach invalid. Counting raindrops might be considered problematic too, but it still seems reasonable to assume that a precise number of them must be landing within a given area at any one time, and that, independent of our ability to precisely count them, this number must vary from one moment to another by a certain amount. It might be said that this amount becomes numeric the instant we use numbers to describe it, but that seems like a bit of a philosophical indulgence to me.
I could add to this list that not only are the outputs from neurons non-numeric (the firing pulses often cannot be interpreted as real numbers), but similarly the inputs are non-numeric for the exact same reason
Could not the same complaint also be brought against just about any attempt to model a real-world phenomenon mathematically? I mean, where are the real numbers, really? It's very hard for me to view what happens at the lowest levels of a modern digital computer as the manipulation of actual numbers either.
The performance of a neuron can vary wildly and stochastically based on the chemical environment in which it finds itself
The performance of a transistor varies with its environment too, though not over as wide a range. As for "stochastically," again, I think that remains to be established with confidence.
 
Dymanic said:
I find that disturbing, because it completely undermines what little understanding I thought I might have about what neurons do, leaving me without any conceptual handle whatsoever. Is there an offer in there somewhere to replace it with something?

The Hertz, et al. book I cited is a reasonable place to start, if you can find it in the library (it's probably not worth buying just for this purpose). Some of the original PDP literature (for example, "Certain Aspects of the Anatomy and Physiology of the Cerebral Cortex," by Crick and Asanuma, in the standard McClelland/Rumelhart Parallel Distributed Processing) discusses this issue as well.

It shouldn't completely undermine your understanding; the McCulloch-Pitts model is used for a reason, because it's an understandable and computationally tractable model of how many neurons do function. Don't throw the baby out with the bathwater. But by the same token, don't assume that what is true for the bathwater is also true for the baby.


I don't see any fundamental reason to consider such an approach invalid. Counting raindrops might be considered problematic too, but it still seems reasonable to assume that a precise number of them must be landing within a given area at any one time, and that, independent of our ability to precisely count them, this number must vary from one moment to another by a certain amount. It might be said that this amount becomes numeric the instant we use numbers to describe it, but that seems like a bit of a philosophical indulgence to me.


This is a reasonable assumption only if you are willing to make the supporting assumption that one raindrop is pretty much the same as another, and that the distribution of raindrops in space and time is irrelevant below a certain level, and that the only information that the user is interested in is the total amount of rainfall. With raindrops, these are probably pretty good assumptions.

On the other hand, I could make similar statements about the distribution of ink on a page, that independent of our ability to precisely count (or measure them), the amount of ink on the page must vary from one page to the next. This statement, while true, misses a very important factor in printing. Our assessment of the page as readers is not based on the amount of ink on the page (in fact, we might consider that to be largely irrelevant), but on the distribution of ink. Ink distributed in a certain way becomes one letter, while the same amount of ink distributed in another way becomes a different letter. The total amount of ink on a page is at best a red herring, and at worst is totally misleading as a guide to the page.

Using this analogy as an illustration, you can see that there is a lot of information in the distribution of neural pulses -- in the signal itself -- that is essentially abstracted away when you apply a single numeric "firing rate" or "activation state" to a McC-P "neuron." We have substantial evidence that this kind of information can substantially affect neural behavior. But to the best of my knowledge, no one's been able to apply this evidence and these observations to building computational models of human-scale cognition [yet]. It's an active and ongoing research area.
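As a toy illustration of that abstraction step (hypothetical spike trains, nothing physiological): two pulse sequences with identical spike counts collapse to the same scalar "firing rate," even though their temporal structure is entirely different.

```python
# Two hypothetical spike trains over the same eight time bins (1 = spike):
burst   = [1, 1, 1, 1, 0, 0, 0, 0]  # four spikes clustered at the start
regular = [1, 0, 1, 0, 1, 0, 1, 0]  # four spikes evenly spaced

def firing_rate(train):
    """Collapse a spike train to a single number: spikes per bin."""
    return sum(train) / len(train)

# Under a pure rate code the two trains are indistinguishable...
assert firing_rate(burst) == firing_rate(regular) == 0.5
# ...even though the signals themselves clearly differ:
assert burst != regular
```

Whatever information lives in the clustering is gone the moment each train is replaced by the number 0.5.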


Could not the same complaint also be brought against just about any attempt to model a real-world phenomenon mathematically? I mean, where are the real numbers, really? It's very hard for me to view what happens at the lowest levels of a modern digital computer as the manipulation of actual numbers either.

The complaint isn't about an attempt to model. The complaint is about the attempt to read the properties of the model as properties of the real world. The map is not the territory -- the menu is not the meal. More precisely, the neuron is not "a tiny computer dedicated to arithmatic [sic] operations," although modelling it as one can produce interesting results. The problem is that the model deliberately oversimplifies (as all models do), and in doing so may lose or distort information to the point of becoming misleading.
 
Originally posted by drkitten

The Hertz, et al. book I cited is a reasonable place to start, if you can find it in the library (it's probably not worth buying just for this purpose).
Sounds like something I'd get a lot out of. I'll see about ordering it (from the library, that is).

This is a reasonable assumption only if you are willing to make the supporting assumption that one raindrop is pretty much the same as another, and that the distribution of raindrops in space and time is irrelevant below a certain level, and that the only information that the user is interested in is the total amount of rainfall. With raindrops, these are probably pretty good assumptions.
Let's say (without torturing the raindrop metaphor any further) that what we are assuming is that neural outputs are trains of all-or-nothing spikes, one of which (the spikes, not the trains) may be assumed to be pretty much like another (i.e., that differences in voltage or waveshape do not play a significant role, the coding system instead involving temporal differences in the arrival times of the signals at their destinations), and that this distribution in space and time (without dismissing anything above the level of Planck time) is exactly what we are interested in. Do you see the problematic nature of quantifying those distributions as practical, or fundamental?

Our assessment of the page as readers is not based on the amount of ink on the page (in fact, we might consider that to be largely irrelevant), but on the distribution of ink.
We might think of measuring the total amount of ink on the page as analogous to the type of information produced by (say) fMRI scanning. Limited, certainly, but not entirely useless; it can tell us something about the amount of information transfer taking place in a specific region.
you can see that there is a lot of information in the distribution of neural pulses -- in the signal itself -- that is essentially abstracted away when you apply a single numeric "firing rate" or "activation state" to a McC-P "neuron."
I can't, I'm afraid. Unless all you are saying is that we can catch the signal, but not the message (that goes in the 'painfully obvious' bin). Without a doubt, the substantive informational content resides above the signal level (or below it, at the level of physical structure), and access to that coding scheme (or, more likely, multiple superimposed coding schemes, unfortunately) would be more revealing. But it is not clear to me how it is that this understanding has been abstracted away by focusing on firing rate and activation state. We had nothing to abstract away in the first place if we never had the first clue about the coding scheme(s) in use above the signal level anyway.

The complaint isn't about an attempt to model. The complaint is about the attempt to read the properties of the model as properties of the real world. The map is not the territory -- the menu is not the meal.
Yes, yes, yes. But that's always a problem, isn't it? The only place I can think of where it is not a problem is when symbols are being manipulated according to the rules of a formal system of mathematics, with no hint of suggestion that the results will be mappable to any real world phenomenon whatsoever, and no guarantees regarding the consequences of attempting to do that. You presented it so as to suggest that this is somehow more of a problem in modelling neural activity than in anything else.
More precisely, the neuron is not "a tiny computer dedicated to arithmatic [sic] operations"
Now that I agree with, with only the small reservation that a single transistor or logic gate hardly deserves the distinction either.
 
Dymanic said:


Let's say (without torturing the raindrop metaphor any further) that what we are assuming is that neural outputs are trains of all-or-nothing spikes, one of which (the spikes, not the trains) may be assumed to be pretty much like another (i.e., that differences in voltage or waveshape do not play a significant role, the coding system instead involving temporal differences in the arrival times of the signals at their destinations), and that this distribution in space and time (without dismissing anything above the level of Planck time) is exactly what we are interested in. Do you see the problematic nature of quantifying those distributions as practical, or fundamental?


Fundamental, as it relates not to the physics of the neuron, but to the topology of the representation as real numbers.

Perhaps this example might make it clearer. Really, it's straight from the pages of Minsky and Papert (Perceptrons, 1969 [I believe]) and their proof of the limitations of perceptron learning, but it applies to Mc-P neurons in general. Assume that a neuron produces a sequence (you can even assume a length of two) of wide and narrow pulses, in any order. I will further assume, without substantial loss of generality, that the "activation" function you impose upon this neuron treats a "wide" pulse as being more active (having a higher numeric value) than a narrow pulse.

I will also assume a very simple network of two neurons, with the output of one attached to the input of the second.

I challenge you to explain how the threshold of the second Mc-P neuron could be set to fire if and only if there were both narrow and wide pulses in the output of the first. More specifically, I claim that this is impossible, by the Minsky/Papert XOR proof.

Let's start out by assuming that a sequence of two short pulses (dit-dit) corresponds to an activation function of X, which is less than the threshold T of the second neuron (X < T). However, both dit-dah (Y) and dah-dit (Z) trigger the second neuron, so Y > T and Z > T. Now, consider the activation level W corresponding to a pattern of dah-dah. Since this contains more wide pulses than either dah-dit or dit-dah, it must be even greater than Y or Z. But if W > Y, and Y > T, then W > T, and the second neuron will fire (incorrectly).

More generally, I claim that for the four firing patterns dit-dit, dit-dah, dah-dit, and dah-dah, ANY mapping you produce onto the real number line will of necessity have a highest and a lowest element. It is impossible for an Mc-P neuron to respond only to the middle two elements, because the threshold function of an Mc-P neuron can only respond to linearly separable areas of input space. In other words, there is a particular response that can be learned by a real neuron, but not by an Mc-P neuron; the Mc-P model has limitations imposed directly by the assumption that the output of the prior neuron can be treated as a real number instead of as a genuine time-varying pulse sequence of varying widths.
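The linear-separability point can also be checked mechanically. Treating the two pulses as two binary inputs (wide = 1, narrow = 0), firing for exactly the "middle two" patterns is the XOR function, and a brute-force search (over an illustrative integer grid; this is a demonstration, not the general proof) finds no single threshold unit that computes it:

```python
import itertools

def fires(x1, x2, w1, w2, t):
    """Single Mc-P-style threshold unit over two binary inputs."""
    return 1 if w1 * x1 + w2 * x2 >= t else 0

# Fire iff exactly one pulse is "wide": the XOR pattern.
target = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

grid = range(-5, 6)  # illustrative search range for weights and threshold
solutions = [
    (w1, w2, t)
    for w1, w2, t in itertools.product(grid, grid, grid)
    if all(fires(a, b, w1, w2, t) == out for (a, b), out in target.items())
]
print(solutions)  # [] -- no weights/threshold in the grid realize XOR
```

The search comes up empty for the same reason as the W > Y > T argument above: any line through the input plane puts dah-dah on the wrong side.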

This has some other implications. It's impossible to build an Mc-P neuron that acts as a voltage follower or delay line, because the mathematics of the Mc-P model don't permit it. It also can't be built as a rectifier, either half- or full-, nor can it be used as a voltage regulator (either band-stop or band-pass). On the other hand, we've got physical evidence of neurons that act as delay lines and/or voltage followers in the brain.

What we need instead is a more detailed (while still computationally tractable) model that can handle input and output signals that are neither numeric, nor necessarily even well-ordered. (At which point, we're almost talking about a Turing machine here, but, like, whatever.....) Unfortunately, no one's been able to satisfy all those constraints at the same time [yet], nor do I expect success in the immediate future.
 
Originally posted by drkitten

What we need instead is a more detailed (while still computationally tractable) model that can handle input and output signals that are neither numeric, nor necessarily even well-ordered.
So XOR cannot be achieved in a single-layered system. AND, OR, and NOT aren't so tough, though. The possibility that we may be missing something mysterious, disorderly, and non-numerically-mappable is intriguing. But before we go off in search of that, why can we not simply assume a multi-layered system?
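For what it's worth, the multi-layered fix the question alludes to does dispose of the XOR obstacle within the abstract model itself; a minimal illustrative Python sketch (the weights are chosen by hand for the example, one choice of many):

```python
def fires(inputs, weights, threshold):
    """Single threshold unit."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def xor_two_layer(x1, x2):
    # Hidden layer: h1 detects "x1 and not x2", h2 detects "x2 and not x1".
    h1 = fires([x1, x2], [1, -1], 1)
    h2 = fires([x1, x2], [-1, 1], 1)
    # Output layer: OR of the two hidden units.
    return fires([h1, h2], [1, 1], 1)

print([xor_two_layer(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [0, 1, 1, 0]
```

The hidden layer carves the input space into two separable halves that the output unit can then combine, which a single unit cannot do.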
 
Dymanic said:
So XOR cannot be achieved in a single-layered system. AND, OR, and NOT aren't so tough, though. The possibility that we may be missing something mysterious, disorderly, and non-numerically-mappable is intriguing. But before we go off in search of that, why can we not simply assume a multi-layered system?

Because if we're talking about physical neurons, we can count the number of layers involved in the given experimental setup. In particular, we can show a single neuron (and therefore a single layer) doing things that would require an assumption of multiple layers.

The basic problem is that we've got physical neurons in Petri dishes that are doing things that Mc-P neurons can't do. Ergo, physical neurons aren't Mc-P neurons.
 
I've just found a *fascinating* discussion related to this here.

(I'm thinking of the section titled "Nerve Terminals" -- it's on page 32 of the MSWord Doc version of The Mindful Universe).

Not everyone seems to agree with everything Henry Stapp has to say (snark!) but he is at least a lot easier to follow than some writers on these subjects. (Which reminds me, it's about time I read Penrose's ENM again).
 
I believe we have gotten far afield from the original discussion about "logic vs emotion." Nevertheless, let me comment on DrKitten's remarks and return to a personal observation about the matter of logic/emotion.
___________________________________________
drkitten said:
I will also assume a very simple network of two neurons, with the output of one attached to the input of the second.
...
More generally, I claim that for the four firing patterns dit-dit, dit-dah, dah-dit, and dah-dah, ANY mapping you produce onto the real number line will of necessity have a highest and a lowest element. It is impossible for an Mc-P neuron to respond only to the middle two elements, because the threshold function of an Mc-P neuron can only respond to linearly separable areas of input space. In other words, there is a particular response that can be learned by a real neuron, but not by an Mc-P neuron ...
____________________________________________

As I recall, neurons, in general, abide by the "all-or-none" law. Either they fire or they don't. However, "frequency of axonic transmission" is a variable.

I contend that this design is sufficient for a basis of all brain processes - including "logic and emotion."

In physics, energy is described in terms of vectors (having direction and magnitude). If physics can reduce all interactions of the universe to this level, having a similar technique would allow the brain to equally represent all things of the universe. The "direction" within the brain is flow within neural circuits. Frequency of axonic transmission represents the magnitude.

Hold that thought for a moment ...

If the brain can mimic the 1st law of thermodynamics, perhaps it can mimic "survival of the fittest." In such a scenario, the brain would, first, receive and interpret input about the environment (à la Dominic Massaro's Fuzzy Logical Model of Perception or O. G. Selfridge's Pandemonium model). Within the perceptions, multiple behaviors will be recognized as being appropriate for a perceived environment. Basically, you have recognized what "fits" the environment or "what you can do" within the constraints of reality.

For instance, if you are in a grocery store looking at a can of tomato sauce, you have the option of picking it off the shelf, looking elsewhere, sitting down, doing jumping-jacks, running for the exit, hollering for help, or a myriad of other behaviors. Any of these behaviors "fit" the environment, and you have any of them to choose from.

Now, if you "want" a can of tomato sauce, logic says you should pick it off the shelf. However, "want" betrays an energy empowerment to the decision. This underlying empowerment can be deemed "emotional," though decidedly of low emotion. Even so, if no other opportunities of higher "empowerment" exist, the strength of your desire for tomato sauce will likely "win" against its weak competitors (sitting down, running for an exit, etc.). Thus, "logic" would have an emotional underpinning.

Given the environment, of all of the behaviors you can enact, the behavior which has the greatest internal empowerment (want, desire, etc.) will control behavior. In other words, of all the behaviors which "fit" the environment, the one of highest empowerment takes control. (This competition likely happens in the nucleus Reticularis Thalami of the brain - see James Newman and Bernard Baars.)

"Logic" is a behavior. If someone chooses "logic," it comes with all of the connotations, beliefs, and biases of the person. It appears "non-emotional" because it exhibits low energy usage. Why should it be naturally selected? By being a low energy user, it conserves energy - the most precious of all commodities to any organism. Thus, natural selection prefers "logic" and low emotion (energy conservation) over emotional excess.

Given the above scenario, to represent anything, the brain need only create an energy pattern to represent something (a neural circuit) and a magnitude with which to empower the circuit. The empowerment prepares for any competition presented by any future environment. In other words, the brain only needs "no dits," "dits," and a variable frequency of "dits." Neural circuits are either "on" or "off," and when they are "on," they are "on" at variable strengths.

Just a view from the peanut gallery ...
 
Welcome JAK

Originally posted by JAK

I believe we have gotten far afield from the original discussion about "logic vs emotion."
That happens around here a lot, especially when the topic has anything to do with 'what makes humans tick'. I agree with Bill Hoyt's observation on page 1 that the OP's question was based on a false premise anyway.
As I recall, neurons, in general, abide by the "all-or-none" law. Either they fire or they don't. However, "frequency of axonic transmission" is a variable.
That's what I thought too. I was hoping Drkitten would elaborate a little more on his claim that: we've got physical neurons in Petri dishes that are doing things that Mc-P neurons can't do.
JAK:

If the brain can mimic the 1st law of thermodynamics, perhaps it can mimic "survival of the fittest."
The 'pruning' stage of early brain development certainly seems to qualify.
Thus, "logic" would have an emotional underpinning.
I think it was also suggested somewhere up in the thread that emotion has a logical underpinning.

Since the Stapp article I linked to seems relevant not only to this discussion but another currently active one, I've decided to follow a suggestion I just received to start a new thread on that.
 
