
Prescriptive, descriptive, and motivated morality

In arguing against that position, I would be hard pressed to find a less scientific source than newspaper advice columns. That being said, these columns are filled with perennial questions such as "a married relative of mine is having an affair; should I confront him, should I tell his wife, or should I ignore the situation?" and "a friend and co-worker of mine is stealing from the company I work for; should I confront her, should I tell our boss, or should I ignore the situation?" I have met (and read about) many people who come to very different solutions to these situations. That's why I have a hard time believing there is a "universal" or even a widely-accepted system of morality.

One could argue that this is simply a byproduct of ambiguous reference, i.e. the newspaper column dramatically underspecifies what the situation is, and if all people rendering opinions had full knowledge of everything in the situation (rather than having to fill in gaps, which brings in obvious and irrelevant biases), then one might get a more consistent response. Similarly, not every situation is equivalent even if summarized similarly, so you can't compare how people judge separate acts of infidelity - it has to be the exact same one in all respects relevant to anyone.

I make no claim that it is true, but am pointing out the fallacy in your argument.
 


Are you suggesting that if we take a specific case of infidelity and document all relevant and irrelevant details before giving the issue to people to decide, then "regardless of religious or cultural background, humans [would] make the same decision"? I have a hard time believing that. In fact, if we used a specific case involving an employee stealing from an employer, I am not even convinced that people of the same religious and cultural background would agree on the case.
 

The text you quoted but evidently did not understand said:
I make no claim that it is true, but am pointing out the fallacy in your argument.

:rolleyes:
 
saizai said:
It is unnecessary to teach anything else other than how to correctly understand the current situation of those entities, and how to predict the consequences of one's own actions with respect to them. Full morality will precipitate.

That simple. Hopefully the consequences are clear.

I disagree. Many people have those empathic abilities and come to diametrically opposed positions. Take physician-assisted suicide: people firmly against it and people firmly in favor of it both claim to be empathetic towards the people involved.

So what exactly would one teach in this situation in order to precipitate full morality?
 

That is indeed an interesting question.

The argument for one who supports PAS is that they empathize with the patient, who (let's say) is in unmanageable pain that is likely to inevitably increase over time until they die, and therefore wishes to end their life early rather than drawing it out.

This is empathy with the patient, and the patient's desires, as seen from the patient's perspective of what is best for themselves.

Further, the empathy with the physician is that, while it is a difficult thing for the physician to do, it is within the scope of their practice inasmuch as they are called upon to relieve suffering for the benefit of the patient (here, the patient being the decider of what constitutes 'benefit'), and therefore the potential harm to the physician is minimal and the potential benefit (of a job well done, with thanks from the patient) significant.


Would you mind laying out the empathic argument of someone who opposes PAS, as I do not claim to understand it? I would then be happy to address your question.
 

I agree that suicide and physician-assisted suicide are not immoral. However, the Catholic Church and some Protestant denominations are very strongly opposed to any form of suicide. They will argue that any patient wanting to die is not being rational and thus empathizing with an irrational or mentally-unbalanced patient is not an appropriate method of determining morality.

The church might also argue against abortion by saying that empathy with the unborn child leads them to be against abortion except in the most unusual of cases.

ETA: Also there are physicians who don't work on the model of my-highest-goal-is-to-relieve-suffering-for-the-benefit-of-the-patient, but instead believe that the highest goal is to do no harm (and consider death to be harm). They might empathize with the patient and agree that the patient has the right to commit suicide but want no part of the actions involved in suicide.
 
Prescriptive morality is still the dominant force, and probably the first. Somebody says that they know how people should act, from authority. People believe them. It plays out the way we have seen with fundamentalist religions of all kinds: it has certain benefits (when the dictums are in fact good* ways to be ethical). It also has major drawbacks; I won't bother going into the details, since this has been discussed ad nauseam already.
What does "when the dictums are in fact good* ways to be ethical" mean?

Descriptive morality is where one simply says what people actually do. It is the thing that people who have religion will invariably have a problem with, because it gives no direction. It seems to say that everything is okay; people can do anything; chaos; etc. This isn't quite so, but nevertheless the problem is there.
That's because discussions of morality are normative and related to guiding behavior rather than describing the world we live in. Descriptive morality in this context seems to mean nothing other than a direct observation of the world. Maybe I'm reading this incorrectly.

The reason people really make moral decisions is a balance of empathy: trying to maximize the benefit for things with which they empathize, not particularly caring about ones with which they don't. This is true both of decisions we socially consider to be very ethical - e.g. self-sacrifice to save your family, a stranger, etc
I don't see how empathy applies to saving the life of a stranger. I may never empathize with a stranger, yet still risk my life to save them. I think this falls under a discussion of supererogatory acts which seem to have little to do with empathy.

- and of ones we don't - e.g. neo-Nazism, which is essentially a matter of empathizing overwhelmingly with one's race and not with others'. The reduction of this empathy towards someone is proven to reduce decisions that benefit them. (Viz. Zimbardo etc.)
But if those individuals have balanced the empathies in their own minds as a collective group, why aren't their empathies as valid as yours? Is it simply that you disapprove of their empathies? Every denunciation implies a moral law from which to differentiate the good from the evil. What gives you the right to say the neo-Nazi is wrong if he is only empathizing with others who believe similarly?

The answer, IMO, is to acknowledge the realities of descriptive morality that I refer to above, and apply them as teaching. If people act for the benefit of things and people with whom they empathize, to the possible detriment of others, then the answer is simple: intentionally teach people to feel empathy with populations and entities whom we socially want to be benefited.
Who decides what populations and entities should benefit and why?

On a side note: I would find it very difficult to rationalize capitalism from this framework, and I think Adam Smith would agree, when capitalism often involves people working in their own best interests to the detriment of others. That's the nature of competition: there is a winner and a loser.
 
I agree that suicide and physician-assisted suicide are not immoral. However, the Catholic Church and some Protestant denominations are very strongly opposed to any form of suicide. They will argue that any patient wanting to die is not being rational and thus empathizing with an irrational or mentally-unbalanced patient is not an appropriate method of determining morality.

I would disagree with that; the only argument that you could consistently make without referring to a Yahweh-dictum would be that you are empathizing instead with that same patient in their "mentally-balanced" state.

That does become quite complicated, especially when each state claims that the other state is the undesirable one, and each is equally happy in their own way.

The church might also argue against abortion by saying that empathy with the unborn child leads them to be against abortion except in the most unusual of cases.

And I am totally willing to accept that argument. It is again a question of which harm - that to the child (is it sentient? that impacts the empathizability greatly) vs that to the mother or other involved parties - is more empathizable.

ETA: Also there are physicians who don't work on the model of my-highest-goal-is-to-relieve-suffering-for-the-benefit-of-the-patient, but instead believe that the highest goal is to do no harm (and consider death to be harm). They might empathize with the patient and agree that the patient has the right to commit suicide but want no part of the actions involved in suicide.

Not acting to relieve suffering is also harm through inaction, for which they have to be held equally liable.

It becomes a question of which harm is more empathizable, the suffering or the death.
 
What does "when the dictums are in fact good* ways to be ethical" mean?

Please re-ask, referring to whatever was unclear about the asterisked part.

That's because discussions of morality are normative and related to guiding behavior rather than describing the world we live in. Descriptive morality in this context seems to mean nothing other than a direct observation of the world. Maybe I'm reading this incorrectly.

I don't understand your response. Elaborate / rephrase?

I don't see how empathy applies to saving the life of a stranger. I may never empathize with a stranger, yet still risk my life to save them. I think this falls under a discussion of supererogatory acts which seem to have little to do with empathy.

I disagree; I think you do empathize with the stranger. Not as much as you empathize with a friend, true, but still somewhat, and enough to take that action. It is actually neurologically certain that you do, assuming that you are one of the ~98% of people for whom mirror neurons work in the usual way.

But if those individuals have balanced the empathies in their own minds as a collective group, why aren't their empathies as valid as yours? Is it simply that you disapprove of their empathies? Every denunciation implies a moral law from which to differentiate the good from the evil. What gives you the right to say the neo-Nazi is wrong if he is only empathizing with others who believe similarly?

I don't understand this part either; rephrase / elaborate?

Who decides what populations and entities should benefit and why?

That is axiomatic. I don't think there is any way to argue it except on utilitarian grounds, which resolves to a question of zero-sum vs non-zero-sum games, and which one the world is. If it's non-zero-sum, then everyone empathizing (cooperating) with everyone is the optimum strategy (as proven by TitForTat in the famous iterative prisoner's dilemma).
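
To make that concrete, here's a minimal, purely illustrative Python sketch of an iterated prisoner's dilemma - the payoff numbers and the opposing strategies are my own assumptions for the demo, not taken from Axelrod's actual tournaments:

```python
# Minimal iterated prisoner's dilemma sketch (illustrative assumptions only).
# 'C' = cooperate, 'D' = defect; payoffs follow the standard T > R > P > S ordering.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def always_cooperate(opponent_history):
    return 'C'

def play(strategy_a, strategy_b, rounds=200):
    """Return the total scores of two strategies over repeated rounds."""
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each strategy only sees the *other* player's past moves
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == '__main__':
    print('TFT vs TFT:       ', play(tit_for_tat, tit_for_tat))      # mutual cooperation every round
    print('TFT vs defector:  ', play(tit_for_tat, always_defect))    # TFT only loses the first round
    print('Nice guy vs defector:', play(always_cooperate, always_defect))  # unconditional niceness gets exploited
```

The thing to notice is that two Tit-for-Tat players end up with the highest combined score (constant mutual cooperation), while defection only pays against someone who never retaliates - which is the non-zero-sum intuition I'm leaning on.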

On a side note: I would find it very difficult to rationalize capitalism from this framework, and I think Adam Smith would agree, when capitalism often involves people working in their own best interests to the detriment of others. That's the nature of competition: there is a winner and a loser.

See above - this is a classic case of zero-sum vs non-zero-sum.

It can be shown that the economy is non-zero-sum (i.e. you can generate "new" wealth rather than only redistributing) - but that I think is beyond the scope of this discussion.
 
Please re-ask, referring to whatever was unclear about the asterisked part.
Who determines if the ethics are in fact good, and for what reasons?

I don't understand your response. Elaborate / rephrase?
Your statement seems to have said nothing. I think you're inventing your own terms, which is not helpful. If by descriptive ethics you mean situational ethics I understand you, but that statement as it stands says/means nothing.

I disagree; I think you do empathize with the stranger. Not as much as you empathize with a friend, true, but still somewhat, and enough to take that action. It is actually neurologically certain that you do, assuming that you are one of the ~98% of people for whom mirror neurons work in the usual way.
Why would we be neurologically predisposed to do such acts when such behavior negates millions of years of the evolutionary processes? We didn't survive the ages by ignoring our instincts for self-preservation. Nothing regarding supererogatory acts fits within the framework of this discussion.

I don't understand this part either; rephrase / elaborate?
I think you clearly understand my questions and are deliberately avoiding the issues they raise within your proposed ethical system...Let me simplify it. Why was the average foot soldier in 1940 Germany "evil" for fighting for Nazism? His empathies/beliefs guided him, as well as millions of others, to fight. Why were our (Allied) empathies right, and theirs wrong?



Did anyone else have a hard time understanding my point?

That is axiomatic. I don't think there is any way to argue it except on utilitarian grounds, which resolves to a question of zero-sum vs non-zero-sum games, and which one the world is. If it's non-zero-sum, then everyone empathizing (cooperating) with everyone is the optimum strategy (as proven by TitForTat in the famous iterative prisoner's dilemma).

You're telling me that the decision of who benefits and why is simply evident, without proof or argument...when have humans EVER participated in universal empathy? The problem with your system is that it works great in theory; it just gets all f'd up when humans get involved...

It can be shown that the economy is non-zero-sum (i.e. you can generate "new" wealth rather than only redistributing) - but that I think is beyond the scope of this discussion.
Again, in the real world, there are winners and losers. Very few times in life do we see win/win situations, especially in economic/competition type scenarios.
 
Who determines if the ethics are in fact good, and for what reasons?

N/A, in my ('motivated') system.

Your statement seems to have said nothing. I think you're inventing your own terms, which is not helpful. If by descriptive ethics you mean situational ethics I understand you, but that statement as it stands says/means nothing.
By "descriptive" I mean the sociological / neurological sense of "what does ethics actually mean in terms of behavioral correlates".

A more succinct summary is:
Prescriptive morality = What "should" we do? (A priori)
Descriptive morality = What "do" we really do? (And why, socially/neurologically speaking?)
Motivated morality = What should we do, given the answers to the previous line?

Why would we be neurologically predisposed to do such acts when such behavior negates millions of years of the evolutionary processes? We didn't survive the ages by ignoring our instincts. Nothing regarding supererogatory acts fits within the framework of this discussion.
"Why" is a teleological question of evolution, which cannot be logically answered without resorting to "just so" type stories, so I won't answer that.

All I can say is that we are neurologically predisposed, as the current research into mirror neurons and empathy indicates.

I think you clearly understand my questions and are deliberately avoiding the issues they raise within your proposed ethical system...Did anyone else have a hard time understanding my point?
*shrug* I am not being deceitful; I honestly did not understand what you meant in that paragraph. If I were intending to ignore you, I would have just done so and not replied asking for clarification.

You're telling me that the decision of who benefits and why is simply evident, without proof or argument...when have humans EVER participated in universal empathy? The problem with your system is that it works great in theory; it just gets all f'd up when humans get involved...
I don't see any question or point of theory in this paragraph. If you are saying that in the past things have not worked optimally, then sure I agree, but I don't think that's necessarily relevant.

Again, in the real world, there are winners and losers. Very few times in life do we see win/win situations, especially in economic/competition type scenarios.
I advise you to look into the iterative prisoner's dilemma. TitForTat is the best strategy, and ideally it results in constant collaboration.

Also, I suggest you take a look at non-zero-sum theories of economics, and specifically about how wealth can increase overall rather than remaining a constant that is just redistributed and therefore zero-sum winner/loser.

But I think that arguing economic theory is beyond the scope of this discussion, so I won't go into that any further.
 
First. Let me preface my response with a general statement. Let me state for future reference that I don't need any more gentle prompts to go here, read this, look into this. I don't appreciate the intellectual arrogance, it doesn't look good on anyone... We're all very impressed that you can use words like teleological and so forth, but for the future, you can save the Philosophy 101 vocab quiz for the classroom. Thanks. [/Soap Box Off]

After reading your sig and previous threads I should have avoided this conversation altogether. Humility clearly isn't one of your empathies...

"Why" is a teleological question of evolution, which cannot be logically answered without resorting to "just so" type stories, so I won't answer that.

All I can say is that we are neurologically predisposed, as the current research into mirror neurons and empathy indicates.
If it has to resort to "just so" answers in the first place it's fallacious from the start.

Convenient for you that this moral framework of yours lives in a vacuum away from practical effects. :boggled: If everyone could be reduced to just a meat machine we'd have a perfect world, but humans often act irrationally within a society, and sometimes for the best. Paging Dr. King...

*shrug* I am not being deceitful; I honestly did not understand what you meant in that paragraph. If I were intending to ignore you, I would have just done so and not replied asking for clarification.
Right. Maybe someone who isn't playing games will answer it instead.

I don't see any question or point of theory in this paragraph. If you are saying that in the past things have not worked optimally, then sure I agree, but I don't think that's necessarily relevant.
No this is a statement about how the REAL world actually behaves. Your theory only works with ideal participants. Humans don't behave in a linear fashion...

I advise you to look into the iterative prisoner's dilemma. TitForTat is the best strategy, and ideally it results in constant collaboration.
Yeah, I've seen it before...too bad human beings don't operate like machinery or you'd have something there.

Also, I suggest you take a look at non-zero-sum theories of economics, and specifically about how wealth can increase overall rather than remaining a constant that is just redistributed and therefore zero-sum winner/loser.
But I think that arguing economic theory is beyond the scope of this discussion, so I won't go into that any further.
Yeah. I'll stick with books like The Wealth of Nations which have ideas that work in the real world.
 
First. Let me preface my response with a general statement. Let me state for future reference that I don't need any more gentle prompts to go here, read this, look into this. I don't appreciate the intellectual arrogance, it doesn't look good on anyone... We're all very impressed that you can use words like teleological and so forth, but for the future, you can save the Philosophy 101 vocab quiz for the classroom. Thanks. [/Soap Box Off]

It's not really meant to sound that way. If I assumed that you knew the stuff I was referring to, that would be arrogant too, ne? And to adequately answer your question, I feel I have to refer to it. So, "look into x" is the politest way of doing that that I know.

And I'm not trying to use particularly difficult vocab; it doesn't really occur to me to do that. I just use whatever the appropriate word seems to be.

You're welcome to believe me or not of course; all I can say is that I am sincere.

After reading your sig and previous threads I should have avoided this conversation altogether. Humility clearly isn't one of your empathies...
Clearly not; I'm an arrogant SOB. I just try to be a correct and kind arrogant SOB who readily admits when he's wrong. Or arrogant. :)


If it has to resort to "just so" answers in the first place it's fallacious from the start.
If "it" is "trying to explain teleologically what the 'purpose' of evolutionary forces is", then yes you're right.

All you can say is that it has certain possibly adaptive or maladaptive or neutral results. Even a trait being maladaptive doesn't necessarily mean it will die out; it just has to be not maladaptive enough to kill people off.

So - "why" did humans develop wired-in empathy? *shrug*

"How" - that's answerable, but I don't know the answer since I know very little about the neurological evolution of humans (and haven't seen anyone who does; I don't think the data on which to base it is available... but it'd be an interesting question no doubt).

"That" humans did develop it? Yes, very definitely; if you want details I can point you to a bunch of current research in the matter. It's a very hot topic in neuroscience these days.

Convenient for you that this moral framework of yours lives in a vacuum away from practical effects. :boggled: If everyone could be reduced to just a meat machine we'd have a perfect world, but humans often act irrationally within a society, and sometimes for the best. Paging Dr. King...
It doesn't live in a vacuum; I just don't see any way to say what the practical effects "should" be except a priori, and that would be no different than the prescriptive morality that I disagree with.

And if anything, my founding morality on empathy is the very opposite of treating people like 'meat machines', unlike many other versions of morality.


As for your other comments, you say that humans don't operate in the way I describe. Show me how that's true (except when they are following a commandment a priori, e.g. theologically, that overrides what their empathy would decide) and I'll be happy to amend my theory.

Unlike prescriptive morality, I don't say that there is an 'ideal' morality - I'm neutral about that, since I see it as an axiom about which people disagree. I just say that if everybody had more empathy for their neighbors, then you would get more cooperative behaviors, and therefore (assuming at least a certain amount of non-zero-sum in the world) things would get better for everyone.

I just stop short of saying that there is some reason things "should" get better for everyone - that would be axiomatic. I think it's a good idea yeah, but that's because that happens to be my axiom; I don't insist that anyone else share it.

I didn't claim anywhere that that is currently true; indeed, group psychology everywhere acts against it, by increasing in-group empathy at the expense of out-group empathy. I do claim that this effect could be mitigated by changes in the methods of raising kids and various other social training (mostly implicit), and if it were done, there would be significant resulting changes in social psych with profound practical consequences.
 
That does become quite complicated, especially when each state claims that the other state is the undesirable one, and each is equally happy in their own way.

I'll accept that. My main point was that complicated (or at least partially complicated) situations will arise often enough to make it inaccurate to say "[it is] that simple."
 

I think it is simple in that my description is accurate - whatever you have the most empathy with overall is the one that will win a conflicted or complicated situation. Certainly I agree that such situations are common; "simple" only refers to the underlying rules, not to the resulting behaviors.

Sorta like Conway's Game of Life - very simple rules, but Turing-equivalent complexity of outcome.
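
For anyone unfamiliar with it, here is a minimal sketch of those rules in Python - the set-of-cells representation and the glider pattern are just my own illustrative choices:

```python
# Conway's Game of Life: two rules decide every cell, yet the global behavior is
# rich enough to be Turing-complete. Live cells are stored as a set of (x, y) pairs.

def neighbors(cell):
    x, y = cell
    return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """One generation: a live cell survives with 2-3 live neighbors; a dead cell is born with exactly 3."""
    candidates = live | {n for c in live for n in neighbors(c)}
    return {c for c in candidates
            if (c in live and len(neighbors(c) & live) in (2, 3))
            or (c not in live and len(neighbors(c) & live) == 3)}

# A glider: five cells whose pattern endlessly propels itself across the grid.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same five-cell shape, shifted diagonally by one cell after 4 generations
```

Two rules and a handful of lines, yet patterns built from them can compute anything a Turing machine can - which is the sense in which simple underlying rules don't imply simple outcomes.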

I think that you will find that for the situations you described, if you increase someone's empathy for one aspect over the other, that will change their resulting moral decision.

I can't say which one "should" have more empathy for; indeed the question doesn't really make sense to me.

It seems to me that it is optimal to have as much empathy as possible equally for everything and everyone, even though that will occasionally result in stalemates / coin-toss situations (like the 'warring personalities' thing you just responded to - though that would probably be tipped in the favor of the 'normal' personality once you add in empathy for the people being affected by this person). But that's my axiomatic belief; I don't see any way I can justify it rationally as being necessarily "true" in any useful sense that's not circular, so I leave it open.
 
OK, I'm a little confused now, Saizai. What you're saying is that if everyone lived by the golden rule we would all be better off? Well, yeah. Either you are descriptive or you're prescriptive when it comes to morality. I don't see any middle ground. You either mean this for everyone, in which case you are prescribing a moral system; or you are describing your own particular system which we may find interesting but not particularly of any value.

All morality is motivated. Without motivation there is no valuation and, hence, no morality.
 
Sorry to be snarky earlier. I have a natural aversion to philosophers...

Unlike prescriptive morality, I don't say that there is an 'ideal' morality - I'm neutral about that, since I see it as an axiom about which people disagree.
Yet you still see neo-Nazism as something to be frowned upon, so you really AREN'T NEUTRAL because you're making a denunciation. You apparently believe one to be wrong and the other to be right, correct?

I just say that if everybody had more empathy for their neighbors, then you would get more cooperative behaviors, and therefore (assuming at least a certain amount of non-zero-sum in the world) things would get better for everyone.
You know what they say, "If ifs and buts were candy and nuts...." You're proposing something that, in theory, is wonderful, but in reality doesn't exist, and there's no way around that. Your system presumptively rests on the inherent goodness of humanity...which I think is its fatal flaw.

What happens when one group's "empathy" intersects and violates another group's "empathy"...
 
OK, I'm a little confused now, Saizai. What you're saying is that if everyone lived by the golden rule we would all be better off? Well, yeah. Either you are descriptive or you're prescriptive when it comes to morality. I don't see any middle ground. You either mean this for everyone, in which case you are prescribing a moral system; or you are describing your own particular system which we may find interesting but not particularly of any value.

All morality is motivated. Without motivation there is no valuation and, hence, no morality.

:clap: Nice post.

This reminds me of when I asked someone if there were absolutes in the world, and they told me "no"... I wonder if they were absolutely sure about that.........

Purpose, or motivation, is intrinsic to any discussion of morality. Without purpose those discussing the issue will just argue in circles.
 
Yet you still see neo-Nazism as something to be frowned upon, so you really AREN'T NEUTRAL because you're making a denunciation. You apparently believe one to be wrong and the other to be right, correct?

Incorrect. I believe that neo-Nazism results in harm to groups with which I empathize. I also believe that, for those neo-Nazis, my empathy-based morality perfectly describes their behavior and affective mindstate, and that to them, my behavior is probably immoral.

So, I find their behavior immoral from my perspective, but I do not find it in any sense "objectively" immoral or "wrong".

You know what they say, "If ifs and buts were candy and nuts...." You're proposing something that, in theory, is wonderful, but in reality doesn't exist, and there's no way around that. Your system presumptively rests on the inherent goodness of humanity...which I think is its fatal flaw.

What happens when one group's "empathy" intersects and violates another group's "empathy"...
The part that does exist is that you can train people to have empathy for other groups. It is not necessary that group empathies conflict.

And that's the part that dictates a way to change the world if that is your goal. Not by convincing someone that their actions are immoral, but just by getting them to empathize with the victims of their actions, and to understand the consequences of their actions.
 
Incorrect. I believe that neo-Nazism results in harm to groups with which I empathize.
To which I would ask, "So what?"

So, I find their behavior immoral from my perspective, but I do not find it in any sense "objectively" immoral or "wrong".

I wonder whether you'd think differently if you were stuck in a concentration camp in WWII.

Is it objectively wrong to hack a two-year-old up with a knife? Would this practice ever be subjectively acceptable?

In light of your response I think Ichneumonwasp's statement still stands:

"you are decribing your own particular system which we may find interesting but not particularly of any value."
 
