
Better the illusions that exalt us...

Moreover, you can claim that the apple tree sprang from the orange seed all you want, and that you are more moral and exalted for believing as much.

You might be exalted, you are probably not more moral, and you are definitely delusional regarding where the apple tree came from.


This pains me to say, but...well said.

...and if a, er um, snake, let's say, were to hypothetically "encourage" a woman to taste an apple born of the apple tree sprung from the orange seed, and there were no witnesses, then...


:D
 
This pains me to say, but...well said.

...and if a, er um, snake, let's say, were to hypothetically "encourage" a woman to taste an apple born of the apple tree sprung from the orange seed, and there were no witnesses, then...


:D

This pains me to say, but "thank you". Really.... and lol on the last part.
 
Moreover, you can claim that the apple tree sprang from the orange seed all you want, and that you are more moral and exalted for believing as much.

You might be exalted, you are probably not more moral, and you are definitely delusional regarding where the apple tree came from.

So, I just realized that articulett said something that I nearly 100% agree with.

I just checked outside to see if the sky was falling. It's not. :relieved:
 
No, because ethicality depends on a threshold amount of harm being done, not just any at all.

I don't understand why people are arguing with me here; all I am doing is showing that every system can be reduced to utility theory given a suitable definition of utility.

It boggles my brain why some people are so adamant that they make decisions that have nothing to do with utility. How is such a thing even possible!?!?



What is your definition of utility? Is it happiness, maximized happiness for some group, well-being, what?

That everything in ethics ultimately boils down to pleasure/pain principles gets us nowhere and doesn't address issues of utilitarianism vs. deontology. Of course everything boils down ultimately to valuation, and valuation in humans depends critically on our basic motivational states/pleasure/pain/well-being.

We can use the Greek Eudaimonia if we want. I am not claiming that ethical decisions have nothing to do with happiness, and I don't know where you got that idea (though you certainly seem to have gotten it from somewhere). Our sense of fair play is based on certain standards of happiness/well-being. That isn't the issue that I addressed when I responded to Robin.

Where utilitarianism breaks down as a completely universalizable ethical code concerns these issues of fairness for particular individuals since the general idea of utilitarianism is maximization of well-being for the greatest number. When there are losers in the game, yet well-being for the greatest number is increased, we tend to cry foul. It doesn't do any good to get lost in the issue of "well that depends on utility too" because that isn't the issue. The issue when we discuss fair-play is that sometimes maximizing well-being for the greatest number strikes us as wrong -- because we have built within us this idea of fairness, that we should avoid the suffering of even the one if possible.

That is why the Omelas example is so poignant. You guys seem to me to have obscured the issue by trying to appeal to the utility of the one within utilitarianism. But that isn't utilitarianism, which does not try to avoid all harm, but to maximize well-being and decrease harm as much as possible. The problem is that in some examples even the one person being abused strikes us as unfair. Maximize utility all you want and harm continues in examples that deal with scapegoating.

If you re-define your utilitarian position to account for these examples, then you need to look critically at the implications, because my guess is that you will have defined harm in such a way that it becomes useless. When Mill discussed the harm principle in regards to speech, he meant it as I did in my example with the carrots and cookies -- actual physical harm. Not offense, not hurt feelings, but actual physical harm.

When harm is defined in terms of anything bad, then it shows up one of the key problems with utilitarian calculus -- how do you assign how much harm is being done?

In the cookie/carrot example -- yes, drawn directly from monkey experiments -- neither child/monkey is physically harmed. One perceives that s/he has been slighted upon getting the carrot (I think it was apples and turnips with the monkeys, but I could be wrong). I did not intend the example to be a non-utilitarian enterprise. You asked how to separate harm and fairness. Defining harm as physical harm, this is an example of unfair practice that does not involve harm.

Now if we define harm in terms of anything that someone doesn't like, then of course, everything boils down to harm/non-harm. But that doesn't get us anywhere in dealing with tough ethical decisions.

And that is not utilitarianism. That is just a definition of terms.

So, I will repeat again: the issue here is not whether everything can be boiled down to however you want to define utility. The issue that got me into this discussion was a response to something Robin said that struck me as wrong.

The prior examples in this thread can be defined in a way that does not involve anyone ever discovering what happens to the comatose woman, while everyone sits happily at home, safe in their belief that nothing can harm her. But she still gets raped. And we still find that repulsive. The reason we do is that her personhood has been violated despite the fact that she could not, at the time, feel anything. We find it unfair even though neither she nor anyone else is aware of any harm occurring. We feel this because there are not only first-order forms of harm/pleasure, but second- and third-order forms. She exists at a level, in that example, where she cannot experience, so she cannot feel the harm. But we, if we knew about the rape, would consider it wrong, knowing that at a higher level her body had been violated.

Utilitarianism does not classically deal with this sort of issue because there is no way to operationally define what is going on. Utilitarianism was originally rooted specifically in pleasure/pain principles, not in second order principles of duty/concern/ etc. Harm, at least in my reading of it, had to be experienced by someone rather than be defined in general terms -- like, a general rule of don't violate a person's body even if they cannot feel it. That is why, in the example, she is unconscious.

The Omelas example is another in which harm specifically happens to someone -- actually in the story it was a group, wasn't it? -- but their harm is offset by the good that accrues to everyone else. Harm occurs, well tough noogies, because life and pleasure are maximized for everyone else; and it is important for the example that they could not live so well without the sacrifice of the few. This is just a stylized scapegoating example, but it shows one of the weaknesses of utilitarianism -- how do you make these calculations? How do you decide that those few people being harmed is not worth all the benefit accruing to the others? There really isn't any way. Utilitarianism, as a universalized moral philosophy, suffers the same problem that deontology does -- they are both empty propositions. They both sound great in theory, but when we try to fill them, we run into problems with both.
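The calculation problem can be made concrete with a toy sketch (all numbers invented purely for illustration): summing well-being favors the Omelas arrangement, while a fairness-style rule that judges a society by its worst-off member rejects it -- and nothing in the calculus itself tells you which aggregation to use.

```python
# Toy illustration of the aggregation problem. All numbers are invented;
# each society is a list of individual well-being scores.
omelas = [9, 9, 9, 9, -10]   # four flourish, one scapegoat suffers badly
modest = [5, 5, 5, 5, 5]     # everyone does tolerably well

def total_utility(society):
    """Classical utilitarian aggregate: sum of well-being."""
    return sum(society)

def worst_off(society):
    """A fairness-flavored rule: judge by the worst-off member."""
    return min(society)

# The two rules rank the societies in opposite orders:
print(total_utility(omelas), total_utility(modest))  # 26 vs 25
print(worst_off(omelas), worst_off(modest))          # -10 vs 5
```

With these (invented) numbers the utilitarian sum prefers Omelas by a single point while the worst-off rule rejects it decisively; tweaking the scores moves the boundary, which is exactly the "how do you make these calculations?" problem.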

Personally, I think there is a very good reason why we run into these problems and why they seem like empty propositions. I think our brains are structured to use two separate general propositions to deal with moral issues and we shift between them. Those two propositions are consequentialism (which broadly maps to classical utilitarianism) and fairness (which broadly maps to deontology). Both are rooted in pleasure/pain/harm/well-being principles as they must be (they are means of applying values).
 
I don't think there is anything wrong with this. I don't care about the names people give things -- I care about what is. And if people want to think of the apparently deterministic world we live in as a carefully constructed product of God's will, that is fine with me -- at least they are admitting that as far as we can tell it is deterministic.

You can call an orange an apple as far as I am concerned. What you can't do is claim that an apple tree sprouts from an orange seed. Consistency is what matters.

If you call an apple an orange, eating it won't help protect you from scurvy. Doesn't matter how consistent you are about it.

It's the additional content that goes along with what something is "called" that can have very important effects on your lifespan, understandings and potential. "God" has a lot of extraneous clutter attached to it. Self-interest radically alters our viewpoints about ourselves, making altruism impossible. Determinism has severe repercussions, some seem to think, on whether people assume control of their attitudes or not. These are not just additional names. These are additional names with content assigned to them.

It is not enough to eschew "ideas that exalt us" simply because they are false. One must understand that their falsity has implications and effects. Some ideas are detrimental in that they inhibit human potential or prohibit learning or have some other unfortunate social consequence.
 
So, taking the rather distasteful example of the coma-patient rapist, "harm" is still being done because, should it become known, a violation is enacted of the reasonable expectations of all those who obey normative restrictions -- expectations arising from the perception of harm when such behaviour is exhibited toward a non-comatose victim. Harm is dealt to the perceived realm of reasonable moral expectations, and therefore to all those who abide by them and frame their expectations in those terms. And here you have a perfectly secular vision of universalizable ethics, based on perceptions and expectations rather than "truths."


Yes, that's fine, but it isn't utilitarianism, which was my point. This is an example of using a general principle -- violation of a body, violation of reasonable expectations -- that exists more properly in deontology which concerns more rule-following/duty issues. Deontology also concerns harm, only from a different perspective.

I think a universalizable ethics is possible, but that it needs to include both classical deontology and utilitarianism. Perhaps some form of virtue ethics that envelops both?

And I think that the word "truth" is improperly applied to this sort of discussion. Ethics are now, have always been, and will always be relative to who and what we are as creatures.
 
I think a universalizable ethics is possible, but that it needs to include both classical deontology and utilitarianism. Perhaps some form of virtue ethics that envelops both?

I don't think so.

Actually, optimizing "utility" (like efficiency, cost-effectiveness, profitability, reliability, feasibility etc. etc.) enters into many decisions I make.

But I cannot think of any moral decision I made that way.
 
I don't think so.

Actually, optimizing "utility" (like efficiency, cost-effectiveness, profitability, reliability, feasibility etc. etc.) enters into many decisions I make.

But I cannot think of any moral decision I made that way.

1. Any decision you make about how you should act is a moral decision.

2. That sounds like an unusually restrictive notion of utility, since classically utility was never operationally defined in terms of economic usefulness but rather in terms of happiness/pleasure/well-being.

Utilitarianism answers questions well that give deontology pause and vice versa. The really interesting thing is that both tend to converge on the same answer in most situations. There are precious few situations in which they differ in outcome.
 
If you call an apple an orange, eating it won't help protect you from scurvy. Doesn't matter how consistent you are about it.

Yeah, but if the person is consistent, they will also understand that it isn't oranges (apples) that give them vitamin C; it is apples (oranges).

I am saying I don't care what names people give things as long as the underlying meaning reduces to the same thing.
 
What is your definition of utility? Is it happiness, maximized happiness for some group, well-being, what?

Usefulness. It is just that simple.

That everything in ethics ultimately boils down to pleasure/pain principles gets us nowhere and doesn't address issues of utilitarianism vs. deontology.

I know, that's why I wasn't arguing for one over the other. I am just arguing that deontology requires decisions to be made, and those decisions will be made according to the utility of the results.

Of course everything boils down ultimately to valuation, and valuation in humans depends critically on our basic motivational states/pleasure/pain/well-being.

Yes. This is all I am trying to say. An agent has a set of values at the time a decision is made. By definition, the decision that is most in accordance with those values will be of maximum utility to the agent. Thus the agent makes a choice according to the utility of the result.

So you could say that I define the utility of an action to be how closely in accordance with the agent's values that action is. This is why I have said, over and over, that my stance is little more than a tautology and offers no new information -- except possibly to illuminate the underlying principles that go on when a decision is made.

Yet, people like herzblut continue to assert that they are able to make a decision without relying on such a process. I just wish they would say how it is possible to make a decision without weighing the utility of the results.
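The tautology being claimed here can be sketched in a few lines (a hypothetical illustration only -- the values, actions, and agreement scores are all invented): once utility is defined as agreement with the agent's values, any value-based choice is just the argmax of that derived utility function.

```python
# Sketch of the claim that value-based choice reduces to utility
# maximization. All values, actions, and scores are hypothetical.

values = {"honesty": 0.6, "kindness": 0.4}  # the agent's weighted values

# How well each available action accords with each value (0..1 scores).
actions = {
    "tell_hard_truth": {"honesty": 1.0, "kindness": 0.3},
    "tell_white_lie":  {"honesty": 0.1, "kindness": 0.9},
}

def utility(action):
    """Utility defined as weighted agreement with the agent's values."""
    scores = actions[action]
    return sum(weight * scores[v] for v, weight in values.items())

# The "decision" is just the argmax of this derived utility function.
choice = max(actions, key=utility)
print(choice)  # with these weights, honesty wins: 'tell_hard_truth'
```

The point of the sketch is that it adds no information: whatever the agent's values are, defining utility this way guarantees the chosen action maximizes it -- which is exactly the tautology described above.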
 
Yes, that's fine, but it isn't utilitarianism, which was my point. This is an example of using a general principle -- violation of a body, violation of reasonable expectations -- that exists more properly in deontology which concerns more rule-following/duty issues. Deontology also concerns harm, only from a different perspective.

I agree -- that is why I explicitly said in a previous post that what I am talking about is just simple utility based decision making rather than the commonly understood notion of "utilitarianism."
 
1. Any decision you make about how you should act is a moral decision.
Not if the decision is determined by non-mental processes and objects and how they function. To prepare noodles for lunch I decide I should act in a certain way, because this is the way noodles are prepared. Well, according to my knowledge at least. :)

2. That sounds like an unusually restrictive notion of utility, since classically utility was never operationally defined in terms of economic usefulness but rather in terms of happiness/pleasure/well-being.
I don't find it restrictive to normally judge utility based on knowledge and experience. I find it appropriate.

Utilitarianism answers questions well that give deontology pause and vice versa. The really interesting thing is that both tend to converge on the same answer in most situations. There are precious few situations in which they differ in outcome.
That's interesting. I would disagree prima facie, but I'm open to your arguments!
 
Usefulness. It is just that simple.



I know, that's why I wasn't arguing for one over the other. I am just arguing that deontology requires decisions to be made, and those decisions will be made according to the utility of the results.



Yes. This is all I am trying to say. An agent has a set of values at the time a decision is made. By definition, the decision that is most in accordance with those values will be of maximum utility to the agent. Thus the agent makes a choice according to the utility of the result.

So you could say that I define the utility of an action to be how closely in accordance with the agent's values that action is. This is why I have said, over and over, that my stance is little more than a tautology and offers no new information -- except possibly to illuminate the underlying principles that go on when a decision is made.

Yet, people like herzblut continue to assert that they are able to make a decision without relying on such a process. I just wish they would say how it is possible to make a decision without weighing the utility of the results.


OK, then we agree, so I'm not sure why we got into this in the first place.
 
Not if the decision is determined by non-mental processes and objects and how they function. To prepare noodles for lunch I decide I should act in a certain way, because this is the way noodles are prepared. Well, according to my knowledge at least. :)


I don't find it restrictive to normally judge utility based on knowledge and experience. I find it appropriate.


That's interesting. I would disagree prima facie, but I'm open to your arguments!


You know the usuals -- when the Nazis come knockin', I'm planning to lie my face off about the nice Jewish couple hidden in the basement. There are situations in which it is impossible to treat all parties as an end rather than a means to an end. Classic deontology doesn't provide a good means to deal with those situations.

Now, there are ways to save deontology from the problem. One of the classics is the guy raping a woman down a dark alley. Now if I treat him as an end, then I have to allow the rape to continue. If I treat her as an end, then I stop the rape. Since he broke the moral sphere and began to treat her as a means to an end, then I make the decision to stop the rape. Same with the Nazis, since they treat one group as a means to an end. We can only make deontology work by appeal to some form of higher level ethics. But there are certainly ways to make it work.

Might have more later, but gotta run to see fireworks now.

ETA:

One of the others is the example of the five guys on the runaway railroad trolley plunging toward their death. You can flip a switch and save them by moving them onto another track, but that will kill the worker who is on that track. I don't see how the ends principle can answer that question. Appeals to utilitarianism take care of it easily -- flip the switch and kill the one guy.
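As a sketch, the utilitarian calculus for the switch case really is this crude (the numbers come from the example itself):

```python
# The utilitarian resolution of the trolley switch case reduces to
# comparing body counts. This deliberately ignores everything the
# deontologist cares about (intentions, using persons as means, etc.).
deaths_if_no_action = 5  # the trolley continues and five workers die
deaths_if_switch = 1     # diverted onto the side track, one worker dies

flip_switch = deaths_if_switch < deaths_if_no_action
print(flip_switch)  # True -- the calculus says flip it
```

Note the design limitation: the comparison sees only the two totals, so it has no resources for distinguishing *how* the one death comes about.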
 
But you know that even "utilitarians" have a strong aversion when they are asked the same question but have to push a fat guy onto the tracks to save five people rather than flip a switch... this is true no matter the religion or age of the person asked... the only people for whom it makes no difference are those who have had a portion of their brain damaged so that they don't have a revulsion connected with their decision-making activities. (Google Marc Hauser if you are unclear on the studies I'm talking about.)

So what is really going on is that we are wired to behave a certain way... and so we think to codify and justify why we are choosing what we do... we want to see ourselves as behaving rationally. If someone doesn't have an emotional reaction to the trolley question and sees it entirely in terms of lives saved, without considering how he/she feels about having to do the deed, then they are not a utilitarian -- they are a person who has had a small portion of the "conscience" damaged. We evolved to have a revulsion about causing harm to others... and it's hard to override that revulsion even when more lives are at stake. We know exactly what part of the brain is involved and how damage to that part affects the reasoning... it keeps the "emotions" out of the equation.

Culture can dampen or elevate that effect. Making men into soldiers that can kill other people requires this. You have a handicapped army otherwise. Moral systems would hopefully broaden whom we feel empathy and kinship with... whom we feel more willing to protect. But there is no illusion required for that at all.
 
Not if the decision is determined by non-mental processes and objects and how they function. To prepare noodles for lunch I decide I should act in a certain way, because this is the way noodles are prepared. Well, according to my knowledge at least. :)

So when I instinctively run to help someone who is in danger, without thinking about it, it is not a moral decision because there was no higher 'mental process' involved?
 
OK, then we agree, so I'm not sure why we got into this in the first place.

Well, it may have something to do with my criticism of the arguments of others in this thread -- it would appear to a neutral observer that I am defending utilitarianism.

This is not the case. I just happen to find their arguments absurdly contradictory, so I knock them down.
 
Was Wesley Autrey's decision moral?

http://en.wikipedia.org/wiki/Wesley_Autrey
http://www.nytimes.com/2007/01/03/nyregion/03life.html
http://www.time.com/time/specials/2007/article/0,28804,1595326_1615754_1615746,00.html
http://www.cbsnews.com/stories/2007/01/03/national/main2324961.shtml

There was no time to ponder various utilities... something had to have been programmed... we evolved to care for the vulnerable and those in danger -- some more so and more readily so than others. Culture can help or hinder this process. Many other mammals show these features. Certain traits evolve because it ensures those with the traits pass on more of those traits.
 
Well, it may have something to do with my criticism of the arguments of others in this thread -- it would appear to a neutral observer that I am defending utilitarianism.

This is not the case. I just happen to find their arguments absurdly contradictory, so I knock them down.

I believe they started to define and redefine utilitarianism because they wanted to extrapolate that skeptics or atheists make their moral decisions in a different or less savory way than they do (that is, using "Utilitarianism" as their guide?)... thus justifying the "illusions that exalt" or whatever it is they imagine they are deriving their moral choices from, in contrast to the straw-man view of where those hard-core JREFers are getting their morals from.
 