No, because ethicality depends on a threshold amount of harm being done, not just any at all.
I don't understand why people are arguing with me here, all I am doing is showing that every system can be reduced to utility theory given a suitable definition of utility.
It boggles my brain why some people are so adamant that they make decisions that have nothing to do with utility. How is such a thing even possible!?!?
What is your definition of utility? Is it happiness, maximized happiness for some group, well-being, what?
Saying that everything in ethics ultimately boils down to pleasure/pain principles gets us nowhere and doesn't address the issues dividing utilitarianism and deontology. Of course everything ultimately boils down to valuation, and valuation in humans depends critically on our basic motivational states: pleasure, pain, well-being.
We can use the Greek Eudaimonia if we want. I am not claiming that ethical decisions have nothing to do with happiness, and I don't know where you got that idea (though you certainly seem to have gotten it from somewhere). Our sense of fair play is based on certain standards of happiness/well-being. That isn't the issue that I addressed when I responded to Robin.
Where utilitarianism breaks down as a completely universalizable ethical code concerns these issues of fairness for particular individuals since the general idea of utilitarianism is maximization of well-being for the greatest number. When there are losers in the game, yet well-being for the greatest number is increased, we tend to cry foul. It doesn't do any good to get lost in the issue of "well that depends on utility too" because that isn't the issue. The issue when we discuss fair-play is that sometimes maximizing well-being for the greatest number strikes us as wrong -- because we have built within us this idea of fairness, that we should avoid the suffering of even the one if possible.
That is why the Omelas example is so poignant. You guys seem to me to have obscured the issue by trying to appeal to the utility of the one within utilitarianism. But that isn't utilitarianism, which does not try to avoid all harm, but to maximize well-being and decrease harm as much as possible. The problem is that in some examples even the one person being abused strikes us as unfair. Maximize utility all you want and harm continues in examples that deal with scapegoating.
If you redefine your utilitarian position to account for these examples, then you need to look critically at the implications, because my guess is that you will have defined harm in such a way that it becomes useless. When Mill discussed the harm principle with regard to speech, he meant it as I did in my example with the carrots and cookies: actual physical harm. Not offense, not hurt feelings, but actual physical harm.
When harm is defined in terms of anything bad, then it shows up one of the key problems with utilitarian calculus -- how do you assign how much harm is being done?
In the cookie/carrot example -- yes, drawn directly from monkey experiments -- neither child/monkey is physically harmed. One perceives that s/he has been slighted upon receiving the carrot (I believe it was grapes and cucumber with the monkeys, but I could be wrong). I did not intend the example to be a non-utilitarian enterprise. You asked how to separate harm and fairness. Defining harm as physical harm, this is an example of unfair practice that does not involve harm.
Now if we define harm in terms of anything that someone doesn't like, then of course, everything boils down to harm/non-harm. But that doesn't get us anywhere in dealing with tough ethical decisions.
And that is not utilitarianism. That is just a definition of terms.
So, I will repeat: the issue here is not whether everything can be boiled down to however you want to define utility. The issue that got me into this discussion was a response to something Robin said that struck me as wrong. The prior examples in this thread can be defined in a way that no one ever discovers what happens to the comatose woman, while everyone sits happily at home, secure in the belief that nothing can harm her. But she still gets raped. And we still find that repulsive. The reason is that her personhood has been violated despite the fact that she could not, at the time, feel anything. We find that unfair even though neither she nor anyone else is aware of any harm occurring. We feel this because there are not only first-order forms of harm/pleasure, but second- and third-order forms. She exists at a level, in that example, where she cannot experience, so she cannot feel the harm. But we, if we knew about the rape, would consider it wrong, knowing that at a higher level her body had been violated.
Utilitarianism does not classically deal with this sort of issue because there is no way to operationally define what is going on. Utilitarianism was originally rooted specifically in pleasure/pain principles, not in second-order principles of duty, concern, etc. Harm, at least in my reading of it, had to be experienced by someone rather than be defined in general terms -- like a general rule of "don't violate a person's body even if they cannot feel it." That is why, in the example, she is unconscious.
The Omelas example is another in which harm specifically happens to someone -- in the story it is a single child -- but that harm is offset by the good that accrues to everyone else. Harm occurs, well, tough noogies, because life and pleasure are maximized for everyone else; and it is important for the example that they could not live so well without the sacrifice of the one. This is just a stylized scapegoating example, but it shows one of the weaknesses of utilitarianism -- how do you make these calculations? How do you decide that the one person being harmed is not worth all the benefit accruing to the others? There really isn't any way. Utilitarianism, as a universalized moral philosophy, suffers the same problem that deontology does -- they are both empty propositions. They both sound great in theory, but when we try to fill them in, we run into problems with both.
Personally, I think there is a very good reason why we run into these problems and why they seem like empty propositions. I think our brains are structured to use two separate general propositions to deal with moral issues and we shift between them. Those two propositions are consequentialism (which broadly maps to classical utilitarianism) and fairness (which broadly maps to deontology). Both are rooted in pleasure/pain/harm/well-being principles as they must be (they are means of applying values).