
David Hume vs. Sam Harris

If we get back to the CORE of the argument, it is NOT about whether science should answer moral questions, but about whether it actually can answer them.
I think the core of the issue is whether or not it should, but you show no evidence that it even can.

The "apocalypse": Science is not indicating that the world is going to end any time soon. It is morally unjust to ask people to give up their worldly possessions in anticipation that it will.
Here you are doing the exact thing that Hume complained about: jumping from "is" to "ought" without any logical connection between them.

An example Sam Harris uses is the Burqa: Certain theocracies require women to wear them in public. But, if we can demonstrate scientifically that it degrades their well-being,
Which I doubt you can.

how would they be anything less than morally reprehensible?
From "is" to "ought" in one fell swoop. Still missing: logical argumentation.

When kidney dialysis machines were rare, science was able to develop a workable solution to the problem of who should be allowed access to them;
Was it "science" that produced this solution? Or just pragmatic and moral people?

when all other directions of thought on the matter become either a confusing mess or a controversial outrage.
Did science prove the other directions were wrong? Or did people just try to avoid confusion and controversy?

I have now answered the question "Can science answer moral questions?" with the word "Yes".
You have neither answered the examples in any scientific way nor answered whether science can answer moral questions in any satisfactory manner.

Even if you do not like my answers, they are still answers provided by science.
No, they are not. They are not scientific questions, answers to them were not provided by science... and they weren't really answered.

Perhaps you can think of some moral problems you believe science could not answer?
Any moral problem will do. Since you seem to think that the examples you gave yourself were answered by science, I haven't the foggiest what sort of moral problem you might agree science can't answer.

But, that would still be a long way off from "NO! It is impossible in principle!!!" I have shown it is already being done, in practice.
No, you haven't. You haven't provided any scientific evidence, logical reasoning or any answers to moral dilemmas.
 
An example Sam Harris uses is the Burqa: Certain theocracies require women to wear them in public. But, if we can demonstrate scientifically that it degrades their well-being, how would they be anything less than morally reprehensible?

And how many questions does "degrades their well-being" beg?

It's a transparent attempt to pretend that "well being" is something with some kind of scientific meaning. In fact, we can define "well being" to be almost anything we want, and feed in the question and get out almost any answer.
 
This will depend on the definition of the terms we're using in this discussion. If we define well-being, and by morally good we mean the overall well-being of a society, then we have a scientifically useful concept, and that means that science can, in principle, answer these questions.

However, the much broader concept referred to when people use the expression morally good (that includes other consequentialist approaches or a deontological one) will probably continue to exist regardless of how we redefine morally good in our effort to find a scientific answer for what's morally good. This is a much more problematic concept for science to deal with than the previous one, because it's too broad and varies from individual to individual. It doesn't seem to be useful to science, and I don't see how we could solve this problem (mind you, in a merely philosophical sense, because I don't think there is a problem).

And this is not about semantics, but about concepts. It doesn't matter how we name it; what matters is what we mean. So:

Morally good1: refers to the desired outcomes and behaviors expressed by different individuals.
Morally good2: refers to the overall well-being of a society.

In this thread and other similar threads, people use roughly the same words, but not always the same concepts. Morally good1 and morally good2 are often interchanged in the arguments, but are not the same things.
 
Morally good1: refers to the desired outcomes and behaviors expressed by different individuals.
Morally good2: refers to the overall well-being of a society.

These two things are highly inter-related.
 
Wowbagger, it's evident you simply don't understand the problem.

Nobody is saying that you can't use science and non-scientific moral value judgments together to bring about outcomes you desire.

The problem is that it's impossible, ever, to scientifically validate said non-scientific moral value judgments, nor to form moral value judgments without them.

Harris and everyone else who claims to have solved this problem have just smuggled in a non-scientific moral value judgment somehow and tried to pass it off as science.
 
A hard tackle and a foul are also highly inter-related. But they aren't the same thing.

You say scientists can answer questions about what is best for the well-being of society. However, attitudes and thoughts about morality are part of this.
 
You say scientists can answer questions about what is best for the well-being of society. However, attitudes and thoughts about morality are part of this.

Once we define well-being, yes.

But it's rather the opposite. Questions about the well-being of society are part of the attitudes and thoughts about morality.

You can't take a part for the whole. The concepts I'm referring to are different, and not interchangeable.
 
Morally good1: refers to the desired outcomes and behaviors expressed by different individuals.
Morally good2: refers to the overall well-being of a society.
Morally good3: a comprehensive view that encompasses BOTH the desired outcomes expressed by different individuals AND the overall well-being of society.

By recognizing how "MG2" emerges from a lot of "MG1s", we can cover both effectively, within the same framework of thinking. And, we can call that "MG3".

I will respond to everyone else, later.
 
Morally good3: a comprehensive view that encompasses BOTH the desired outcomes expressed by different individuals AND the overall well-being of society.

Sorry, that's impossible. To do that, morally good1 and morally good2 would have to be totally compatible. They're not.

By recognizing how "MG2" emerges from a lot of "MG1s" [snip]
Wrong. "A lot" is vague, and incomplete. You can't take a part and claim you're dealing with the whole.
 
Morally good3: a comprehensive view that encompasses BOTH the desired outcomes expressed by different individuals AND the overall well-being of society.

By recognizing how "MG2" emerges from a lot of "MG1s", we can cover both effectively, within the same framework of thinking. And, we can call that "MG3".

This can't be done without some non-scientific moral value judgments about how to reconcile differences in desired outcomes, how to reconcile differences of opinion about what constitutes the overall well-being of society, and the possibility that we might want to call some desires or outcomes evil regardless of what any body else thinks.
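
To make that concrete, here is a minimal illustrative sketch (Python chosen arbitrarily; the societies, the numbers, and both aggregation rules are invented for the example, and "well-being" is compressed into a single score per person, which is itself a contested assumption). It only shows that the reconciliation rule, not the measurements, decides which society comes out "better":

# Hypothetical example: the same individual "well-being" scores,
# aggregated under two different reconciliation rules, rank the two
# societies in opposite ways.
from statistics import mean

society_x = [9, 9, 9, 1]   # high average, one person very badly off
society_y = [6, 6, 6, 6]   # lower average, nobody badly off

def preferred(aggregate):
    """Return which society a given aggregation rule favours."""
    return "X" if aggregate(society_x) > aggregate(society_y) else "Y"

print(preferred(mean))  # X: maximise average well-being
print(preferred(min))   # Y: maximise the well-being of the worst-off

# Measuring each score may be an empirical matter; choosing between mean
# and min (or any other rule) is exactly the kind of judgment at issue here.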
 
Once we define well-being, yes.

But it's rather the opposite. Questions about the well-being of society are part of the attitudes and thoughts about morality.

You can't take a part for the whole. The concepts I'm referring to are different, and not interchangeable.

Think long-term. How you set up education and other aspects of a society determines the attitudes and thoughts about morality. It isn't like they just pop up out of nowhere.

I think well-being in general is pretty easy to define going by psychology. It allows for a lot of flexibility, at least before any analysis of how to maximize it. Mental well-being doesn't seem, generally speaking, more arbitrary than the idea of physical well-being. It might be in a very, very general sense, but not once you take into account the fact that we are humans.
 
B, most likely would be better, though it is a matter of scientific inquiry. You act like "are you happy with this result" is the sole measure of satisfaction and health. That's decidedly NOT the case.
I do not act like that. I was responding to your specific claim that we could, "...setup a study to examine which method maximizes satisfaction and then implement that voting system." My entire point was that satisfaction alone was not sufficient. You'll have to bring in other values.

And then the rest of your argument pretty handily demonstrates my point.

Group A is going to have more disaffected people, more polarization, and I think this would very likely produce more problems on many levels. So I would rather expect B would be shown to be objectively superior as far as the health and well-being of society are concerned.

Now, I suppose the health and well-being of society might be "values" to you, but I don't really see how this is the case anymore than the health and well-being of an individual is a "value." And yeah, this IS a value, but it is one that's rather inherent to being a human (who is a social creature).

Let's look at the group as a giant being.

You have person A:
He has 40% of his body in perfect health. His legs and arm, however, are in extremely poor shape due to burns, broken bones, and other damage (40%). This is putting some significant strain on some parts of his body as he tries to heal (the 4's).

Person B:
This guy has a major problem with one of his legs and perhaps a hip. Everything else about him is doing pretty well, though not perfect.

Who is healthier? Seems pretty clear it is person B. Anyone who looked at Person A and said "yeah, you might lose your legs and arm, but man, you look GREAT otherwise! Don't worry! At least you aren't like that poor schlub B, eh?" would not be giving a sensible assessment of his health.
What if person B's two major problems are deafness and blindness? Personally, I'd rather lose all four limbs than those two senses. Of course, that's a judgment call, and I won't pretend that it is objectively the better decision.

And there's the rub. You talk about values inherent to being human, and I agree that there is a great deal of overlap in certain areas of what each person values. Yes, nearly all of us value well-being (however we define it), so it might seem a good place to start building an objective moral system. But we also tend to value our apparent autonomy. I don't want you (or anyone) telling me which of those two groups is the better choice for me. I want to feel like I chose it. You can't prop up one nigh universal human value at the cost of others, or at least you can't do it for objective reasons.

This is, admittedly, a thought experiment. I'm not saying it is definitely the case. I am saying science can determine if this analogy is valid and if not what is a good way to look at it. Science can do this by examining the consequences of different voting methods.
I'm not saying science can't investigate this. I'm not even saying science can't, in principle, return to us the full consequences of each scenario. I'm saying that it can't say which set of circumstances we ought to prefer.


Now, perhaps scientific inquiry would show two or more systems have no difference in the short and long-term satisfaction and well-being of society. If that's the case, then it doesn't matter which one you go with...they are equivalently good.
Thank you. Objectively equivalent scenarios are the final nail in the coffin of the hopes for an objective system of morality. Simply put, if your options are objectively equal, whatever it is you use to decide between them is not objectivity. You have to sneak something else in.
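
As a rough illustration of that last point (hypothetical numbers, an invented "satisfaction" metric, and Python used only for convenience), two voting methods can come out exactly tied on the chosen measure, so whatever breaks the tie is not the measurement:

# Hypothetical sketch: two voting methods, identical on the chosen metric.
method_a = [0.7, 0.7, 0.7, 0.7]   # measured satisfaction of four groups
method_b = [0.9, 0.9, 0.5, 0.5]

def average(scores):
    return sum(scores) / len(scores)

print(average(method_a))  # 0.7
print(average(method_b))  # 0.7

# On this metric the methods are objectively equivalent; a preference for
# the more even distribution of A (or the higher peaks of B) has to come
# from something other than the measurement itself.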
 
I am actually reading The Moral Landscape now, but it's slow reading because I feel like I am not grasping the concepts. Based on the arguments here, I now feel positive that I am not getting it. I was taking the proposal from a Humanist approach: a society where laws aren't created that prevent the individual from flourishing, such as laws forcing women to wear burqas, genital mutilation, or "marriage protection acts." There are behaviors that are allowed or overlooked based on the cultural beliefs of the practicing groups. The scientific method can be used to weigh the possible benefits or negative outcomes of such practices to decide whether or not these practices should continue. If I've completely missed the boat, go easy on me, I'm only on chapter 3.
 
I've just finished reading Sam Harris' book today and by chance this thread popped up.

But it seems to me that he only really gets around the fact-value problem rhetorically:

He can say, and he does, that there are clearly ways of living that are so bad that no one could possibly want to experience them. In fact, given his contention that certain things like misery and happiness are brain states, it would be incoherent to say that some people favoured being miserable. Presumably then people know that misery is bad and therefore should be avoided. But when asked where this "should" appeared from, he basically scoffs and says this is a silly question: those that don't think we ought to avoid misery are not worth taking seriously.

In the Afterword to the book he makes what I think is a new claim which is that a science of morality would simply be grounded on the same fundamental values as, say, medicine and that the value of favouring the personal and collective well-being of conscious creatures should be presupposed. (I think he's floundering a bit here and has begun trying to question-beg the fact-value problem away.)


[By the way, it seems that the fact-value distinction and the is-ought distinction are not identical even though they often overlap.]
 
What scientific experiment can we do to determine what items make up the healthiest, most correct diet?

You would first need to define health. And then you can use science to bring it about.

Similarly, if we agree on a definition of morality, we can then use science to bring it about. However, the definition of morality is not a scientific issue.
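
A minimal sketch of this division of labour (the diets, the nutrient figures, and both candidate definitions of "health" are invented; Python is used only for illustration): once a definition of health is supplied, picking the "healthiest" diet is routine, but swapping the definition swaps the answer.

# Hypothetical example: the optimisation step is mechanical and science-like;
# the step that actually changes the answer is the choice of definition.
diets = {
    "diet_1": {"fibre": 30, "sugar": 20},
    "diet_2": {"fibre": 15, "sugar": 5},
}

def health_as_fibre(d):        # one possible definition of "health"
    return d["fibre"]

def health_as_low_sugar(d):    # another definition, no more "scientific"
    return -d["sugar"]

for definition in (health_as_fibre, health_as_low_sugar):
    best = max(diets, key=lambda name: definition(diets[name]))
    print(definition.__name__, "->", best)   # fibre -> diet_1, low sugar -> diet_2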

And note what Sam Harris claims. The subtitle of his book is "How Science Can Determine Human Values". That is, he doesn't say that science can help us bring about what we value (I'm not aware of anyone disputing that), but rather that it can tell us what we should value. This is a very, very strong claim and of course, he fails to prove it. Which he sometimes seems to admit, as he says that well-being is the only thing we could possibly value. But again, he says that science can determine (not merely help to achieve) human values. Again, he doesn't know what he is doing. Or if he does, then tell me when science discovered we should value well-being.

A scientific worldview, however, does tell us how we should think about morality.
 
I have now answered the question "Can science answer moral questions?" with the word "Yes".

Even if you do not like my answers, they are still answers provided by science.


Wowbagger, I am learning a lot from your arguments and find myself in 100% agreement with you (as I am 100% in agreement with Sam Harris, and nearly 100% in disagreement with his detractors vis-à-vis 'The Moral Landscape').

Good work!
 
In this post I will present what I hope is a framework for clarifying this sort of controversy, because I see multiple things getting muddled together here. Along the way I will express my own general opinions on the matter. And I will probably refer to this post a lot in my upcoming replies.

We can break down the possible roles of science, in moral decision making, into three areas. It is important to emphasize that this is only an abstraction for the purpose of identifying such roles. It is NOT meant to imply that moral decisions are made in such a linear fashion.

The First Mile: The judgment on what values or tools to use in making the decision.
The Middle Distance: Using those values and tools to see what they have to say on the subject.
The Last Mile: What ultimately provides the answer you are going to use, in the end.

Again, we are NOT assuming morality is so linear. I am merely breaking down the points where science can be inserted.
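
To make the abstraction a little more concrete, here is a purely schematic sketch (Python, with invented function names, a made-up objective, and made-up numbers; it is not meant to suggest moral decisions actually run like a program, only to mark the three points where science could be inserted):

def first_mile():
    """Pick the values and tools that will frame the decision."""
    return {"objective": "overall well-being"}   # a judgment, however it is arrived at

def middle_distance(framing, evidence):
    """Apply those values and tools to the evidence: the uncontroversial role."""
    return {option: data[framing["objective"]] for option, data in evidence.items()}

def last_mile(assessed):
    """Produce the answer that actually gets used in the end."""
    return max(assessed, key=assessed.get)

evidence = {  # hypothetical measurements for two policy options
    "option_a": {"overall well-being": 0.6},
    "option_b": {"overall well-being": 0.8},
}

print(last_mile(middle_distance(first_mile(), evidence)))  # option_b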

I am going to assume that using science in the Middle Distance is not going to be controversial, at all, on this thread. It simply means using science as a tool can help when making moral decisions. I think that is all well and good. And, I don't expect much of a contest on that point. Right?

What I, personally, would rather debate about is the Last Mile. I think I can build a strong case that science can, ultimately, provide the best answers to moral decisions. And, furthermore, there is already evidence that this will be the way of the future. (David Hume be damned.)

However, most of you seem to be focusing primarily on the First Mile. The Argument Against using science, in the First Mile, generally goes like this:

"You must make a value judgment to use science, and that value judgment is not, itself, science. It is a value judgment."

There are, in fact, two completely different approaches I can take to respond to that statement. They seem to contradict each other. But I can probably defend either one fairly well if I had to:

Approach to First Mile Argument #1: Admit that it makes a good point.
I could respond to that quoted statement by saying this:

"Yeah, okay, fine! So, the first step is a value judgment. Big freakin' deal! As long as all the other steps are science, science, science... all the way to the end, one can STILL make a case that, for all intents and purposes, science is STILL making moral decisions!"


Approach to First Mile Argument #2: Build a Case for Science in the First Mile
This is, admittedly, a lot more difficult to convince people of. The last couple of times I tried to do this, it went down a lot of weird "rabbit holes" that I would rather not climb into right now. Still, I could make another effort. Perhaps build a case like this:

"The advantages of science, being (mostly) its provisional nature and empirical reliability, are so compelling, they will override any and all other values anyone might have considered in the past. And, even the 'values' of provisional and empirical systems, can themselves, be scientifically deduced so they are not truly non-science values, either. Etc."

But, frankly, I think the First Mile is the least important thing to debate over. And, the fact that I am willing to defend either one of two contradictory statements demonstrates how silly I think it is to do so.

I would much rather debate the merits of Science for the Last Mile. Is there any good, compelling reason not to base moral decisions ultimately on what science has to say on the matter? I have not found any, yet, myself.
 
