No, this is a huge issue. Because you are going to have to make trade-offs between those "aspects" you mentioned in your previous posts.
No doubt there will be trade-offs! There are trade-offs in any moral system you propose! Unless you can name one without any whatsoever.
Science may have a different set of trade-offs to consider than other moral systems do. (A trade-off of trade-offs.)
But, the mere presence of trade-offs should not count against using it. Nor does the presence of such trade-offs indicate that the exercise is impossible in principle.
When two or more of these measures are in conflict, which wins out?
THAT is a good question! One we may even face someday.
Obviously, the specifics will depend on the circumstances. But, in general, they can be resolved the way science generally resolves its controversies.
For example: Let us say one study shows us that kidney dialysis machines are best left only to the wealthiest of society, and another shows us that they should be assigned to every patient who needs one, in the order in which they joined the waiting list.
We could resolve this by going with both solutions, in two different areas. In the long run, one might be proven to be better for society than the other, in ways the previous studies did not measure.
There are certainly non-consequentialist ways of approaching morality; in deontological theories, the moral good inheres in actions themselves, not in their ultimate consequences.
The competition-prone nature of our evolutionary heritage makes these non-consequentialist approaches unrealistic. Such ideals, though well intentioned, would likely leave ALMOST ALL of society's population open to the parasitic exploitation of a few who choose to think differently.
John Allen Paulos devised a nifty little
hypothetical Democratic primary back in 1992. In it, the delegates ranked their choices from first to fifth. He then presented arguments that each candidate could put forward to demonstrate that they won the primary - that they were the top choice of the voters.
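The core of Paulos's point is that different counting rules can crown different winners from the exact same ballots. Here is a minimal sketch of that effect, using made-up ballots (NOT Paulos's actual numbers) and just two of the many possible methods, plurality and the Borda count:

```python
from collections import Counter

# Hypothetical ballots for illustration only (not Paulos's 1992 example):
# each tuple is one voter's ranking, best candidate first.
ballots = ([("A", "B", "C")] * 4   # 4 voters rank A first
         + [("B", "C", "A")] * 3   # 3 voters rank B first
         + [("C", "B", "A")] * 2)  # 2 voters rank C first

# Plurality: only first choices count.
plurality = Counter(ballot[0] for ballot in ballots)

# Borda count: on a ballot ranking n candidates, a candidate in
# position p (0-indexed) earns (n - 1 - p) points.
borda = Counter()
for ballot in ballots:
    for pos, cand in enumerate(ballot):
        borda[cand] += len(ballot) - 1 - pos

print(plurality.most_common(1)[0][0])  # A wins plurality (4 first-place votes)
print(borda.most_common(1)[0][0])      # B wins the Borda count (12 points to A's 8)
```

Same voters, same preferences, two different "winners" — which is exactly the pickle Paulos was pointing at.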
A pickle, to be sure, but not a completely insurmountable problem.
As Drachasor posted: You could implement each of those voting methods in different places. Over the long run, some would yield more satisfied voters than others.
Though, I would also add that much of that outcome can already be predicted by studying real-world examples of some of those methods. We can measure how satisfied constituents are with the general way results are handled.
I would predict that the methods whose results are more straightforward to measure, AND which tend (on average) to show the greatest difference between the candidates, would be more satisfying. And those where the winning margin tends (on average) to be small would be less satisfying. But, I could be wrong about that. I am willing to hear of studies that show otherwise.
And values are inescapably subjective. Moral choices are also based on values, and will run into the same problems.
Answering moral questions with science does NOT need to imply "First Mile Science". I think even Sam Harris agrees that one has to start with a value judgment in order to use science. But, that is no excuse not to use science to answer moral questions.
I once went into a bit of a rabbit hole trying to explain how even "First Mile Science" could be implemented. But, for now, I am going to skip that exercise. I will see if I can find the thread for those who are interested in enduring it.
All you have done is side-stepped his arguments by denying them. You have not shown a single one of them to be in error.
I am trying to outline frameworks within which such questions can be answered, as they crop up in society. If it sounds like I am side-stepping, it is only because I do not yet know the details under which they will crop up.
My point is to show that they CAN, in principle, be answered by science. NOT that I have all of the answers for all of them, yet. Just because I do not possess all of the answers does NOT mean Science will never be able to address and answer these conundrums you put before me.