
Richard T. Garner and "Beyond Morality"

By the way, Mackie distinguishes between what he calls "first-order moral claims" (or normative ethics) and "second-order moral claims" (or meta-ethics).

While denying the existence of objective moral values could be a form of normative ethics (as in "I don't believe in morals, so I just do what I like"), Mackie is making a meta-ethical claim, and in fact favours something of a utilitarian ethical approach in the part of the book that makes a first-order case for morality.
 
Well, I suppose you could take a vote or do a survey, but even then you haven't demonstrated what you set out to do, which is to show an objective morality, which presumably requires an objective value.
I agree.

A survey might be useful as a starting point for investigating morality, since a large chunk of our research has to be done in hindsight. But, popular opinion could still be wrong about what people should do. So, we need something better!

The problem here is that you cannot demonstrate that "morality stabilizes on a particular value".
There are several ways to do this:

1. Look at historical data. If decisions that lead to better consequences for everyone in a given society tend to stick around longer, and more strongly, than those that are either detrimental or benign to everyone, that would be a hint of stability for the value of consequentialism. (A toy simulation of this idea is sketched after this list.)

To give a counter-example: If we found that decisions which adhere to a declared set of laws stick around longer, and more strongly, even when said laws are detrimental to various aspects of everyone's well-being, then that would be a hint of stability for rule-based morality.

2. Do experiments. I have not seen one that directly tests this idea of stability, yet. But, the theory does make indirect predictions about various other things. My expertise in this field is lacking, but based on conversations with people whose expertise isn't: It seems most experiments are consistent with those indirect predictions, so far.

That, alone, does not prove anything. But, it's a start. And, some of those experiments could be modified to test the idea more directly.


3. Fall back on the language and concepts of natural selection. Natural Selection stabilizes on the proto-value of survival and reproduction. It would be very surprising if morality didn't stabilize on something similar at the scale of societies, assuming our values largely emerged out of that natural process.


4. Ask someone who takes another view of morality, for example, that it should be rule-based or justice-based, etc. Their answers will generally fall back on consequences, either for everyone or themselves.

Even moral abolitionists seem to be bound to consequences, when tasked to answer questions normally associated with morality.
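
To make the "stability" idea in point 1 a bit more concrete, here is a toy simulation sketch. To be clear: the norms, the welfare numbers, and the imitation rule here are all invented purely for illustration; they are not drawn from any real data set.

```python
import random

# Hypothetical norms and the average societal welfare each one is assumed
# to produce (numbers invented purely for illustration).
NORM_WELFARE = {
    "consequence-seeking": 0.80,
    "rigid rule-following": 0.55,
    "purely self-serving": 0.40,
}

def step(population):
    """One 'generation': each member keeps its norm or copies another
    member's norm, with higher-welfare norms more likely to be copied."""
    new_population = []
    for norm in population:
        other = random.choice(population)
        gap = NORM_WELFARE[other] - NORM_WELFARE[norm]
        # Switch with probability proportional to the welfare gap.
        if gap > 0 and random.random() < gap:
            new_population.append(other)
        else:
            new_population.append(norm)
    return new_population

def simulate(generations=200, size=300):
    population = [random.choice(list(NORM_WELFARE)) for _ in range(size)]
    for _ in range(generations):
        population = step(population)
    return {norm: population.count(norm) / size for norm in NORM_WELFARE}

print(simulate())  # the highest-welfare norm typically ends up dominating ("stabilizing")
```

If, in real historical data, rule-compliant decisions persisted even where they hurt welfare, the analogous model would have to reward rule-compliance rather than welfare, which is exactly the contrast point 1 is trying to get at.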


And besides, the critiques against consequentialism don't work. The assumption that we "can't calculate" the right answer is wrong, once we realize the calculation is made for us by nature, in the form of societal welfare.

And, the distaste at the idea that something currently considered immoral might actually be good for us (Garner uses the fictional example of beating a criminal in public) ignores the possibility that our morality could change: Once our future selves accept the value of that thing, we will look back on our past selves as the monsters for not doing it.


Mackie is making a meta-ethical claim, and in fact favours something of a utilitarian ethical approach in the part of the book that makes a first-order case for morality.
Tell me something I wouldn't already have guessed.

I didn't read Mackie's book. But, I read Garner's. And, Garner also takes something of a utilitarian ethical approach, in the end. He just doesn't realize he's tapping into the emergent property of morality to get there!
 
Just a reminder, Wow. You never really showed that your theory was falsifiable, except by appeal to a cryptic remark which seemed to suggest that, at the appropriate scale, moral progress does not appear cyclical even in the short term. I asked for clarification, but you ignored me.

At this point, I'm standing by my earlier assessment. In general, claims that a certain function has a limit are not falsifiable (absent special conditions), and your claim is no different in this respect.

Your thesis that moral opinions tend to a limit is unfalsifiable. Hence, it meets one of the oft-used criteria for pseudoscience.

If I read you right, this is something new.

Every generation, we should see moral "improvement" or else your theory is wrong -- is this what you mean?

Notice that this is much more specific than what I thought you said. I thought you were merely committed to our moral progress approaching a fixed point "in the limit", which places no restriction at all on finite behavior. But now you've said that we should see improvement "between generations" (whatever that means).
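
To spell out why the "in the limit" version carries no empirical commitment, here is a minimal formalisation, on my simplifying assumption (not necessarily yours) that "moral progress" can be tracked as a real-valued sequence m_1, m_2, m_3, ...:

```latex
% The convergence claim:
\[
  \lim_{n \to \infty} m_n = L
  \;\Longleftrightarrow\;
  \forall \varepsilon > 0 \;\exists N \;\forall n > N :\; |m_n - L| < \varepsilon .
\]
% Every quantifier doing real work ranges over the unobserved tail (n > N).
% Any finite record m_1, ..., m_K is compatible with convergence to L:
% just extend it with m_n = L for all n > K.  So no finite set of
% observations can, by itself, refute the convergence claim.
```

That is all I mean by "places no restriction at all on finite behavior": whatever we observe over any finite stretch, the convergence claim survives.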

I suspect that I misunderstand you, however. So, let's be clear. Suppose I am tracking prevailing moral opinions over time, or social well-being or -- well, or what? Tell me what I should be tracking over time, and tell me which observations are inconsistent with your hypothesis.




In fact, I don't see anyone else arguing that this thesis is a consequence of natural selection at all, so I'm not sure I see the close connection you suggest.

But never mind. Let's focus on the data, and not the theoretical underpinning of the theory.



Stunning! Ants will eventually stop interspecies conflict.

Well, I appreciate the explicit answer. I can't say I find it at all plausible that natural selection is so predictable and acts in a species-wide manner like this, but at least it is clear what you're committed to.

Would it bother you if entomologists, well-read in the theory of natural selection, did not find this prediction persuasive?



You rather missed my point. The imaginary theist's argument is similar to your argument. I don't find the imaginary theist's argument persuasive in the least, and for the same reasons, I don't find your argument persuasive in the least.



So, let me remind you what you said earlier.



Now, I value my truck. I care about it. Science can surely demonstrate that if I vandalize my truck, it will be bad for my truck, which I care about. (In particular, it will lower the value of the truck, which is important to me, it will make it less comfortable to drive, which is also important, and so on.)

Now, I asked, "What is morally wrong with doing things that are bad for things I care about?" You have answered: nothing, per se. Nothing at all is morally wrong with doing things that are bad for things I care about, unless someone other than me is impacted.

So, you agree you misspoke earlier, yes?




But not wrong because it damages something I care about. Wrong only insofar as it harms the interests of others. This is not relevant to your claim, which was that everyone who is rational recognizes that it is immoral to damage things that they care about.

If I and only I care about something, and my actions to damage it have no negative impact on others, then there is nothing at all immoral if I damage the things I care about.
 

Good luck in the debate.

By now, I have no more patience with discussing these issues.
 
Thanks!

I might just need it.

I've got to say, public debate takes more courage than I've got. I am not fast on my feet, and I'm not keen to try an argument in front of an audience.

So, good luck to you.
 
I assume you accept the Theory of Evolution, right? If so, can you answer this:

"How do we REALLY KNOW what is 'most fit' for a species?!"

The short answer is that we typically know, from hindsight, what WAS the most fit, of the available gene variations, over time.

But, I imagine philosophers could break into the same arguments about it as we are having about morality:

"Oh sure, survival and reproduction is good and should be maximized. But, that is merely the axiom you are starting with. Can we REALLY claim that Natural Selection should be selecting for those?”

Even though most of our knowledge about Natural Selection came from hindsight, we can STILL make large-scale predictions about its future. For example: We can predict that bacteria will adapt to become resistant to anti-bacterial soap, once too many people are using it too often.

The proximate details might be harder to unravel: We might NOT be able to figure out which specific adaptations those bacteria will take. But, that does not make the larger statement any less accurate.
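
Just to illustrate the kind of large-scale prediction I mean, here is a minimal sketch using the standard one-locus selection recursion. The starting frequency and the fitness advantage are made-up numbers, and "resistance" stands in for whatever specific adaptation actually shows up:

```python
def resistant_fraction(p0=0.001, s=0.10, generations=200):
    """Classic haploid selection recursion: p' = p(1 + s) / (1 + p*s),
    where s is the relative fitness advantage of the resistant variant
    while the anti-bacterial agent is in heavy use."""
    p = p0
    history = [p]
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)
        history.append(p)
    return history

trajectory = resistant_fraction()
for gen in (0, 50, 100, 150, 200):
    print(f"generation {gen:3d}: resistant fraction = {trajectory[gen]:.3f}")
```

The large-scale prediction (resistance eventually dominates under sustained use) is robust, even though the proximate details (which mutation, in which lineage) are left completely open.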

In the same way, moral truth can be knowable and verifiable. So far, most of our knowledge has had to come from hindsight: What worked for the best consequences for society, in the past. But, we might be able to make large-scale predictions about our future morals based on what we figure out about it.

I'm trying hard to understand what you could possibly mean. What bearing does an ability to predict the future trends of moral belief have on whether there are objectively true moral beliefs?

Also, you still haven't defined a key concept in your system. What are the features of moral values that have 'stabilised'? How do we know when a value has stabilised to the point where we should accept it as an objective moral truth? It sounds like a tremendous philosophical fiction.

That is an assumption they are making, that is not backed by anything.

So, in the same way, if an Atheist says that religious beliefs are just the product of our genes, our environment and human invention, and that they don't refer to real objective entities, would you also say that is an assumption that is not backed up by anything? Plenty of people have religious beliefs; the Atheist says these are a fiction. Plenty of people have moral beliefs; the Moral Error theorist says these are a fiction. Is the onus on the Atheist to disprove that religious beliefs reflect objective truth? Is the onus on the Moral Error theorist to disprove that moral beliefs are in some way objectively true?

Moral truth happens to emerge as a property of human societies, but once emerged, it acts independently of our genes and human intervention. And, it is probably a couple of degrees of separation from the environment, as well.

The Moral Error theorist would say that moral beliefs emerge as a property of human society, but none of these beliefs are objectively true.

Natural forces, beyond our control, tend to bend our morals towards making murder wrong. This is not a preference. This is not an assumption. This, apparently, is an objective, empirical truth. And, it can be verified across multiple lines of investigation.

It is an empirical fact that many people have the moral value that murder is wrong, but that doesn't mean it is an objective truth that murder is wrong.

Funnily enough I see the arrogant insertion of philosophical fictions, into scientific realms, as reminiscent of the Creationist tone.

What, the scientific realm of moral philosophy? If I have inserted any fictions into this debate, please name them.

Morality does not really work that way. Well-being is WHAT ALL moral questions end up becoming about. There is no other stable manner in which morality can exist. (according to theory)

Quite clearly, the question as to whether there is such a thing as moral truth does not have to 'end up becoming' about well-being. Quite clearly, the assumption that there are moral truths is built into any axiom that says that we ought to maximize well-being. As I said previously, this is science after the fact (at dispute).

The positions of heavenly bodies are what all astrological questions end up becoming about. That doesn't mean if we study the movement of the planets, we gain astrological knowledge.

It is NOT like we can just decide "well-being is good and should be maximized". The decision was made for us, long ago, since before we were even humans. That is my point.

Well, then you are merely putting forward the naturalistic fallacy; that we ought to behave in the way that nature has 'decided', and that nature has decided that we ought to maximize well-being.

It is based on the evidence that well-being seems to improve, over time, in an inclined saw-tooth manner; and on scientific experiments that are consistent with the notion (even if it has not been directly tested yet). And, logically, it follows from Natural Selection.

So we ought to maximize well-being because well-being increases over time and it follows from natural selection? What about periods when well-being diminishes? Do those periods suggest that we ought to minimize well-being? Why should we base our moral views on whether well-being is increasing or diminishing? If there are moral truths, they should be true regardless of whether well-being is increasing or diminishing. Otherwise they are just relative notions.
Natural Selection = moral truth = naturalistic fallacy.

I hope you don't take my responses personally.
Not at all :)
 
The Moral Error theorist would say that moral beliefs emerge as a property of human society, but none of these beliefs are objectively true.
Error Theorists are looking for values that are inescapable and binding. And, I am trying to communicate how they can take on those characteristics, at least at the societal level.

I will respond to your other points if I have time. Unfortunately, there is LOTS to do tomorrow, and not just for SkeptiCamp.
 

All the best for your weekend and the debate.
 
I have a question that might bring clarity:

Did you mean to say there is an 'objectively best' set of rules of behaviour, i.e. that there is an objectively best description of the way that people tend to behave, that has no bearing on right and wrong, or good and bad?
That is true, but while the general rules will be similar for all humans (simply because rules unsuitable for humans will be selected out), the details will vary in space and time, because societies do, due to communication and technology. Culture, in short. Generally, doing something your society sees as taboo is a bad idea, because you need the cooperation of society for your own safety.
A thousand years later and 3000 miles away, that taboo may look very stupid indeed.
Because when you say rules for behaviour, it makes it sound like those are rules prescribing what we ought to do (which is the same as a moral code).
I would use the words "of" and "for" interchangeably in this context.
A description of behaviour is only a rule in the sense of "Here's how humans behave as a rule." A statistical average.
"Drive on the left in the UK" on the other hand is a prescriptive rule, prominently displayed at all UK airports. Failure to follow it kills foreign visitors (and residents of course) every year.

Behaviour that gets you killed, or that puts you at odds with your community, is unlikely to be behaviour that bestows an advantage on most individuals. There are some circumstances, though, where it will. If you kick against the traces and succeed, then others will emulate or follow, and you may change your society. That can be very good for society, or terribly bad. (At the risk of Godwinning the thread, Hitler was an outstanding person who changed Austro-German society. For several years that was a very good thing for many, perhaps most, Germans. Then it became a very bad thing, because the larger community balked, big time. The world community.)
Nelson Mandela was another such, whose influence was wholly different, but resulted in a similar flip of a society from one course to another.

Neither man was "normal" in the behavioural sense. If everyone was a Hitler, human society would be very strange indeed. But if everyone was a Mandela, it might be even stranger, because societies in which everyone is honest and nice are societies on which we have little data, because of their extreme rarity.

Everyone is currently speaking of how Mandela's example of tolerance and forgiveness is a good one. But is it an effective one? How many politicians or people in general have praised him, but not actually copied him?

In Nazi Germany on the other hand, many people lapped up the ideology of hatred with alacrity- some out of simple fear, some because they liked to dominate others. We all know the petty jobsworth who would be first into a uniform if it would give him the ability to control his neighbours without getting thumped. That was the Nazi Party.*

Human nature is nasty, yet the idea that there are "good" ways to live seems to be grounded in the same biology as all other behaviour. Whether altruism is ultimately "selfish" genes looking after themselves or not, cooperation within limits and with controls against cheats is the only way 7 billion carnivorous animals can get on together.
I think it's important we don't make stuff up about "right" and "wrong" and "moral", that's all. It's too important for that.
We have to face our nature squarely and cut the coat of our society according to the cloth. Pretending there are ultimate rights and wrongs written into the fabric of the universe won't cut it.



* The Nazis / Pol Pot, name your tyranny of choice, all had prescriptive rules for behaviour. If you equate "rules FOR behaviour" with "Moral code", you open a can of worms.
 
Human nature is nasty, yet the idea that there are "good" ways to live seems to be grounded in the same biology as all other behaviour. Whether altruism is ultimately "selfish" genes looking after themselves or not, cooperation within limits and with controls against cheats is the only way 7 billion carnivorous animals can get on together.
I think it's important we don't make stuff up about "right" and "wrong" and "moral", that's all. It's too important for that.
We have to face our nature squarely and cut the coat of our society according to the cloth. Pretending there are ultimate rights and wrongs written into the fabric of the universe won't cut it.

The problem this analysis has for me is that you are essentially appealing to some meta-ethical standards while claiming, at the same time it seems, that such meta-ethical standards are mere fictions.

What do you mean "human nature is nasty"? Is that an objective evaluation or do you just happen to think that human nature is nasty?

And what is "It's too important for that" supposed to mean? What is "too important"? And by what objective standards are you calling "it" important?
 
Hmmm.....I am starting to suspect that I might have lost this round...

Well, I have to say I'm really not surprised going by the evidence of this thread. It's not only that I did not agree with your argument to begin with, it is more that I don't really think you ever made a case for countering the idea that "There is no objective morality".

I'd be interested to watch the debate to see what kind of arguments came up, though. When will there be a video of the event?

On another note, was the event itself a success? I certainly hope so, and if anything it is a positive that you can concede the argument as it shows an appreciation of skeptical rationalist values.
 
The good news is that the debate, itself, seemed to be entertaining and interesting for the audience.

Based on the feedback I got, I lost primarily because I seemed "unprepared" for the battle. I didn't hear anyone claiming my ideas were "kooky", yet, as you folks are implying. I tried to stick to things I had references for. Though I hardly had time to actually cite them, at least the science was there, somewhere.

But, there will be an anonymous on-line survey, soon. We'll see how people REALLY felt, with the results from that.

This was, of course, the first time I ever did a live, formal(ish) debate, in front of an audience. I have debated people on the Internet. And I have debated people live, in an informal, improvised situation. But, not lectern vs. lectern with time constraints (which I was totally blowing off) and structure, and stuff. So, at the very least, this was good practice for that sort of thing.

I ended up rewriting much of my opening remarks at the last minute, which meant I had to read them off my smartphone instead of memorizing most of them, which looked bad. And, I placed more emphasis on defining emergent properties, which I had fewer citations for in the context of morality; and reduced Natural Selection to a token mention, which I DID have more citations for.

But, the WORST moment came when I had to deliver a key rebuttal question to my opponent, and my brain completely froze. I was supposed to ask if all he was doing was systematics, and not really answering questions. But, first I couldn’t remember that word, which is embarrassing enough, because it is one of my favorites. (Imagine Yo Yo Ma suddenly forgetting the word "Cello", when asked what instrument he was playing.) But, just as I was about to utter synonyms such as "taxonomy and nomenclature and stuff", my brain totally bonked out on what I was even supposed to ask, in the first place, for a minute. So, my opponent filled the time dishing out a few more barbs at me.
I DID ask that question a few minutes later. But, it just looked terrible while I was struggling to churn it out. And, at least his answer was largely satisfying to my side: He admitted that, yes, that is what philosophy does. It lays the groundwork, as he called it, for questions to be answered by science. Though, he did claim you can't answer them without that groundwork, first.

The audience seemed to enjoy the debate, itself, enough that they wanted it to go longer. We were going to have a second round in the last hour of the day, when no one else was on the schedule. (And those who had ideas for filling the time said they would rather see the second round.) But, it turns out, much to my frustration, we were forced to vacate the room. The reservation got screwed up, somehow: It was only booked until 5 PM instead of 6 PM.

Gregory and I are going to iron out a plan for a second round. And, I would rather have it done sooner rather than later.

We had pre-debate and post-debate paper surveys for people to fill in, to see how many changed their minds. In the pre-debate survey, a few more people started on my side than his. In the post-debate survey, both of our sides lost lots of votes to the "I don't know" column. However, I lost more votes than he did. So, that indicates I lost the debate, overall, I guess, on that score.

So, even though I lost, I still consider the session to be successful in the context that everyone enjoyed the debate and probably learned a lot. I think they even learned a lot from me, even if they don't think it was sufficient for defending my side.

Well, I have to say I'm really not surprised going by the evidence of this thread. It's not only that I did not agree with your argument to begin with, it is more that I don't really think you ever made a case for countering the idea that "There is no objective morality".

Like I said, no one, so far, claimed I came off like a kook. I believe I lost primarily because I came off ill-prepared in several ways.

I'd be interested to watch the debate to see what kind of arguments came up, though. When will there be a video of the event?
I am not sure, yet, exactly when the video will be up. I might post two versions: One unedited and raw, in case anyone wants to see that.

And, another with light editing to reduce the parts where I was wracking my brains over the systematics question, and perhaps any other moments of relatively dead air, if they exist. I would NOT cut any actual content from the edited version.

On another note, was the event itself a success?
Yes, it was! Except for the abrupt ending, where we got kicked out of the room, everything else went off without a hitch!

I certainly hope so, and if anything it is a positive that you can concede the argument as it shows an appreciation of skeptical rationalist values.
I knew going in that I was the underdog. That helps.

Did I mention that my opponent once gave a lecture about rhetoric at a previous SkeptiCamp?
 

Thanks for the report. I'm sorry I couldn't be more help in the run-up to the debate.

As for the claim about philosophy being "systematics", I think a more accurate way of putting it is that philosophers deal in what they call "a priori" knowledge (deductive/logical claims), as opposed to "a posteriori" knowledge (i.e. inductive/empirical claims), which is what scientists deal in.

This is why philosophers tend to focus on the coherence of the idea rather than the specific data. This does not mean that philosophers ignore data altogether: sometimes they will assume that X is correct because the data supports it, and then try to see what the implications of X are in logical terms; or they may attack the assumption X by showing there may be some flaw in the reasoning that the aforementioned data implies X.

In a sense, most of us here were arguing that the data you presented did not imply what you said it did; namely, it did not lead to the conclusion that there is a scientifically verifiable (or falsifiable) objective normative ethics.

This does not mean that there is no objective normative ethics. Nor does it mean that any of the data you presented was wrong (although some of it may still be disputed independently); it is simply that your conclusions did not follow from your premises. (A further objection, in my opinion, was that many of your premises involved wild extrapolations which were themselves unprovable.)

But anyway, I look forward to the debate itself and the follow-up, if there is one. I would recommend a major streamlining of your argument, however, as from a debating point of view you present far too many targets.
 
As a follow-up to that, a less kind accusation about philosophy is that it is "just semantics". This charge is often made against philosophy by people who use either their intuitive notions of what something means or use the definition found in a dictionary and assume that this is adequate for a discussion.

However, as with most technical subjects, philosophers tend to insist on having rigorously defined (as opposed to loosely defined) concepts, unless you can demonstrate that a loose definition is a better one for the purpose of the discussion (some philosophers have, in different ways, shown that "vagueness" can be important for the meaning of certain terms, or that "family resemblances" between terms are to be preferred, but it is not a good idea to invoke this without doing the necessary work, or you will probably be accused of special pleading).

I think that when it comes to "the science of morality", many people wading into the debate are basing their ideas on intuition or assumptions without realizing it, and when asked to justify things which seem obvious to them, they begin to get annoyed, as if there were nothing to justify in the first place. A good example is that many people believe utilitarianism is obviously the best moral principle, make that a starting assumption (sometimes unconsciously), and become bewildered that others disagree that science is therefore best placed to show us what is the right thing to do. It certainly could be, IF you have already justified utilitarianism and the specific form in which your utilitarianism is formulated, but so far that has been the main problem.
 
As a follow-up to that, a less kind accusation about philosophy is that it is "just semantics".
Systematics is a little more than just semantics. But, the point is moot because it was actually one of the few points my opponent accepted.

Some photos from the event are starting to pour in, from the attendees. I will have more and better ones, once the official photographer makes his available.

Here is the moment where I dished out the "Systematics!" question. The slide you are looking at was manually typed up, shortly before I gave it:

[Photo: the moment of the "Systematics!" question, with the slide visible]


If only I had a slide with that word on it, earlier, I wouldn't have forgotten it.

This photo was also taken after I switched over from using my smartphone to a clipboard, so I could write down what I had to respond to faster.

(And, I think the smudge on my shirt was actually from the camera. A similar smudge is on some of the other photos this same lady took.)

It certainly could be IF you have already justified utilitarianism and the specific form in which your utilitarianism is formulated, but so far that has been the main problem.
The "justification" is a natural imbalance that takes place within various moral systems:

All other forms of morality tend to change over time, to deliver better utility or consequences. And, they usually don't turn back.

And, whenever someone sacrifices utility or consequences to comply with some other form of morality, it doesn't last very long: Eventually those arrangements are overthrown or transformed to go back to better utility or consequences.

Historically, this seems to be the case. And, it follows from what we know about evolutionary biology and the social sciences.
 
