Is Randomness Possible?

On the other hand, why should anybody want to wait for the reply of somebody stupid enough to confuse the files randmoness2bri02.doc and randmoness2bri_etc02.doc?

Bri said:
Jan and I have been replying back and forth, commenting on each point, and it's gotten quite large. After she posts again, I am going to try to break it up into smaller posts based on each point.

Uhm, no. I'm a boy. Not that it matters much in this context.
 
jan said:
Uhm, no. I'm a boy. Not that it matters much in this context.

I mistyped! Sorry jan!

I should have been more careful since the same thing happens to me all the time!

-Bri
 
I apologize if my reply sounds a bit harsh here and there, although I am, strictly speaking, not responsible for my reply; my brain cells made me do it.

I also wonder: if the content of my reply was predetermined millions of years ago, why did it take me yet another week to write it?
 
1/10 The Missing Half

Ethics under Construction, Guilt versus Suffering


Before answering the details of your post, I would like to make some general remarks about where I see gaps in my concept. I am quite confident in my conception of mind, but I see a problem with morality.

Assume I had the perfect theory of morality. This perfect theory of morality would force everybody to act morally, wouldn't it? Plato concluded that evilness is nothing but a lack of knowledge: everybody who knows the perfect theory of morality has to act according to it. Consequently, Plato's theory is a version of hedonism. This is sometimes a bit obscured: Plato says that it is better to suffer injustice than to commit injustice, that it is better to be punished than to commit crimes unpunished, and so on. But since he is a hedonist, and since it is obvious that punishment is unpleasant, he is able to prove the existence of an afterlife (since, otherwise, his theory of morality wouldn't work; a rather strange proof of the existence of the gods, but Kant's proof seems to be similar, if I understood it correctly).

Instead of hedonism, I could base my morality on positive law, our shared contract, that which we use to be able to live together. Or I could refer to naturalism. Or to a deity that decrees some behavior to be moral. I am not content with any of those solutions, since I would prefer a basis for morality that allows one to act morally in spite of current law or our nature or the gods. This morality should be based neither on hedonism, nor on nature, nor on metaphysics, nor should it rely on positive law. Unfortunately, there seem to be few possibilities left. If "good" is not what is lawful, nor what is pleasant, nor what is natural, nor what is commanded by a god, what should "good" be?

It seems we both carry some irrational wishes with us: you want a libertarian free will, although you see that the chances are slim that you have one. I would like a basis for morality different from anything I can imagine.

Therefore, the Ethics department of my philosophical views shows just a big "Under Construction" sign (give or take; as will be apparent below, I still have some ideas about ethics). Since concepts like guilt or responsibility require some kind of ethics to explain them, one half of these concepts is left unspecified.



For practical purposes, I tend to think moral decisions should be based on the avoidance of suffering. It seems to me that older theories worry less about the avoidance of suffering than about the avoidance of sinfulness. And your insistence on having a theory of responsibility may stem from a similar viewpoint: you want people to be able to be guilty. I want them to be harmless.

When, in the following, I equate a thermostat that gets repaired with a thermostat that gets punished, this may seem frivolous to you, since there seems to be a lack of guilt. But notice that for the avoidance of suffering, the concept of guilt is less important. From this point of view, punishment is not a compensation for guilt, but a method to avoid further suffering. That is why I mentioned several posts ago that, were I God, I would condemn nobody to hell. Since hell would not help to avoid future suffering, but instead perpetuate and guarantee it, condemning somebody to hell is, from my point of view, the worst one can do. On the other hand, if what you want is to compensate for guilt or punish sin, hell may be a reasonable concept.

Nevertheless, it seems to me that my concept of free will is also able to explain the terms responsibility or guilt, at least the other half of them.
 
2/10 My Definition Is This

Free Will


Robert Nozick, Philosophical Explanations
No one has ever announced that because determinism is true thermostats do not control temperature.

It seems to me that you are playing "conquer the term" with me: if I use the term free will, you answer that my free will is not the real free will. If I talk about making decisions, you explain to me that those are not real decisions. If I talk about choice, you let me know that those choices I am talking about are not real choices. It becomes tedious to explain what I mean with no terms left for me. But I hope (as the quote above suggests) that it is unproblematic to assume that a thermostat controls something, that it reacts according to its external environment, and that it is possible to distinguish between the thermostat and the non-thermostat (that is, the thermostat's environment, giving the thermostat its input and receiving its output).

If we forget, for a moment, about "real" free will, "real" decisions and "real" choice, it may become possible to see that a human being is not completely incomparable to a thermostat with regard to some of their shared properties. A human being also receives input and transforms it into some kind of output. How this is done is the subject of difficult investigations; obviously, it is far more complicated and unpredictable than the behavior of a thermostat.
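Maybe a small sketch helps to make the comparison concrete (Python; the target temperature and the hysteresis value are numbers I made up for illustration, not a claim about any real thermostat):

```python
def thermostat(measured_temp, target_temp=21.0, hysteresis=0.5):
    """Map an input (the measured temperature) to one of a few
    possible outputs. Same input, same output; different input,
    possibly different output. Nothing more is claimed."""
    if measured_temp < target_temp - hysteresis:
        return "switch heater on"
    elif measured_temp > target_temp + hysteresis:
        return "switch heater off"
    else:
        return "do nothing"

# The non-thermostat (the environment) supplies the input and
# receives the output; the thermostat itself is just this mapping.
for temp in (18.0, 21.0, 24.0):
    print(temp, "->", thermostat(temp))
```

That is all the "control" I need for the comparison: input from the environment, output back to the environment, and a distinguishable boundary between the two.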

We might, therefore, call a human being a black box, something for which we don't know the exact mechanism, because it is too complicated, contains too many feedback loops and too many details. A thermostat, on the other hand, might be labeled as a white box, since it is easy to understand and survey the behavior of a thermostat, and its behavior contains no secrets, mysteries or riddles.

In fact, since the thermostat is not the simplest thing imaginable (that would be something with no input or output at all, or for which the output is the unaltered input), even the thermostat is not completely white, but something like ivory white (that is the justification for saying that it has some kind of free will, although diluted beyond recognition). Human beings, on the other hand, are often enough pretty predictable and unfree, so they are not pitch black, more some kind of dark anthracite. The behavior of a dog would be a dark gray box, the behavior of an insect would be a light gray box.

I used to think that complexity is the key ingredient that distinguishes a thermostat and a human being, that is, that "darkness" and complexity are the same. With regard to free will, I now question this position. It is, after all, conceivable that we could build a machine showing behavior even more complicated and unpredictable than ordinary human behavior; but this behavior might appear to be indistinguishable from random behavior (unless you take the trouble to analyze it completely, in which case it would reveal its deterministic nature), and I wouldn't say such behavior shows free will. It seems necessary that the thing we examine shows some tendency to try to accomplish some goals, that is, it must have some agenda to be an agent.

The thermostat may lack a "real" agenda (since, after all, it's just a thermostat), but we have no trouble identifying its "apparent" agenda.

The exact details of what makes a "box" "lighter" or "darker" are therefore not completely known. It seems that the best that can be done is to take the route that Dennett has taken: complexity alone may be insufficient to describe the specific quality that makes a box darker; but whatever it is, whatever the specific details are, it is possible to characterize those properties: they are, usually, present when it is possible and, for practical reasons, necessary to describe the box in question as a conscious agent having an agenda, that is, to adopt the intentional stance.

I retract my characterization of Dennett as an operationalist. As I interpret it now, Dennett gives an operational definition of a trait that is assumed to be a real trait of a real thing. That is, free will is a real property of human beings, having intentions is also a real property, and the necessity to adopt the intentional stance to describe a thing having intentions is just a test to detect this property.

Not an infallible one: assume that there is a special kind of rock, the "c-rock". A c-rock is a sentient being that has philosophized about life, the universe and everything and came to the conclusion not to engage in action, although it could, with the aid of telekinesis, interfere with the world. It just chooses not to. It is possible to describe the behavior of a c-rock, as we external observers observe it, like the behavior of any other rock. It is therefore not necessary to adopt the intentional stance to describe a c-rock and predict its future observable behavior. Our test then fails: according to this test, we would say that the c-rock lacks intention, free will, whatever, when in fact this isn't the case.

But conversely, if something passes the test, that is, if it is felt necessary to adopt the intentional stance to describe its behavior, then in fact it has intentions.

This is, of course, a controversial claim and needs some support. Let's see what happens if we assume that it is false. Then it would be possible to build a being whose behavior can only be explained and predicted by adopting the intentional stance, yet which has no intentions; it would be indistinguishable from a human being. In other words, it would be a p-zombie.

If a p-zombie works indistinguishably from a human being, it is doubtful why natural selection would have created human beings in the first place, instead of p-zombies. I think I could try to find more arguments against the possibility of p-zombies (as opposed to real human beings), but I would take the trouble only if you hold the position that they are, indeed, possible.

To be able to exercise free will, one must be able to control something (I hope you don't try to steal the word "control" too); therefore, the person abducted by aliens shows no free will, and so on. But those details may be better explained answering your post.
 
3/10 Manners of Speech

Ordinary language, Folk Psychology, Realism versus Operationalism


I don't think Dennett's position is that free will occurs in degrees. Instead, he simply redefines free will. His position seems to be that any object for which it is "useful" to consider an "intentional creature" should be considered to have free will. Basically, he ignores the definitions of "free will" that have been proposed, and along with it he ignores any lines that are commonly drawn between things that have free will and things that don't. In effect, he is saying that we should be allowed to say that anything we want has "free will" if it is useful to define it that way.

I don't see anywhere where Dennett claims that some things have more free will than others.

I think you are underestimating the necessity Dennett assumes to be at work when we decide to adopt the corresponding stance. In theory, it is possible to judge everything from the physical stance. But this "in theory" misses something important: that it is impossible, or leads nowhere, in real life. It's not just a bit more convenient to treat human beings as carriers of intention; it's the only way we can deal with them. On the other hand: it can be useful to treat a thermostat as a carrier of intention. But it is not strictly necessary. You can get along without adopting the intentional stance with regard to the thermostat. And the intentional stance would be utterly useless if applied to a piece of rock. Therefore, Dennett's system is far less arbitrary than it may seem at first glance.

About Dennett being a Realist (that is, mind, consciousness, free will and so on do really exist, they are not just a manner of speaking) see also the previous post.

OK, well unless I am misunderstanding, you are following Dennett's idea of simply redefining free will, but justifying this position by saying that your definition is just as useful as any other definition. If that's what you're saying, then you're conceding that Webster's definition of free will (what we're calling "libertarian free will") isn't compatible with determinism, but that some other (presumably equally useful) definition of free will might be. So, for the record, are you conceding my original proposition, that "libertarian free will" is incompatible with determinism?

If I remember correctly, I never claimed determinism to be compatible with libertarian free will. I think that libertarian free will is incompatible with everything, including dualism.

About this Webster's definition. It might be useful to repeat it here, since it seems to me that this is an important point:

Main Entry: free will
Function: noun
1 : voluntary choice or decision (I do this of my own free will)
2 : freedom of humans to make choices that are not determined by prior causes or by divine intervention

You see, this is not one definition, but two. The second definition is the definition of libertarian free will and is, by definition, incompatible with determinism. If "Webster's definition" is this second definition, I concede all you want about this definition.

But note that there are two definitions given. The first one doesn't mention cause, determinism or divine intervention. It simply equates free will with decision making or choice making, and gives an example. What would be the natural opposite of the given example, that is, of "I do this of my own free will"? It would be something like "I am doing it because I am forced to". Forced by what? To me, the most natural reading seems to be: some external force, something like being held at gunpoint.

It is possible to expand this and develop a concept of "being forced by the firing of my neurons". But how often do people worry about this kind of force? Imagine the following sentence: "I didn't marry her of my own free will, I was forced by...". What kind of continuation would you expect? a) "my father-in-law and my brothers-in-law pointing rifles at me" or b) "my brain cells"?

"I do this of my own free will" means something like "it wasn't a shotgun-wedding", it doesn't allude to philosophical determinism or neuroscience. It treats a human being as a black box and refers only to forces outside the box.

I got the impression that for you, Webster's second definition is the one and only true definition of the term "free will". I try to avoid debates about definitions, but I don't think that that is fair. It is quite natural to use this term outside of philosophical debates, without any reference to determinism etc. And I try to propose and develop a definition that fits with this ordinary use, and I think that the same can be said about Dennett. That is why I used "free will<sub>1</sub>" and "free will<sub>2</sub>". If, for you, this is just some verbal trick and nothing besides libertarian free will is genuine free will, then I believe that free will doesn't exist, at least unless someone could show me a kind of free will that doesn't violate physicalism.

To prove that another definition of free will is just as useful as the "forking paths" and "source" models of free will is quite a burden to overcome.

I don't think that the forking paths or the ultimate source model are useful at all. More on that below.

On top of that, you need to show that your new definition is also compatible with determinism.

If free will is something we share with thermostats, and if we agree that thermostats are pretty deterministic, that doesn't seem like an impossible task.

I would say that the "forked path" and "source" definitions are far more common and are based on thousands of years of debate on the matter (Dennett's ideas are relatively recent).

I don't want to appeal to popularity. I know and agree that popular ideas can be completely wrong. Furthermore, any refined philosophical concept of free will can't be "common".

I think that the first definition of free will, according to Webster, is what people outside philosophy are using. If you are saying that philosophers tend to define free will as libertarian free will, you may be right. But perhaps I'm wrong and the second definition is also what people outside philosophy use. Should that be the case, then it doesn't affect my main position. Since, you know, popular ideas can be completely wrong.

Free will and determinism seem to be an issue only within a materialistic philosophy. Therefore, free will was a problem for the Epicureans, and maybe for Democritus (although I don't know of any quote where he discusses this problem). That would make two and a half thousand years of debate. But, since most of the time materialism wasn't seriously considered, I think that the bulk of this discussion is only a few hundred years old.

And I still think that both the forked path and the ultimate source model are useless, despite thousands of years of tradition.

The article refers to the definitions of free will and intent that Dennett uses as "folk psychological notions in the explanation of intentional action." I disagree with this characterization, because it seems to me that the "folk" notions of free will and intent are very different from Dennett's. For example, Dennett seems to argue that it is valid to consider a thermostat to literally "desire" the room to be a certain temperature, and to intentionally change its own behavior to achieve its desired result. That a thermostat actually desires or intends anything seems a little silly to me, and I wouldn't say that it is a "folk" notion to consider an inanimate object to have free will (unless by "folk" you mean "silly").

The thermostat as an example was, I think, provided by another poster, not by Dennett, and I just adopted it. But Dennett himself uses a vending machine, so the difference shouldn't be that important. If one wants to establish that "free will" (or, if you don't like these words to be used, "choicemakinglyness") is a matter of degree, it is sensible and useful to ask for the most simple and basic example that exhibits the controversial property.

From the Jargon File 4.4.7:

Semantically, one rich source of jargon constructions is the hackish tendency to anthropomorphize hardware and software. English purists and academic computer scientists frequently look down on others for anthropomorphizing hardware and software, considering this sort of behavior to be characteristic of naive misunderstanding. But most hackers anthropomorphize freely, frequently describing program behavior in terms of wants and desires.

Thus it is common to hear hardware or software talked about as though it has homunculi talking to each other inside it, with intentions and desires. Thus, one hears "The protocol handler got confused", or that programs "are trying" to do things, or one may say of a routine that "its goal in life is to X". Or: "You can't run those two cards on the same bus; they fight over interrupt 9."

One even hears explanations like "... and its poor little brain couldn't understand X, and it died." Sometimes modelling things this way actually seems to make them easier to understand, perhaps because it's instinctively natural to think of anything with a really complex behavioral repertoire as 'like a person' rather than 'like a thing'.

At first glance, to anyone who understands how these programs actually work, this seems like an absurdity. As hackers are among the people who know best how these phenomena work, it seems odd that they would use language that seems to ascribe consciousness to them. The mind-set behind this tendency thus demands examination.

The key to understanding this kind of usage is that it isn't done in a naive way; hackers don't personalize their stuff in the sense of feeling empathy with it, nor do they mystically believe that the things they work on every day are 'alive'. To the contrary: hackers who anthropomorphize are expressing not a vitalistic view of program behavior but a mechanistic view of human behavior.

Almost all hackers subscribe to the mechanistic, materialistic ontology of science (this is in practice true even of most of the minority with contrary religious theories). In this view, people are biological machines — consciousness is an interesting and valuable epiphenomenon, but mind is implemented in machinery which is not fundamentally different in information-processing capacity from computers.

Hackers tend to take this a step further and argue that the difference between a substrate of CHON atoms and water and a substrate of silicon and metal is a relatively unimportant one; what matters, what makes a thing 'alive', is information and richness of pattern. This is animism from the flip side; it implies that humans and computers and dolphins and rocks are all machines exhibiting a continuum of modes of 'consciousness' according to their information-processing capacity.

Because hackers accept that a human machine can have intentions, it is therefore easy for them to ascribe consciousness and intention to other complex patterned systems such as computers. If consciousness is mechanical, it is neither more or less absurd to say that "The program wants to go into an infinite loop" than it is to say that "I want to go eat some chocolate" — and even defensible to say that "The stone, once dropped, wants to move towards the center of the earth".

This viewpoint has respectable company in academic philosophy. Daniel Dennett organizes explanations of behavior using three stances: the "physical stance" (thing-to-be-explained as a physical object), the "design stance" (thing-to-be-explained as an artifact), and the "intentional stance" (thing-to-be-explained as an agent with desires and intentions). Which stances are appropriate is a matter not of abstract truth but of utility. Hackers typically view simple programs from the design stance, but more complex ones are often modelled using the intentional stance.

(I quoted a rather large portion, but since it's in the public domain, at least there are no problems with copyright).

I think that sums it up nicely. It should be noted that the use of the intentional stance with regard to computer programs is not restricted to programmers; users use it too. And although the intentional stance may be adopted less frequently when the topic at hand is thermostats, it occasionally happens.

Admittedly, if you asked some John Doe why he uses such an anthropomorphic manner of speech when he is talking about computer programs, he might well answer that this is just a sloppy manner of speaking, a bad habit, and that of course he knows that only human beings have true intentionality, consciousness, free will and so on. But that's why I would hesitate to equate any kind of philosophy with the corresponding folk concept.

We certainly don't hold thermostats ethically responsible if they fail to keep the room a comfortable temperature.

If this is true, why do we try to repair them?
 
4/10 The Challenge

Expanded Definition and Explanation


If you still hold the opinion that some things can possess "more" free will than others, please give an example and a more concrete definition of what you mean by "free will" in that context. Also please explain why you consider that definition to be as "useful" as the common models of free will, and how you would use your definition of free will to distinguish between circumstances when we commonly hold a person responsible for their actions and those when we don't hold the person responsible.

For a definition, see Webster (Def. No. 1). For a more detailed explanation, see my second chapter. I think specific details of the concept depend on what we will learn about the human mind, how it works. Therefore, I think, I have an excuse not to give all the possible details.

About examples: it seems to me you partly ignored my example of a partly insane murderer, and my discussion of it. But see the next posts, and the next paragraph, for more of them.

Well, even a "tiny, tiny little bit of intent" is intent. It seems to me that you either have the capacity to form intent or you don't, and you're saying that a thermostat does. Therefore, a thermostat possesses free will by your definition of free will. If that's true, the burden of proof would be on you to demonstrate how this view is "useful" and thereby justifies calling it "free will." You would also have to show it to be compatible with determinism. In order to prove it useful, you would have to examine why we hold that people are capable of forming intent, but not thermostats, and provide a meaningful alternative that fits within both your definition of free will and a "real world" view of ethics. For your definition to be compatible with determinism, you would have to show that your definitions hold true even in a deterministic world where "could have done otherwise" and "ultimate source" are both nonexistent.

That sounds like a reasonable challenge. Fortunately, I think that some of the work is already done.

Saying that a thermostat has free will is useful because it is part of a manner of speech that is natural and common, at least if we talk about more complicated machines (a thermostat is pretty much the lowest of the low with regard to free will, so I wouldn't be surprised if the usefulness of such a borderline case is itself a borderline case), as the quote about hacker culture above shows. Furthermore, it is useful if we want to talk about different kinds of animals, and rather inevitable if we want to stick to a gradualistic view of evolution. Furthermore, it avoids the pitfall of making the concept of free will so ambitious that it is impossible to know whether or not we encounter a genuine case of free will, which would make the concept of free will completely useless for any real world application.

People tend to deny that a thermostat has free will, since the free will of the thermostat is diluted beyond recognition, and it takes a philosopher to see it. It should also be noted that, with regard to free will, there is a large gap both between human beings and other apes, and even more so between human beings and the most sophisticated of current computer programs. Therefore, it is not surprising that people claim that human beings are completely different and incomparable to anything else.

As I explained above, thermostats are punished. It just goes by the name "repaired". Note that due to the large gap between contemporary human beings and all other contemporary beings, the amount of responsibility of non-human beings is usually negligible. But note that it is usual to talk about the "punishment" of, say, dogs. It is also usual to describe animals as "good" or "evil", although those terms are sometimes watered down (a dog is sometimes said to have a "good nature", not to be "good", maybe to make a moral distinction between dogs and humans). Dogs' minds as boxes seem to be dark enough to be attributed responsibility on a regular basis, while according to Descartes, dogs are some kind of p-zombie-like machines.

Note also that there is not only a transition between human machines and other machines (if physicalism is true, it is inevitable to count humans as kinds of machines) and between other animals and humans (that is, our ancestors), but also a transition from fertilized eggs to full-grown adults. It seems obvious that fertilized eggs don't exercise free will: their boxes are far too light. Adults, on the other hand, should be considered responsible.



It seems undebated that a thermostat is perfectly deterministic, so I don't see how compatibilism could be an issue: given exactly the same circumstances, the thermostat could not have done otherwise, and so on. Since I don't use the forking paths model or the ultimate source model, I am untroubled by determinism. Instead, I now have the burden of explaining why I can do without those models. As I said above, I think they are useless for ethics. I shall elaborate why.

I will restrict myself to the forking paths model. I leave the demonstration of the uselessness of the ultimate source model as an exercise to the reader.

I will first discuss another all-or-nothing effect. Next, I will try to explore the alleged benefit of "could have done otherwise". Third, I will mention a counterexample mentioned in the article you quoted. Fourth, I will cite an example of moral behavior without alternatives (thanks to Dennett, where I found it).

1. All-Or-Nothing

Some person, call him "Attila", has killed another person, Victor the victim. Is Attila morally responsible? That depends on whether or not there is a forking point, with one path leading to the killing, the other not leading to the killing. If there is such a forking point, then Attila had a choice and is therefore responsible. If there is no such forking point, Attila had no choice and is not responsible.

Now assume that we find such a forking point, but we also discover that there was a pressure toward a certain outcome. At the forking point, it would have required an extraordinarily noble man not to follow the road to the killing. Only one out of a hundred randomly chosen people would have followed the road not to kill, faced with the very same situation. Is Attila nevertheless as responsible as he would have been if the pressure had been just the other way round, that is, if only one in a hundred persons would have followed the path leading to the killing?

Now, my scenario is vague about important details. What would it mean for another person to face "the same" situation? Just the actual details? Or would it also mean sharing Attila's troubled childhood? If you had had the same childhood, would you have abstained from the killing? If one's upbringing is irrelevant, why is it occasionally mentioned in court? And what would have happened if you not only shared the same childhood, but also the same set of genes, to put you in "the same situation"?

However we answer this, it seems to me that forking -> free will -> responsibility is too simple to be useful.

2. Rebel Without a Cause

I still don't understand how this "could have done otherwise" makes you responsible at all, in your book. Let's visit our friend Attila again. Attila has killed Victor, but there was a certain point in his past where he magically could have done otherwise. That is, there was a branching point where his free will kicked in and pushed the lever. How does that make Attila guilty? How could Attila have made his free will act differently than it did? His neurons fired "don't do it! Don't do it!", but, alas, it pleased his free will to push the lever, out of the blue, this way instead of that way. Poor Attila. Now God has a pretext to burn him.

3. Frankfurt's Argument

Attila plans to kill Victor. Nero, the evil neurosurgeon, implants a receiver in Attila's brain. Nero wants Attila to kill Victor, but would prefer it if Attila did it following his own intention. Should Attila chicken out, Nero will send an impulse that forces Attila to kill Victor. It turns out that this is unnecessary: Attila kills Victor without Nero's interference. Now, Attila could not have done otherwise. Does that really mean that he is not responsible?

4. Luther's Argument

"Here I stand, I can't do otherwise" says Luther. Does he say this to deny his personal responsibility? Not quite; he is saying that his personal responsibility forces him to act like he acts. He exercises his free will, despite external obstacles. He just can't do otherwise.
 
5/10 Grey Boxes

Our Fellow Beings And Their Responsibility


Computers receive input, process that input based on wiring and programming, and then produce some kind of output. There are no choices involved at all. Given the same wiring and programming (neither of which the computer is the "ultimate source" of) the same input (causes) will produce the same results (effects).

Yes, and given different input, different output may be produced. Different kinds of outputs are possible. So the program chooses between different possible outputs. It is not my fault that your concept of "choice" is so intimately tied to the concept of "could have done otherwise under exactly the same circumstances". I think that this concept of choice is useless, since it is void, since it is impossible to do otherwise under the same circumstances. But that seems to be the point of debate.

There is another respect in which my concept of choice is more useful than yours. See, even if I assume, for the sake of argument, that some objects have the magical ability to make a CHOICE, that is, a choice that can turn out this way or that under exactly the same circumstances, what do I gain? How can I know that some object indeed has the ability to make a CHOICE? How could I ever test it? I can feed a program different input and see different output, so I can know whether or not it makes some kind of choice. Since you seem convinced that physicalism holds for computers (why? Hardware sometimes malfunctions, and it would be possible to include a quantum mechanical random generator; wouldn't that allow for the possibility of some otherworldly soul acting through the machine?), you are convinced that the program has no CHOICE. You also wish that we humans have a CHOICE. But it seems impossible to know anything about it.
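To spell out the asymmetry, here is a sketch (Python; the "drugged"/"sober" inputs and the black_box function are invented placeholders, not a model of anybody's brain):

```python
def black_box(stimulus):
    # Stand-in for any deterministic system: a program, a thermostat,
    # or, if physicalism is true, a brain.
    return "beats jan" if stimulus == "drugged" else "abstains"

# Choice in my sense is testable from the outside:
# different input, different output.
print(black_box("drugged"))  # -> beats jan
print(black_box("sober"))    # -> abstains

# A CHOICE in your sense would require that black_box("sober") could
# have come out differently under exactly the same circumstances.
# No experiment settles that: if two runs with "the same" input differ,
# we can never rule out that some hidden circumstance differed; if they
# agree, we have learned nothing about what "could have" happened.
```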

As I said in my previous posts, this inability to know whether something has libertarian free will makes the concept unfit for any ethical application.

Good question. There are only two reasons I can think of that we wouldn't be able to create artificially intelligent computers: that artificial intelligence simply requires too much complexity, and that there is something about the human brain that is more than the sum of its parts and cannot be duplicated. It's unlikely that the first reason will hold because if nothing else, computers are very good at dealing with (and helping to create) complexity. The second reason is pretty much dualism in a nutshell, although there may be variations on the theme. I don't know if this version of dualism is "metaphysical" or not, but I'm not sure I can think of any theories that couldn't be considered dualistic that would distinguish the human brain from the microchip inside your toaster that keeps your bread from burning.

Another argument against artificial intelligence could be that nobody really needs it. The first visions of robots were of man-like androids; the robots built now follow practical needs.

Perhaps there are other "loopholes" [besides Dualism] that might explain free will, but they would probably currently have equally little scientific evidence.

I have trouble imagining another possibility, but since this is an argument from ignorance, it carries little weight.

Robots might be easy to "punish," especially if your definition of "punishment" is simply to change their behavior. We could simply change their programming. We could also change other robots' programming to prevent other robots from behaving the same way. Of course, this is different from how we punish human beings, because this sort of thing would be akin to a frontal lobotomy, and would be unethical.

All that depends on how easy "easy" is. Changing a line of code is easy. Changing the value of a single static variable is even easier. That's pretty much comparable to a frontal lobotomy (which isn't easy, but is similarly crude).

But we are constantly changing the states of the brains of our fellow human beings. By posting this post, I try to change the state of your brain. Is this unethical? Imagine a robot that is governed by an extremely complicated program that changed dramatically (and increased in complexity dramatically) over the course of its existence so far. It would be far too complicated to rewire it to change its behavior the way we want. But we could change the behavior if we talked to the robot, or restricted its movements, or something like that. That doesn't sound like a frontal lobotomy any more, I would say. Instead, it sounds like the way we treat human beings.

Imagine I could directly alter the states of your synapses in such a way that the result is the same as if I had said some specific words to you. Would altering the states of your synapses "the hard way" be more unethical than talking to you? Is it plausible that, given the complexity of your brain, anything like this will be possible in the foreseeable future?

Also, punishing other people for the crime of one person would be unethical.

I think we punish people to prevent future crimes they could do, and to prevent future crimes other people could do. So people are always punished for crimes they have not (yet) done.

If you're talking about punishing robots in the same way that we punish human beings (which I believe you are talking about), it would likely have no effect at all unless the robots possessed true artificial intelligence (and we're not sure what that means exactly).

It would be trivial to construct a robot with the behavior: "first, behave destructively; but if you ever sense that you are in a prison, stop behaving destructively forever". This wouldn't be artificial intelligence; it wouldn't be very far removed from a thermostat. But putting this robot into prison would be a very efficient method of changing its behavior (admittedly, an even more efficient method would be to simply change its programming, as you suggested; but it is conceivable that there are cases where changing the programming is an inefficient option, while the robot is still far away from passing a Turing test, or anything like that).
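A sketch of such a robot, just to show how little is needed (Python; the "prison sensor" is a made-up placeholder, and no claim about real robots is intended):

```python
class PrisonShyRobot:
    """Behaves destructively until it ever senses that it is in a
    prison; from that moment on it stops being destructive forever.
    One persistent flag, no intelligence, nothing like a Turing test."""

    def __init__(self):
        self.has_been_imprisoned = False

    def act(self, senses_prison):
        if senses_prison:
            self.has_been_imprisoned = True  # the "punishment" registers
        if self.has_been_imprisoned:
            return "behaves peacefully"
        return "behaves destructively"

robot = PrisonShyRobot()
print(robot.act(senses_prison=False))  # behaves destructively
print(robot.act(senses_prison=True))   # behaves peacefully
print(robot.act(senses_prison=False))  # behaves peacefully, forever after
```

Putting such a robot into prison really does change its future behavior, although it is hardly a darker box than a thermostat.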
 
6/10 Not So Free

Ketchup, Anyone?


Oh, that's right, "you people" like mayonnaise on your french fries. Yuck! Obviously, if you had any free will you would have ordered ketchup instead!

The conversation started with me saying "some french fries without ketchup, please" (well, actually, I said it in German). So perhaps I turned down ketchup using my free will.

OK, so in that one instance you felt as though you regretted your decision, but that doesn't mean that you weren't free to answer the waitress differently. Did you feel that you couldn't then decide to call the waitress over and explain that you changed your mind?

Absolutely not. Completely impossible. I would have been far too embarrassed. Of course I regretted that I could not do that.

I suppose you also felt this morning that whether you first put on your right shoe or the left one was decided for you millions of years ago too.

At least I didn't have the feeling of a deliberate choice. It happened without much thinking.

Again, that fear prevented you from making certain decisions in this instance doesn't mean that you don't feel that you have the ability to choose in other instances. Are you really saying that there is no instance in your entire life when you felt that you actually had the ability to choose between two actions?

I remember occasions where I spent a lot of thought on the decision. Is that perhaps what you mean?

If you don't feel that you can ever choose your actions, then your feeling of shame at not having helped the woman is completely irrational.

How can a feeling be rational? For me, the important thing is that the feeling of shame is there, inevitably. What difference would it make if I called this feeling "irrational"?

If you didn't feel that your ancestors had any choice but to act exactly as they did, then you wouldn't feel shame or pride over it!

I think this is not true. Some people are proud about the geological features of their country.

To put this chunk into perspective (especially for those readers who joined this thread lately): since I think that it is possible to (re?-)define free will such that compatibilism holds, it would be strange if I attempted to show that there is no free will. But you challenged me to admit that I have some intuitive or introspective knowledge about the existence of my own free will. I don't think that some alleged introspective knowledge about our feelings is sufficient to describe the concept of responsibility. What my introspection shows me seems to be too inconsistent and inconclusive to be used as the basis of any ambitious system.
 
7/10 Accountability

More Examples Investigating Responsibility, Anger And Madness


You stated that you would be angry at a person and expect them to be jailed only if the person had a choice to do otherwise, but you wouldn't be angry at a person or expect them to be jailed if the person didn't have a choice to do otherwise (due to being hypnotized, drugged, abducted by aliens, etc.). The action that the person took is the same in both cases; the only difference is the person's ability to do otherwise. Therefore, you hold the person who has a choice to do otherwise accountable, while you hold the person who doesn't have a choice to do otherwise unaccountable. The person who cannot do otherwise isn't responsible for their actions. Whether or not you get angry at someone and whether or not you feel that they deserve to be punished, is entirely dependent on whether or not they could have done otherwise, which is one definition of libertarian free will (the "garden paths" model).

You have just explained to me that my feeling of shame in another case is irrational. But now you are switching back and forth between "feeling angry" and "holding accountable".

Your argument fails miserably in a deterministic world though, because neither of the people could have done otherwise. You don't explain what the actual difference is between the person who is drugged and the person who is a slave to their neurons.

Let us consider two persons, "Attila" and "Benedict". You forcefully drug both Attila and Benedict. As a consequence, both of them beat me. They had no choice, so I am mad at you, not at Attila and Benedict.

Next, you don't drug them. Attila remembers how wonderful the experience was and beats me again. Benedict finds no pleasure in beating me and abstains from beating me again. In a situation that is identical with regard to their external circumstances, they acted differently, so they have a choice. I am mad at Attila, and glad about Benedict.

Now, given the neural states of Attila and Benedict, neither of them had a true choice. Both were slaves of their neurons.

It seems to me that you are the one who should forgive them both, since you can't know for sure if or if not they have libertarian free will.

Indeed, I think some people are accountable, while others are not.
There, see? I was right!

No. You have been trying to use my anger as an argument. But my opinion about responsibility doesn't have to be connected with the circumstances that lead to my anger.

Your concept of accountability as you described it in the above exchange is completely based on libertarian free will (in fact, it's the "could have done otherwise" argument exactly). Your argument had nothing at all to do with whether or not the person had "more" or "less" free will (you still haven't explained what you mean by that).

Let's make matters a bit more complicated. You invite Benedict and offer him some drugs. If he takes them, there is a chance of p that the drugs will cause him to beat me. p is known to Benedict.

If p=0.9999, I would say, Benedict could have known that taking the drugs would have immoral consequences, and should have abstained from it. If p=0.0001, one can agree that it came as a surprise or accident that those drugs had such effect, if they have. At which threshold of p is Benedict accountable?

Benedict doesn't have any free will the moment he becomes violent. But there is a history in which Benedict is involved, and depending on the exact circumstances, he plays a more or less important part in creating the situation in which he becomes, now lacking free will, violent, so the degree of accountability varies.

Show me how, in a deterministic world, the person whose behavior is determined by the firing of neurons could have done otherwise.

The very same set of firing neurons responding to the very same stimulus couldn't have done otherwise.

I don't think you justified being mad at one person and not at the other, and you never showed how either case is different in any way.

Sigh. Since you insist that I spell it out:

We aggregate the different atoms in the universe into sets we call "things", "objects" or "persons", or something like that. Although those aggregations are partly arbitrary, it is more or less impossible to do without them. It is not unheard of to call one set of atoms "Bri" and another one "jan", although things are very complicated: the set of atoms we call "jan" constantly changes; some of the atoms get lost, others are incorporated. Nevertheless, in spite of all those difficulties, the concept of "jan" is usually considered to be useful.

Reconstructing events, it is also usual to attribute causes to specific things and not to other things. Consider, for example, the sentence "jan did it", as opposed to, say, "Bri did it". Some things can be treated as agents, especially if they are so complicated that the consideration of all of their internal mechanisms is not feasible. Strictly speaking, we should always say "the universe did it", but such strict speech is far from usable.

The distinction between a thing and its environment allows the distinction between its exterior and its interior. Firing neurons are part of the internals of jan. Drugs are external. They cause a change in the internals, but this change is, in the examples we are considering, not very complicated, and the brain mechanics of the drugged evil-doer can be considered transparent. That is, drugs make a box lighter. I search until I find the first sufficiently dark box.

Now, assume for a moment that there is no libertarian free will, determinism is true, and compatibilism is false, and we are all just p-zombies. Are you really unable to see any difference between the cases "p-zombie Attila beats jan because drugs changed its normal functioning" and "p-zombie Attila beats jan because the firing of its neurons, unaltered by any external chemical stimuli, made it do it"?
 
8/10 In The Courtroom

Being Held Accountable And the Breakdown Of The System Of Justice As We Know It


As I explained in a previous post (if I remember correctly (too lazy to look it up), my first post mentioning the judicial system), one possible aim could be to maximize happiness.
That's an interesting take on it. I don't think our judicial system does maximize happiness. Killing anyone who is convicted of a crime might maximize happiness because there would be far less crime. Sure, a few people who might be put to death by mistake (and their families) might not be too happy, but everyone else would be far happier.

Did you notice that my next sentence was "That is not my aim, by the way (at least not the only one)."? Furthermore, I doubt that killing everybody who committed a crime would increase happiness. Even those evil people I mentioned who copy copyright-protected software have friends and relatives. Do you want to kill all of them too? Fear and Loathing!

OK, so what if science has determined by a preponderance of evidence that one thing or another would make a person happy. Should we then force that person to do that thing to make them happier? For that matter, should we only be concerned about the happiness of the majority at the expense of the happiness of the few? And while we are maximizing happiness, why not just eliminate the most unhappy segment of the population?

I already said that I deem a preponderance of evidence insufficient. In fact, I said:

That is not my aim, by the way (at least not the only one). I think an important point to observe is the narrowness of our knowledge. For me, liberalism is a consequence of the limits of our knowledge.

Therefore, even if I came to the conclusion that it is probable that it would increase the happiness of one of my fellow human beings if I forced him to endure this or that treatment, I would hesitate with my plans to make him happy, since I don't think it would be right for me to force his luck down his throat, since, after all, what do I know?
This doesn't sound like maximizing happiness to me.
And now, repeat after me, ...
That is not my aim, by the way (at least not the only one).


Not at all. I don't know whether free will exists or not. If it doesn't, then yes, we are punishing a lot of people who have absolutely no control over their actions, which most ethics systems deem unethical.

You seem quite calm about the (given all the evidence, very likely) possibility of widespread unjust suffering of innocents. In fact, your calmness makes it hard to take you seriously here. Giving them the benefit of the doubt, you should act as if you knew that everybody is innocent. But it seems as if you don't care much about the greatest scandal of injustice of all times.

Or are you saying something like: "since our system must be just, libertarian free will must exist"? That, of course, would add arrogance to your carelessness for the suffering of innocents.

I think that if science were to prove tomorrow that we have no free will, a lot of things would change. For one thing, every criminal would have an air-tight defense and there would be little that anyone could do about it according to the law. I can't think of how the law would be patched in order to provide a distinction between a crime and an unintended act.

How about the law: "everybody has to act as if the discovery of the nonexistence of free will was never made"?

Have the latest findings of neuroscience that put free will in a dubious light had any impact on the system of justice?

If Attila killed Victor, the victim, and had a plan to kill Victor (that is, the neurons of the p-zombie Attila made a computation), and this plan entailed Attila getting some money as a consequence of Victor's death, then Attila is a murderer, and his motive was money.

Maybe, since Attila is only a p-zombie, normal laws are not applicable. But then Attila can be sentenced according to the lawz that prohibits murderz (a "z" is added to everything applicable only to zombies).

My point was that the difference between "murder" and either type of "manslaughter" is largely one of intent (motive and planning both indicate malicious intent). I was making this point to show that action by itself isn't enough to determine the crime committed or the appropriate punishment.

What is used, then, is the planning, the sort of computation that is made. If Attila makes a detailed plan to kill Victor, involving complicated preparations, Attila will be charged with murder. If Attila kills Victor accidentally, because he took no proper precautions to avoid killing Victor, we see no evidence of a computation in his brain that can be regarded as planning to kill Victor. Since we also see a lack of planning to avoid killing Victor, Attila is held responsible, but not for murder. The distinction is possible without resorting to any speculation about some magical, indeterministic ability to have done otherwise.

There are mitigating circumstances, yes.

Isn't this a bit of a verbal play? I say there are different degrees of guilt. You say, no, it is always full guilt or no guilt, but there may be mitigating circumstances of variable degrees. If you prefer, call it "full, atomic, indivisible guilt with mitigating circumstances of variable degrees".

Like I said, I don't know that our system of ethics is even equipped to consider the possibility that there is no libertarian free will, and therefore assumes that there is. It then uses this assumption of free will to determine guilt. It is impossible to determine if someone is behaving in an ethical manner only from their actions. You have to consider their intent as well. In order to be guilty of a crime, you had to have control over your actions (you could have done otherwise) and you have to be the ultimate source of your actions (you are responsible for the crime). In other words, you had to have committed the crime intentionally -- as an exercise of your own free will.

As I said, if there is no possibility to know, our system of favorable doubt would require that we assume that there is no libertarian free will, since this assumption seems to be favorable for the accused.

How do you rule out the possibility that exactly half of mankind has genuine libertarian free will, while the other half consists of p-zombies?

And if you had a way to distinguish between them, would you refrain from punishing the p-zombies? What would you say if this led (deterministically, of course) to more and more p-zombies committing crimes? Of course, the p-zombies don't have a CHOICE; more crimes are just the inevitable output they produce after receiving the input that they won't be punished. It's not like a conscious decision, just a purely mechanical consequence.
 
9/10 God To The Rescue

Faith And Responsibility


No, your free will doesn't keep you from doing anything. In fact, just the opposite. It makes you responsible for your actions. Someone who doesn't have free will can't be responsible, and therefore someone who doesn't believe they have free will would have no real reason not to do bad things if they knew they wouldn't get caught. Even if they believe in God, they could argue that God couldn't possibly hold them accountable for something that is beyond their control. Someone who does believe in free will generally believes that they are responsible for their actions.

What should stop God from torturing innocent p-zombies? His justice?

And if I believe that I am RESPONSIBLE for my actions (as opposed to being merely responsiblez), why should that make me act any different?

And why should anybody prefer to be good, given a free will?
Perhaps because of a feeling of having responsibility for one's actions.

Couldn't this mere feeling be shared by those lacking free will?

Perhaps because they have a desire for others to also choose to be good, and know that others have a desire for them to choose to be good.

Couldn't tit-for-tat be established without free will? If even bacteria manage it?

Perhaps because they believe in God.

Because they want to make God happy? Why should they? And how do they know what makes God happy? Maybe God wants human sacrifices? Would it be a good thing to sacrifice a few fellow human beings to make God happy?

Or because they fear God? Then it boils down to: if you have property x, God will punish you if you misbehave, so behave. If you lack property x, God will ignore you. The same would work for any other property, not only free will. It would even work if we knew that God punishes p-zombies.

My understanding is that the story of Adam and Eve is all about humans receiving free will. That's what the Tree of Knowledge was (the knowledge of good and evil). The implication is that after eating from the tree, Adam and Eve "knew" about good and evil, and therefore could choose between them.

Of course it matters little to me how you interpret the Bible, but as I understand it, a more common interpretation is that Adam and Eve committed a sin the moment they ate the apple. If they lacked free will before they ate the apple, how could eating the apple have been a sin?
 
10/10 Counterstrike

Bri Explains Free Will


I think the analogy that I used before might be a good one, that our brain is more than the sum of its parts. In other words, if you believe we have a physical "brain" but something undefined which is our "mind" or "soul" then that "mind" or "soul" might very well be what allows us to have free will. That part of us simply might not follow the laws of physics as we currently understand it.

Since this seems to be a central point of your theory (attack, jan, attack, don't get stuck with too much defense work), it would be nice if you could elaborate this a bit more. How is the brain more than the sum of its parts? In any sense that violates physicalism? Or does this surplus supervene?

With the above definition, I think it easily meets the "ultimate source" standard. I see what you're getting at though.

Definition? So I assume you define free will as "that part of our brain that is not a part of the sum of its parts"? How about the "could have done otherwise" test? That would depend on how your definition relates to physicalism, I guess. If you stick to physicalism, you run into a problem, I would say.

You can't tell if someone is exercising free will only from their actions.

What else would you suggest? The Magic-Free-Will-Detector<sup>TM</sup>?

Whether it tells us something about the truth of the theory of evolution depends on whether we are determined to believe that evolution is true or false. The truth of the argument of the machines would have no effect on our actions one way or the other since those actions are completely determined.

If you put it that way, the truth of something never has any consequences, since we are unable to see the truth; we only see evidence.

Shouldn't I try to favor what is favored by evidence? Oh, yes, I see, I can't make a "choice" about what I favor, and it is predetermined what I will believe. So what?
Well, worse than that. Many skeptics believe that if something is unfalsifiable, then it's not to be believed. If determinism is unfalsifiable, then you shouldn't believe it, even if it seems to fit reality and there is no example of anything that doesn't fit it.

I think that's a completely different argument. According to you, nothing can ever be falsified, since we are always forced to believe what we have to believe, being forced by those nasty firing neurons. I don't see why determinism should be special in this regard.
 
jan said:
I also wonder: if the content of my reply was predetermined millions of years ago, why did it take me yet another week to write it?

Holy crap, jan! Do you really need to ask? It's going to take me another week just to read them!

Better get started!

-Bri
 
Bri said:
I agree. Why do you think my posts are so huge? Jan and I have been replying back and forth, commenting on each point, and it's gotten quite large. After she posts again, I am going to try to break it up into smaller posts based on each point.

I'm not sure if you're referring to the original post or to the conversation as it has evolved, but the discussion is not just about definitions. The original post might have been about the definition of the word "randomness" but not really. The current discussion is about whether it is possible to have meaningful free will if determinism is true, and about the ethical implications of that.

-Bri
;) Cheers.
I see the situation in a very simple way. It's all about nodes inside an environment.

For example: A bunch of characters in a play. In the story, the characters have free will, are surprised, make their own decisions, etc., but outside of the story environment (viewed from the audience) they are observed as scripted; limited to their environment.

And randomness: The appearance of randomness exists when the cause of an event is unknown. If that's the definition of random, then 'random' is a subjective term & thus random events occur.
"I dont know how this computer chooses how to shuffle the cards in solitair so it appears random, but the programmer who made solitair does."

Well, maybe not the programmer, but it could be determined with some technical effort. That's my 2 cents.
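To make that concrete, a small sketch (Python; the seed value 42 is arbitrary, purely for illustration):

```python
import random

def shuffled_deck(seed):
    deck = list(range(52))     # 52 cards, numbered 0..51
    rng = random.Random(seed)  # deterministic generator with a fixed seed
    rng.shuffle(deck)
    return deck

# To the player the order looks random; to anyone who knows the seed
# and the algorithm it is completely determined and reproducible.
print(shuffled_deck(42) == shuffled_deck(42))  # True, every single time
```

The "randomness" of the shuffle is just our ignorance of the seed and the algorithm.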
 
Mojo said:
Ah, but try going there today!
Not to say that he didn't have a choice that would affect the way he looked back at it today. ;) The physical reality (of cause-and-effect) is just the way of affirming (good or bad) the reality of our mental state. Similar to the Bible when it says, "Of their fruits ye shall know them." In other words, the physical is merely a confirmation -- hence the consolidation of -- the mental. However, you have to have a spiritual dimension or world, full of intents and motives, before such a thing can occur.
 
DavoMan said:
And randomness: The appearance of randomness exists when the cause of an event is unknown. If that's the definition of random, then 'random' is a subjective term & thus random events occur.
Which is to say, randomness doesn't exist in the "objective sense," correct?
 
Iacchus said:
Which is to say, randomness doesn't exist in the "objective sense," correct?
I didn't actually think of that, but I would have to agree, correct.

However, for a full, 100% non-random experience, you would have to be completely external to the environment in question. To pull you back to my example: a member of the audience watching a play.
 
Iacchus said:
Not to say that he didn't have a choice that would affect the way he looked back at it today.
He is welcome to look back at what he did yesterday in whatever way he likes. He can't go back and change it though. And if the way he chooses to look back on it contradicts other people's recollections of what he did yesterday, they are quite possibly going to consider him to be deluded or dishonest (or just to have a very poor memory, of course).
The physical reality (of cause-and-effect) is just the way of affirming (good or bad) the reality of our mental state. Similar to the Bible when it says, "Of their fruits ye shall know them." In other words, the physical is merely a confirmation -- hence the consolidation of -- the mental. However, you have to have a spiritual dimension or world, full of intents and motives, before such a thing can occur.
I have no problem with the idea of a universe existing without sentience. The universe clearly must have existed before sentience evolved (yes, I know you'll claim that some sentient entity must have created it but I'm not going to take this idea seriously unless you provide some evidence). If every sentient being in the universe suddenly dropped dead tomorrow, the universe would not cease to exist.

Since you claim not to be able to remember what you did yesterday (in other words, you are not conscious of it, so it therefore has no "spiritual dimension") does this mean that, according to the way you consider the universe to work, anything you did yesterday did not really happen?
 
