Is Randomness Possible?

Hmm it really feels like the conversation has drifted. Just to re-hammer my point, the free-will question really is about definitions and logic. No one agrees on the definition, and we only have logic to resolve the question.

The new terms like 'spiritual dimension' etc that are being introduced really seem to be clouding the issue by throwing in MORE words people don't agree on.

Boy I really feel like Mr Spock amongst 20 Dr McCoys :p
 
I agree with DavoMan. Trying to apply critical logic to propositions containing unclear terms is like trying to shift sand with a fork.
 
DavoMan said:
Hmm it really feels like the conversation has drifted.

What do you mean? Bri's and my derailment of Iacchus' thread? Or Iacchus' derailment of our derailment?

Just to re-hammer my point, the free-will question really is about definitions and logic. No one agrees on the definition, and we only have logic to resolve the question.

I think Bri has already answered it:

Originally posted by Bri
The current discussion is about whether it is possible to have meaningful free will if determinism is true, and about the ethical implications of that.

Why do people feel like having a free will? Do they really feel having a free will? How do responsibility and accountability work? How important are concepts like "could have done otherwise"?

The answers to these questions do not only depend on how the term "free will" is defined.

As I adumbrated earlier, I could have taken the easy route and just said "libertarian free will does not exist, since determinism is true, therefore there is nothing left to discuss". But I think that such an answer won't do justice to all the other questions related to free will.

Quite the contrary: I don't think that issues about definitions are very important in this discussion. This impression might have arisen due to the difficulties we have in explaining our competing concepts and what they entail.
 
FrankP said:
Trying to apply critical logic to propositions containing unclear terms is like trying to shift sand with a fork.

I would like to call that the "If Only We Had Clear Definitions Of All Our Terms Fallacy".

If you try to use only clearly defined terms, you run into an infinite regress, since those definitions contain terms needing definitions, and so on.

The only way I know to avoid such a situation is to use a purely formal approach to logic. But the price you have to pay for this is that you only get meaningless chains of symbols. As soon as you try to attribute a meaning to your sentences, your troubles start again.

The best I am hoping for is to be able to discuss the unclear terms until we reach a point where the participants agree that all the debated terms have been explained in terms familiar enough to everybody that nobody is bothered by them anymore. Which is what I was trying to do.




If you don't risk getting dirty from time to time, you miss all the fun.
 
jan said:
I would like to call that the "If Only We Had Clear Definitions Of All Our Terms Fallacy".

If you try to use only clearly defined terms, you run into an infinite regress, since those definitions contain terms needing definitions, and so on.
..............
If you don't risk getting dirty from time to time, you miss all the fun.
I appreciate you bringing that to our attention. The age-old problem of how much resolution you want with ya definitions.
 
Mojo said:
He is welcome to look back at what he did yesterday in whatever way he likes. He can't go back and change it though. And if the way he chooses to look back on it contradicts other people's recollections of what he did yesterday, they are quite possibly going to consider him to be deluded or dishonest (or just to have a very poor memory, of course).
Ever experience "regret?"

I have no problem with the idea of a universe existing without sentience.
Sure you do, otherwise you wouldn't be here to tell us about it. ;)

The universe clearly must have existed before sentience evolved (yes, I know you'll claim that some sentient entity must have created it but I'm not going to take this idea seriously unless you provide some evidence).
I'm saying the physical Universe is an extension of that which already was ... sentient that is.

If every sentient being in the universe suddenly dropped dead tomorrow, the universe would not cease to exist.
Neither would sentience, because they would have passed on to the greater world of sentience.

Since you claim not to be able to remember what you did yesterday (in other words, you are not conscious of it, so it therefore has no "spiritual dimension") does this mean that, according to the way you consider the universe to work, anything you did yesterday did not really happen?
Of course not. It also has a "spiritual dimension" to the extent that I remember (recall) it in the moment. I agree that it's not worth dwelling on though, unless of course you have some really serious issues, otherwise you would not be living your own life.
 
DavoMan said:
The new terms like 'spiritual dimension' etc that are being introduced really seem to be clouding the issue by throwing in MORE words people don't agree on.
The reason I brought it up is that I didn't want people to think I was strictly deterministic in my views. Also, if everything was based upon free will (which I believe it is), it would still have to have a means by which to consolidate itself. This is where determinism comes in. In which case you can't have free will without determinism and whatnot.
 
Iacchus said:
Ever experience "regret?"
Certainly, but it didn't change what had happened. Did any of your regrets ever have any effect on past events?
Sure you do, otherwise you wouldn't be here to tell us about it.
Well, no, it's actually you who's here trying to tell people about this stuff. Who has an axe to grind here?
I'm saying the physical Universe is an extension of that which already was ... sentient that is.
You can say whatever you want. But do you have any evidence to support what you are saying?

Neither would sentience, because they would have passed on to the greater world of sentience.
Do you have any evidence that this "greater world" exists, or that entities that die pass on to it?

It also has a "spiritual dimension" to the extent that I remember (recall) it in the moment. I agree that it's not worth dwelling on though, unless of course you have some really serious issues, otherwise you would not be living your own life.
Well, have you considered that people may have issues with things that you said yesterday and now claim not to remember?

Edited for formatting
 
jan said:
1/10 The Missing Half

I will be responding to your posts in order, as time permits. Thank you for taking the time not only to respond, but to break them up into digestible pieces.


Therefore, the Ethics department of my philosophical views shows just a big "Under Construction" sign (give or take; as will be apparent below, I still have some ideas about ethics). Since concepts like guilt or responsibility require some kind of ethics to explain them, one half of these concepts is left unspecified.

I think we all know intuitively what is just and what isn't, and we all know what constitutes moral responsibility. I'm willing to use the "common notion of justice" as a litmus test for whether a particular theory works or not.

As an example, we can probably agree (I think) that if someone points a gun to your head and tells you that you must steal something or die, you should not be held responsible for stealing in this case.

So any theory we come up with that allows us to make these sorts of judgements in a deterministic world is what we're after. If we can come up with valid examples that would be contrary to what we would normally consider to be just, then we might have to rethink things.


For practical purposes, I tend to think moral decisions should be based on the avoidance of suffering. It seems to me that older theories worry less about the avoidance of suffering than about the avoidance of sinfulness. And your insistence on having a theory of responsibility may stem from a similar viewpoint: you want people to be able to be guilty. I want them to be harmless.

I like your criteria. Maximize happiness and minimize suffering for the largest number of people. Sounds good.

Giving frontal lobotomies to criminals might prevent them from committing crimes, and might even make them happy for the rest of their lives. Better yet, killing people who would commit any crime seems to meet the criteria as well, provided we did it quickly in order to minimize their suffering.

Both of those things wouldn't fit my moral litmus test though, which tells me that the punishment must somehow fit the crime. So there must be more to it than simply maximizing happiness.


Nevertheless, it seems to me that my concept of free will is also able to explain the terms responsibility or guilt, at least the other half of them.

I look forward to reading and responding to it!

-Bri
 
jan said:
2/10 My Definition Is This

It seems to me that you are playing "conquer the term" with me: if I use the term free will, you answer that my free will is not the real free will...

Indeed, I'm guilty of that. The accepted "free will" uses both of the models from that article ("could have done otherwise" and "ultimate source"). These models have been shown to be effective in considering a system of ethics, and most modern justice systems rely on them.

That said, most compatibilist arguments have failed to refute those models, and recent attempts have instead "moved the target" by attempting to redefine "free will" so that it continues to be useful to define ethics, but could also exist in a deterministic world.

Although a new definition might only apply to ethics and not actually to what we think of as "free will" in any other context, I would still be duly impressed. If you are able to come up with a meaningful new definition of free will that allows for "common sense" ethics as well as or better than the accepted definition of free will and is compatible with determinism, then I'll admit defeat even though you didn't actually prove that the accepted version of free will is compatible with determinism, and even though you may not have proven that your version of free will is useful in any other way except to define ethics.

So indeed, let's not consider "real" free will except as a means of comparison with whatever version you come up with. Your version, however, must be able to explain modern "common sense" notions of justice, for example "responsibility" and "intent." It must also not remove basic human rights and freedoms (for example, it mustn't use an "ends justify the means" sort of argument to allow putting everyone in jail or giving everyone a frontal lobotomy in order to avoid crime).

We might, therefore, call a human being a black box, something for which we don't know the exact mechanism, because it is too complicated, contains too many feedback loops and too many details.

OK, I will grant you this, and I understand your comparison with the thermostat.



I used to think that complexity is the key ingredient that distinguishes a thermostat and a human being, that is, "darkness" and complexity are the same...It seems necessary that the thing we examine shows some tendency to try to accomplish some goals, that is, it must have some agenda to be an agent.

So we are beginning to see a definition of "~free will" (the ~ to differentiate between the commonly accepted version) emerging. If we consider an object that has a "purpose" and a "means" or "ability" to accomplish it, then it has the potential for ~free will.


The thermostat may lack a "real" agenda (since, after all, it's just a thermostat), but we have no trouble identifying its "apparent" agenda.

I'm trying to decide whether it would have to have a "~motive" as well as a purpose and a means. Something could have a purpose and a means, but have no motivation to actually accomplish its purpose. The "~motive" is a little difficult to define for an inanimate object (perhaps "something that causes it to accomplish its purpose"), but I'm with you so far. I can see that a thermostat does have a purpose and a means by which to accomplish that purpose (otherwise it would never be able to actually accomplish its purpose).


The exact details of what makes a "box" "lighter" or "darker" are therefore not completely known.

You seem to be defining "color" as "predictability," is that correct? You are then simply saying that it is unknown exactly what makes an object more or less predictable (complexity might play a part but obviously isn't entirely the answer because a completely "random" object could be very simple and completely unpredictable).


But conversely, if something passes the test, that is, it is felt necessary to adopt the intentional stance to describe its behavior, then in fact it has intentions...

...in other words, it would be a p-zombie.

I'm not certain that I understand what you're getting at, and I don't want to be accused of hijacking the term "intentions." If you were to redefine the word "intention" (let's call it "~intention") then I could accept a definition that anything for which we must adopt an intentional stance in order to interact with it has ~intentions.

That said, if you're asking whether something for which we must adopt an intentional stance must have "real" intentions, I don't know. There are computer programs that aren't all that complex, but mimic human conversational patterns well enough that a person talking to them on the Internet would likely think they are a real person. One might have to adopt an intentional stance towards this program in order to interact with it. Does this computer program actually have "real" intentions any more than a thermostat does?

Perhaps it doesn't really matter, since as Dennett pointed out, we could consider a thermostat to have intentions and can choose to interact with it in that way. However, I don't get angry at my thermostat when it fails to keep my room a constant temperature (I instead call the repair man), so no, I don't think my thermostat is intentionally screwing with the temperature.


To be able to exercise free will, one must be able to control something (I hope you don't try to steal the word "control" too); therefore, the person abducted by aliens shows no free will, and so on. But those details may be better explained answering your post.

I wasn't really "stealing words" unless it was unclear to me whether you were redefining them or using them in some other context. As long as you're clear to redefine them, you can use any word you want, but it could get confusing. How about using some sort of a symbol to indicate words you're making up, such as "~control" (just be sure to define what the difference is between ~control and control).

-Bri
 
jan said:
2/10 My Definition Is This

If a p-zombie works indistinguishably from a human being, it is doubtful why natural selection would have created human beings in the first place, instead of p-zombies. I think I could try to find more arguments against the possibility of p-zombies (as opposed to real human beings), but would take the trouble only if you hold the position that they are, indeed, possible.

BTW, thanks for the reference on p-zombies. I had not heard this term used before.

I'm going to assume that your p-zombie is physically identical to a human being in every way except having been created artificially.

If I'm understanding, you're attempting to prove the statement "if it is necessary to adopt an intentional stance to describe something's behavior, then that thing must have intentions." You're doing this by attempting to disprove the opposite statement "it might be possible for something without intentions to exist that we are unable to describe without adopting an intentional stance."

You are attempting to disprove this last statement by showing that a p-zombie without intentions cannot exist. It is true that if we are considering a deterministic, non-dualist world (a world in which we are no more than the sum of our parts) such a p-zombie without intentions cannot exist. The p-zombie that is physically identical to a human being would have to be identical to a human being in every way, and would have to have intentions.

But does that prove that there is nothing for which we must adopt an intentional stance that doesn't have intentions? Might there not be something simpler than a p-zombie for which we would have to adopt an intentional stance in order to understand its behavior? I think that the Karl Sims "Evolved Virtual Creatures" is one example. These creatures are created from a randomly self-modifying program that even the programmer no longer understands, but although the creatures themselves are random (their equally random brothers who didn't possess the desired behavior were unceremoniously eliminated), their behavior doesn't seem to be random. Like a thermostat, they "attempt" to accomplish a specific goal. But unlike a thermostat, the only way to understand and predict their behavior is to adopt an intentional stance, even though few would argue that they have "real" intentions any more than their discarded brethren did.

Or am I hijacking the term "intention" again?

-Bri
 
1/10 The Missing Half - Revision 1


Originally posted by Bri
I like your criteria. Maximize happiness and minimize suffering for the largest number of people. Sounds good.

It's a pity that it seems you hadn't caught up to "8/10 In the Courtroom" as you wrote this reply, since I explain there why I am not a utilitarian and...

Giving frontal lobotomies to criminals might prevent them from committing crimes, and might even make them happy for the rest of their lives. Better yet, killing people who would commit any crime seems to meet the criteria as well, provided we did it quickly in order to minimize their suffering.

...why a utilitarian doesn't need to subscribe to the view you attribute to utilitarianism.
 
2/10 My Definition Is This Part A - Revision 1


The exact details of what makes a "box" "lighter" or "darker" are therefore not completely known.
You seem to be defining "color" as "predictability," is that correct? You are then simply saying that it is unknown exactly what makes an object more or less predictable (complexity might play a part but obviously isn't entirely the answer because a completely "random" object could be very simple and completely unpredictable).

No, but almost. The problem is: unpredictability might be insufficient. It is trivial to construct something that behaves unpredictably, by including some source of randomness: so unpredictability seems to be a necessary, but not sufficient trait. What would be a collection of traits that is sufficient to cause/enforce the presence of ~free will? This is a question that has to be answered by psychology and neuroscience, and it would be foolish to claim to know all the answers yet. As long as we don't know the answer to this question in all its glory, it is nevertheless possible to use a phenomenological approach or use some operational criteria. "Color" therefore serves as a placeholder for "all the details we have yet to discover and explore and investigate". This move, I hope, is legitimate since it is possible to give criteria for when something is "light" or "dark".
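To make the "necessary, but not sufficient" point concrete, here is a throwaway sketch (Python, purely illustrative; the device and its outputs are invented for the example). It is completely unpredictable, and yet adopting the intentional stance towards it explains nothing:

```python
import random

def noise_box():
    """A maximally unpredictable 'device': nobody can foresee its next
    output, yet it has no agenda, no goals, and so no ~free will."""
    return random.choice(["beep", "boop", "silence"])

# Different on (almost) every run, but there is nothing here that the
# intentional stance would help to predict or explain.
print([noise_box() for _ in range(5)])
```

So unpredictability alone can't be what makes a box "dark".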

I'm not certain that I understand what you're getting at, and I don't want to be accused of hijacking the term "intentions." If you were to redefine the word "intention" (let's call it "~intention") then I could accept a definition that anything for which we must adopt an intentional stance in order to interact with it has ~intentions.

Once again: almost. The need of having to adopt the intentional stance is a working replacement (in my view, Dennett might disagree) for the more thorough and complete definition we may be able to give if our knowledge increases. But for practical purposes, we might go along with saying that "~intention" is defined as something that, in practice, can only be described adopting the intentional stance (with emphasis on "only", see the next post).

That said, if you're asking whether something for which we must adopt an intentional stance must have "real" intentions, I don't know. There are computer programs that aren't all that complex, but mimic human conversational patterns well enough that a person talking to them on the Internet would likely think they are a real person. One might have to adopt an intentional stance towards this program in order to interact with it. Does this computer program actually have "real" intentions any more than a thermostat does?

This is an interesting question. My personal impression with them is that they are not that convincing, but perhaps I am missing something (if lifegazer and Iacchus are just some bots, then they fooled me).

What happens if you don't interact with those programs in a random encounter, but within the context of a Turing test, where you try with all your cleverness to find out which participant is a human being and which one is a program? I think most of these programs would fail miserably.

How would we describe those failures? I think it would be most likely to deal with those glitches on the design stance, like "the programmer intended it to look like that-and-that, but failed to realize that this-and-this would destroy the effect", or something like this. I'm not certain about this; we could also say "the program uses generic phrases and general statements, because it tries to hide the fact that it lacks knowledge about the subject currently discussed", which would be a description using the intentional stance (that is, we would use the intentional stance to describe a behavior that makes the program fail the Turing test, which would be quite ironic).

But now imagine a program that passes the Turing test flawlessly. And to describe the behavior of this program, the best you could do would be to use the intentional stance. Would you feel free to shut down the program? To use its sensory input to cause the program ~pain? I think the only possible moral reaction would be to play it safe and treat the program as if it has a soul, even if we only know for certain that it has a ~soul, and have no idea about its "real soul".

(By the way, what do you think about adopting the following convention? If something meets some superficial criterion about having a certain quality, we speak of it having the quality "~quality". If, on the other hand, we want to speculate whether it really has this quality in all its metaphysical glory, we ask whether it has the quality "QUALITY". So we don't know whether the program in question has INTENTIONS, we only know about its ~intentions.)

Perhaps it doesn't really matter, since as Dennett pointed out, we could consider a thermostat to have intentions and can choose to interact with it in that way. However, I don't get angry at my thermostat when it fails to keep my room a constant temperature (I instead call the repair man), so no, I don't think my thermostat is intentionally screwing with the temperature.

This morning, I once again had to repair our jalousie, and I was quite angry that it was broken once again.

If your dog bites your hand, what would be your reaction? "It's only an animal, according to Descartes, it's an automaton, it behaves strictly deterministically, so there is no reason to become angry, and furthermore, it would be extremely unjust to yell at the dog or punish it, since the dog completely lacks accountability". Congratulations on your calm temper, if this is your immediate reaction.
 
2/10 My Definition Is This Part B - Revision 1


BTW, thanks for the reference on p-zombies. I had not heard this term used before.

Some remarks by fellow posters here seem to indicate that p-zombies were once all the rage on this board, and were later neglected, since everybody was tired of them. But that's only a conjecture on my behalf, since I completely missed that time. I think we would have to ask one of the elder posters to confirm this story.

I'm going to assume that your p-zombie is physically identical to a human being in every way except having been created artificially.

No. I hoped it would be clear from the context, but obviously it wasn't. The history of a p-zombie is usually not considered to be important. I don't assume that p-zombies have been created artificially. Instead, I tend to prefer the scenario that the p-zombies are human beings where God just forgot to give them FREE WILL, but that's not important. The important thing is: the p-zombies lack FREE WILL. But since they are indistinguishable from ordinary human beings, they can't fail to have ~free will, since the definition of ~free will just depends on observable things.

You are attempting to disprove this last statement by showing that a p-zombie without intentions cannot exist. It is true that if we are considering a deterministic, non-dualist world (a world in which we are no more than the sum of our parts) such a p-zombie without intentions cannot exist. The p-zombie that is physically identical to a human being would have to be identical to a human being in every way, and would have to have intentions.

This more or less hits the nail on the head regarding what I was trying to show. The technical term that applies here, I guess, is the notion that mental phenomena, according to physicalism, supervene; that is, if the atoms are configured the same, then the mind can't fail to be the same.

Of course, it is still possible to reject physicalism.

But does that prove that there is nothing for which we must adopt an intentional stance that doesn't have intentions? Might there not be something simpler than a p-zombie for which we would have to adopt an intentional stance in order to understand its behavior? I think that the Karl Sims "Evolved Virtual Creatures" is one example. These creatures are created from a randomly self-modifying program that even the programmer no longer understands, but although the creatures themselves are random (their equally random brothers who didn't possess the desired behavior were unceremoniously eliminated), their behavior doesn't seem to be random. Like a thermostat, they "attempt" to accomplish a specific goal. But unlike a thermostat, the only way to understand and predict their behavior is to adopt an intentional stance, even though few would argue that they have "real" intentions any more than their discarded brethren did.

Or am I hijacking the term "intention" again?

No, and I think you raise a valid concern. I think the difference is what "being forced to use the intentional stance" means in practice.

To give another example: it can be extremely tedious to describe the effects of genes without using some kind of intentional language. It is quite tempting to say "the gene X wants to rise in frequency, and it achieves this aim by doing that-and-that." But how strong and irresistible is this temptation? It is quite possible to say "a gene X that has the effect of doing that-and-that rises in frequency", which avoids the intentional language. And as far as I see, this second version is usually the preferred version in the scientific literature. You can use the intentional stance, but you don't have to. The pressure is noticeable, but far from irresistible.

(As an aside: it seems to follow from my theory that not only thermostats have a tiny, tiny amount of intention, but genes do as well; but genes are something rather different from thermostats; thermostats and human beings are similar in being concrete objects, while genes are abstract objects, more like concepts and ideas, not inhabitants of our sublunar world, but of Plato's heaven of ideas. How weird is it to ascribe intentions to platonic ideas? Does this refute my ideas? I guess I'll have to carefully examine whether or not this is the case.)

Another example might be an insect that shows a rather complicated behavior that, at first glance, needs to be described in terms of intentions. But a closer look reveals that this complicated behavior is rather inflexible and mechanical. So it seems that it is sufficient to describe the design of this behavior (not that there has been an Intelligent Designer; but that's another subject). ~Free will might not require acting differently in the same situation, but it does require acting differently in different situations.

Unfortunately, I don't know Karl Sims' work. But if his creatures simply appear to try to achieve a goal (I mean: to ~try to ~achieve a ~goal), then I don't see the irresistible force that would make us describe them in terms of the intentional stance, so they are not all that black.
 
jan said:
It's a pity that it seems you hadn't caught up to "8/10 In the Courtroom" as you wrote this reply...

I'm getting there...

-Bri
 
jan said:
Unfortunately, I don't know Karl Sims' work. But if his creatures simply appear to try to achieve a goal (I mean: to ~try to ~achieve a ~goal), then I don't see the irresistible force that would make us describe them in terms of the intentional stance, so they are not all that black.

Karl Sims is the person who did those amazing images that I believe you commented on earlier in this thread.

If you haven't already, you'll have to see Karl Sims' "Evolving Creatures" movie, which can be found here:

http://alife.ccp14.ac.uk/ftp-mirror/alife/zooland/pub/research/ci/Alife/karl-sims/creatures-demo.mpg

His paper describing how the amazing creatures "evolved" can be found here:

http://www.genarts.com/karl/papers/siggraph94.pdf

Sure, the creatures only "appear" to try to achieve a goal because they are just programs and are created by using an artificial form of natural selection (elimination of mutations that don't appear to achieve the goal better than their predecessors). However, once you see the video, you will agree that it would be very difficult to use anything other than the "intentional stance" to describe what they are doing.

Karl Sims first created a 3D world complete with simplified laws of physics. These creatures are then created using a "randomly modified" program, where one program randomly replaces segments of the creature's programming with random mutations of what was there. The new creatures have mutated body parts, "muscles," etc. The most "successful" of these mutations (based on some goal such as "walking," "following," "competing for food") are kept, while the others are discarded.
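If it helps, here's a heavily simplified sketch of that kind of mutate-and-select loop (illustrative Python, not Sims' actual code; the genome representation, mutation scheme, and fitness function are made up for the example, and the real thing scores creatures inside a physics simulation):

```python
import random

def mutate(genome):
    """Randomly replace one 'segment' of the creature's program."""
    g = genome[:]
    g[random.randrange(len(g))] = random.uniform(-1.0, 1.0)
    return g

def fitness(genome):
    """Stand-in for 'how well does this creature walk/follow/swim'.
    In Sims' work this score comes from simulating the creature's body."""
    return -sum(abs(x - 0.5) for x in genome)

def evolve(generations=100, population_size=20, genome_length=8):
    population = [[random.uniform(-1.0, 1.0) for _ in range(genome_length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Keep the most "successful" mutations, discard the rest.
        population.sort(key=fitness, reverse=True)
        survivors = population[:population_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(population_size - len(survivors))]
    return max(population, key=fitness)

best_creature = evolve()
```

Nothing in that loop "wants" anything, yet after enough generations the survivors look remarkably goal-directed.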

You'd have to ask Karl Sims whether he felt guilt for halting the program.

-Bri
 
jan said:
You seem to be defining "color" as "predictability," is that correct?

No, but almost. The problem is: unpredictability might be insufficient...

..."Color" therefore serves as a placeholder for "all the details we have yet to discover and explore and investigate". This move, I hope, is legitimate since it is possible to give criteria when something is "light" or "dark".

OK, this explains it a little better, but I guess my question is...if you don't know what color represents exactly, how do you know the thermostat is "whiter" than a person? Aren't you making claims about the relative colors of things without explaining how you know, or how such things can be determined? When you say that it is possible to give criteria for when something is "light" or "dark", what are those criteria?

My personal impression with them is that they are not that convincing, but perhaps I am missing something (if lifegazer and Iacchus are just some bots, then they fooled me).

Really, I just assumed lifegazer and Iacchus were bots since they seem to repeat the same assertions over and over while ignoring reasoning by others. Seems like something a naive computer program might do.

Seriously though, you are correct that the earlier versions of such programs (like the famous "Eliza") were simple indeed, but they are getting much more complex nowadays. I think they have even done tests such as the one you described and people had a very difficult time telling the computer programs apart from children.
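To give a sense of just how simple those early programs were, here's a toy Eliza-style responder (illustrative Python; the patterns and canned replies are made up, and the real ELIZA script was larger, but the principle is the same): pure pattern matching, with no understanding behind it at all.

```python
import re

# A handful of canned patterns; the real ELIZA script was bigger but no smarter.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1))
    # Generic phrase that hides the fact that the program knows nothing at all.
    return "Tell me more."

print(respond("I feel trapped by determinism"))
# -> Why do you feel trapped by determinism?
```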

And then there are Karl Sims' "creatures."

My point is that it may very well be possible to write a computer program that is complex enough that it would be impossible to relate to it by any other means than an intentional stance.


This morning, I once again had to repair our jalousie, and I was quite angry that it was broken once again.

I have to admit, I had to look up "jalousie" as we don't use that term very often where I'm from! Were you angry that it broke, or were you angry AT IT for breaking? Did you blame IT, or did you blame the manufacturer, or perhaps yourself for breaking it?


If your dog bites your hand, what would be your reaction?

Well, we have already made that argument about getting angry at people in a deterministic world. You are intending to show that thermostats, dogs, and people do have at least some sort of ~free will (perhaps in different amounts) and therefore we might be justified being angry at them.

-Bri
 
jan said:
The important thing is: the p-zombies lack FREE WILL. But since they are indistinguishable from ordinary human beings, they can't fail to have ~free will, since the definition of ~free will just depends on observable things.

Oh, OK! Thank you for clarifying that.

We are assuming determinism and non-dualism for the purposes of this discussion (we are trying to show that there is a ~free will that will be able to substitute adequately for a lack of FREE WILL as far as ethics is concerned). Because of these assumptions, we are also working under the assumption that FREE WILL probably doesn't exist either (otherwise there would be no reason to try to define ~free will if the real thing were possible). Therefore, not only is it possible for your p-zombies to exist, but they aren't really p-zombies at all, but full-fledged human beings. Was that your point?


No, and I think you raise a valid concern. I think the difference is what "being forced to use the intentional stance" means in practice...

...But if his creatures simply appear to try to achieve a goal (I mean: to ~try to ~achieve a ~goal), then I don't see the irresistible force that would make us describe them in terms of the intentional stance, so they are not all that black.

One of Karl Sims' creatures (a swimming creature) tries to "follow" a small glowing sphere that can be moved about its 3D underwater world. A human can move the sphere, and the creature will adjust its simulated muscles to move its simulated body to "catch" the sphere. The underlying programming is very complex since it's created by thousands of random modifications to the code that creates the creature. Even Karl Sims cannot tell you how it accomplishes its goal, and would have to resort to discussing this creature's actions in terms of the creature "wanting" to follow the sphere, or being "attracted" to the sphere. Of course, the truth is that the millions of other mutations that didn't follow the sphere as well were discarded.

If this qualifies as "being forced to use the intentional stance" (and I think it does) then this creature's box would be quite black even though the creature should have ~free will that is closer to a thermostat than a human being.

-Bri
 
jan said:
3/10 Manners of Speech

If I remember correctly, I never claimed determinism to be compatible with libertarian free will. I think that libertarian free will is incompatible with everything, including dualism...

1 : voluntary choice or decision (I do this of my own free will)
2 : freedom of humans to make choices that are not determined by prior causes or by divine intervention

Consider the following two sentences:

- I made a decision of my own free will.
- I made a decision by exercising my free will.

These two sentences mean the same thing, although the first uses definition #1 and the second uses definition #2. Definition #1 simply means "voluntary choice" while #2 means "the ability to make voluntary choices."

The American Heritage Dictionary defines it like this:

1. The ability or discretion to choose; free choice: chose to remain behind of my own free will.
2. The power of making free choices that are unconstrained by external circumstances or by an agency such as fate or divine will.

Again, the first is "free choice" and the second is "the power to make free choices."

The only reason there are two definitions is because they can be used in two different ways (both happen to be nouns) but they both state related ideas, and both refer to libertarian free will (what other kind of "free will" is there?).

In fact, there can be no voluntary choice (definition #1) without the ability to make voluntary choices (definition #2).

It is true that the phrase "of my own free will" (definition #1) is often used to mean "without coercion" (i.e. without a gun pointed at my head), but so is the second as in the sentence "I had no free will because a gun was pointed at my head." Also, both uses refer to any circumstance which provides a person with no opportunity for voluntary choice, including determinism, in which no voluntary choice exists at all. Other possible things that might prevent or affect one's ability to make voluntary choices would be mental defect, lack of ability to comprehend the consequences of one's action due to immaturity, and other mitigating circumstances.

In the Webster's definition, the phrase "that are not determined by divine intervention" simply means that a deity isn't forcing us to do its will (we're not puppets to a deity, but it might be possible that a deity somehow "enables" us to have free will). I don't see this version of free will (if it exists) as incompatible with dualism; rather, I think that it may only exist if dualism is true (i.e. we are more than the sum of our parts, that there is a part of us that we don't know about that allows us to make these choices ourselves, rather than our behavior being caused externally).

That is not to say that our choices are always unlimited. For example, we might be able to choose between only two possible things, or perhaps we are limited by the laws of physics (we cannot choose to fly, but we can choose whether to walk or run). Even if our choices are limited, we are still free to choose between them.

Ethically speaking, we are not held responsible for our actions if we're not choosing those actions voluntarily, whether that be because someone is holding a gun to our head or because we don't possess the ability to make our own choices. It would follow that if we don't possess the ability to make choices under determinism, then we cannot possibly voluntarily choose anything, and therefore cannot be held responsible for anything we do.


It is possible to expand this and develop a concept of "being forced by the firing of my neurons". But how often do people worry about this kind of force?

Modern ethics is concerned with many reasons for which we might not have the power to choose voluntarily, including mitigating circumstances, mental defect, or someone holding a gun to one's head. Most people don't tend to worry about determinism forcing their neurons to fire because most people presume libertarian free will to be a fact (and probably don't even consider the possibility of determinism), which would leave only other influences which might affect our ability to voluntarily choose to do or not do something.

If you suggest to the average person on the street that every choice they make was actually predetermined before they were born, they'd think you were crazy. Once explained, they are likely to ask something like "How can anything I do be voluntary if all my actions are being controlled by circumstances beyond my control that existed before I was born?" which is exactly the concept of "being forced by the firing of my neurons." If people don't associate "I didn't do this of my own free will" with determinism and neurons it is simply an indication that they presume free will to be a fact or that they don't know anything about determinism or neurons.


"I do this of my own free will" means something like "it wasn't a shotgun-wedding", it doesn't allude to philosophical determinism or neuroscience.

It absolutely would allude to determinism or neuroscience if people didn't presume free will (assuming they even know what determinism or neuroscience is). But it is also used quite often to refer to mental defect or any number of other things that might take away one's ability to make voluntary decisions.

The very fact that compatibilists have been attempting to mesh the ideas of determinism with free will for thousands of years proves that libertarian free will is indeed related to philosophical determinism and neuroscience. We cannot make any voluntary choices if we don't have libertarian free will.

So, it seems to me that your only hope at compatibilism is to do what Dennett attempted, which is to redefine "free will" (and "voluntary" and "intentional" and other related terms) in order to provide some means in a deterministic world by which we can distinguish between those things that we typically consider involuntary or unintentional and those things we typically consider voluntary or intentional.


If, for you, this is just some verbal trick and nothing besides libertarian free will is genuine free will, then I believe that free will doesn't exist, at least unless someone could show me a kind of free will that doesn't violate physicalism.

I believe that libertarian free will ("could have done otherwise" and "ultimate source") is FREE WILL, but I can perhaps concede that another definition of "free will" which can exist alongside determinism might be used to define ethics concerning involuntary acts.


I don't think that the forking paths or the ultimate source model are useful at all. More on that below.

That may very well be, but you have yet to prove it.


If free will is something we share with thermostats, and if we agree that thermostats are pretty deterministic, that doesn't seem like an impossible task.

Well, perhaps ~free will is something we share with thermostats, but you have to prove that ~free will is a meaningful substitute for FREE WILL. So far, you have only used it to indicate a spectrum of ~intention, ranging from objects we can only understand by adopting an intentional stance (black boxes) to objects we can understand using an intentional stance and perhaps other stances as well (whiter boxes). Your claim is that things that are commonly thought of as having INTENTION (people) are black boxes, while things that are sometimes thought of as not having INTENTION (thermostats) are grey boxes, and things that are almost never thought of as having INTENTION (a rock) are close to white.

Then again, humans are very good at anthropomorphizing all sorts of things ("that rock really likes to lie there and do nothing" or "the rock wants to get to the center of the earth when you drop it"), so I'm not sure there is anything that is truly white.

I can buy this view of things, but you would have to show it to be useful in distinguishing between the responsibility that could be held by a rock for destroying a house in a landslide, the responsibility of a thermostat for destroying the house by causing the pipes to freeze, and the responsibility that can be held by a human for bulldozing the house. Only one of these is considered a "crime" and even then only under specific circumstances.

And I still think that both the forked path and the ultimate source model are useless, despite thousands of years of tradition.

And still they have yet to be shown useless after thousands of years, nor has anyone even provided a reasonable alternative for the application of ethics. The very notion of "intent" and "responsibility" in most modern ethics system are tied up in the ideas of "could have done otherwise" and "ultimate source."


The thermostat as an example was, I think, provided by another poster, not by Dennett, and I just adopted it.

That's possible, but the Stanford article also uses it:

According to Dennett, even a thermostat can be interpreted as a very limited intentional system since its behavior can usefully be predicted by attributing to it adequate beliefs and desires to display it as acting rationally within some limited domain. For example, the thermostat desires that the room's temperature (or the engine's internal temperature) not go above or below a certain range. If it believes that it is out of the requisite range, the thermostat will respond appropriately to achieve its desired results.

From the Jargon File 4.4.7

Oh, now that seems silly to me. I'm a programmer by trade and could be referred to as a "hacker," so I can tell you for a fact that programmers/hackers/designers do sometimes anthropomorphize computer hardware and software, but they don't believe that it possesses any real intent.

Sure, it might be "useful" in some cases to anthropomorphize a rock, but that doesn't make the rock alive. Likewise, it might be "useful" to say that a dripping faucet is "sad" but does that mean it's really sad? I suppose you can argue that it's "useful" to think of a thermostat as having intentions, but I doubt if a repairman could possibly repair one unless he had a deeper knowledge of it. In fact, it would be very difficult indeed if we related to a thermostat using only an intentional stance. Therefore, we can think of the intentional stance as being "useful" but "incomplete." It might also be useful to consider the world as flat (such as with a map) but that doesn't make it so, and as a result the usefulness of the metaphor will always be limited.
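For what it's worth, here is roughly the thermostat's entire "mental life" (illustrative Python; the setpoint and hysteresis values are made up): a couple of comparisons that the design stance describes completely, no beliefs or desires required.

```python
def thermostat_step(current_temp, heater_on=False, setpoint=21.0, hysteresis=0.5):
    """Decide whether the heater should be on for the next moment.
    This is all the 'believing the room is too cold' and 'desiring to
    warm it up' that a thermostat ever does."""
    if current_temp < setpoint - hysteresis:
        return True   # "too cold": switch the heater on
    if current_temp > setpoint + hysteresis:
        return False  # "too warm": switch the heater off
    return heater_on  # within range: keep doing whatever it was doing

print(thermostat_step(19.0, heater_on=False))  # -> True
```

The intentional stance adds nothing here that those few lines don't already tell you.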

It could just as easily be argued that the boxes have no shades of grey, but are completely black or white. Anything that we cannot understand using anything other than an intentional stance has real INTENTIONS; otherwise we simply haven't found a better way of understanding or relating to it (yet). If we did know how to understand it another way, we would be able to understand it better than we currently can using an intentional stance. One might claim that humans are the only thing for which we will always have to use an intentional stance because only humans actually have true INTENTIONS. Alternately, one could claim that there are only white boxes, and that we can only understand the behavior of very complex objects (including humans) using an intentional stance because we haven't yet learned to understand it using a physical stance.


If this is true, why do we try to repair them?

Do you honestly feel that by repairing them, we are somehow holding thermostats ethically responsible for keeping the room a comfortable temperature? We only repair them when it is cheaper than replacing them. I feel no guilt in discarding a broken thermostat in the trash rather than repairing it.

-Bri

edited for poor grammar
 
Iacchus said:
Thank you for at least posing your reply "intelligently." :) What if the Creator were in fact endowed with free will? Wouldn't that provide for the best of both worlds? Wherein that which is most highly evolved becomes the cause, of which the rest becomes the effect? And, while it may not appear this way, I believe in compatibilism myself.

Well, I have yet to see an actual definition of free will beyond "the relative freedom to do what you want", which may in and of itself be deterministic.

Quite frankly, I cannot comprehend what "free will" is in the spiritual sense. If the mind is "spirit" (which I don't believe) then nevertheless it has to operate according to some kind of spirit physics.

The only alternative is random influences, which merely means your behavior is partially random and partially deterministic. Whatever this "spiritual free will" is, it seems to be excluded. What else could it be if neither random nor deterministic nor a combo of the two?
 
