If we're nitpicking, the pH scale isn't open-ended.
What are the boundaries of the pH scale, then?
Then how about time? I don't think that is open-ended in both directions!
If you want a precedent for negative numbers, look no further than Freshman statistics.
Correlation goes nicely from -1 to 1. Uncorrelated is, of course, 0.
See http://en.wikipedia.org/wiki/Correlation.
Is there anything particularly hard about the negative numbers here?
The number of standard deviations from average is also a nice, intuitive quantity that has a particular meaning for positive and negative numbers. Is there anything painfully hard about the idea of being one sigma above average height or one sigma below?
I think it would be nice if the weather report sometimes included not only the temperature, but how far above or below average that temperature was for that time of year, especially for record temperatures.
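For anyone who wants to play with those two quantities, here is a minimal sketch in Python using only the standard library; the sample figures and function names are made up purely for illustration and aren't taken from anyone's post:

```python
# Two familiar quantities that are naturally signed: the Pearson
# correlation coefficient (runs from -1 to +1, 0 meaning uncorrelated)
# and the z-score (standard deviations above or below the average,
# negative when below average).
from statistics import mean, stdev

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / ((len(xs) - 1) * stdev(xs) * stdev(ys))

def z_score(value, sample):
    return (value - mean(sample)) / stdev(sample)

heights = [160, 165, 170, 175, 180]   # made-up sample data
weights = [55, 60, 68, 72, 80]
print(pearson_r(heights, weights))    # roughly +0.99: strong positive correlation
print(z_score(180, heights))          # about +1.3: above-average height
print(z_score(160, heights))          # about -1.3: below-average height
```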
The analogy to coordinates should also be fairly obvious.
So I am a little puzzled why the use of negative numbers seems so objectionable to a couple of readers.
0 makes a neutral starting point for the 10/10 scale. Then later, credible evidence pulls you one way or the other. It suggests that if the claim is very new and you have nothing to go on, it makes sense to wait for more information before moving in a positive or negative direction.
The Bayesian approach to probability does equate having no information with a probability of 1/2. I think that it's consistent, if you look at it the right way.
I think by doing so it makes a subtle category error though. It equates "not enough information" with a 50/50 chance, or at least it seems to. Unless I've misunderstood, in your system a 0 can mean the odds of tossing a coin and getting heads, which we know to be very close to 50/50, or it can mean the odds of a non-interventionist God existing, which is unknowable.
I would rather the coin toss chance was 0.5, and the God question was a "not enough information to judge the probability".
Dr Adequate, though he tends to be slightly more ... blunt than absolutely necessary, gave a good reason:
So I am a little puzzled why the use of negative numbers seems so objectionable to a couple of readers.
0 makes a neutral starting point for the 10/10 scale. Then later, credible evidence pulls you one way or the other. It suggests that if the claim is very new and you have nothing to go on, it makes sense to wait for more information before moving in a positive or negative direction.
On the other hand, see my comments in another thread regarding a scale that goes from negative infinity to positive infinity and which is useful when dealing with incorporating new evidence into your probability assignments. (It's the same as the logit scale that chipotle mentioned in this thread.)
Here's a quick question. If you assign -3 to proposition X and -5 to proposition Y, and X and Y are independent, what value should you assign to the proposition that X and Y are both true?
Can you show me any way of answering questions like this without first converting into a SENSIBLE scale, doing the maths, and then converting back?
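For what it's worth, here is the "convert, do the maths, convert back" route sketched in Python, reading the -3 and -5 as values on the log-odds (logit) scale mentioned above; the function names are just for illustration:

```python
import math

def logit_to_prob(log_odds):
    # log-odds -> probability
    return 1.0 / (1.0 + math.exp(-log_odds))

def prob_to_logit(p):
    # probability -> log-odds
    return math.log(p / (1.0 - p))

p_x = logit_to_prob(-3)        # about 0.047
p_y = logit_to_prob(-5)        # about 0.0067
p_both = p_x * p_y             # independence: multiply the probabilities
print(prob_to_logit(p_both))   # about -8.05 on the log-odds scale
```

Note that the answer is not simply the sum -3 + -5 = -8, although for values this far out on the negative end it comes close.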
If we might change our mind, we aren't totally sure. But we shouldn't ever be totally sure. One good thing about a scale that goes from negative infinity to positive infinity is that it makes it seem more reasonable to exclude the endpoints, as opposed to a scale that goes from 0 to 1, or one that goes from -10 to +10.
The idea is that we can at the same time be sure of something and still imagine extraordinary evidence that could change our minds.
[...] the more certain we are of something the better the evidence needs to be to change our minds.
The Bayesian approach to probability does equate having no information with a probability of 1/2. I think that it's consistent, if you look at it the right way.
If we flip a coin once, we really do have no information about whether that one flip will come up heads or tails. We have no reason to suspect one outcome more than the other. The idea that you've expressed as "we know the probability is 1/2" can be thought of, not as talking about a single coin flip, but about the independence of different coin flips. It says that even if we flip the coin a bunch of times, and we know what it did on those flips, we still won't know anything about what it's going to do on the next flip. It again might come up heads or tails, and we again have no reason to suspect one outcome more than the other.
The God question doesn't have a probability, the way you think of probability. It's not something that can be repeated the way a coin flip can be. Either God exists or he doesn't, and what we don't know is which. What would it mean to know the probability that he exists?
I'm not sure. I think, in a real situation, you'd be likely to find that you do have some background information that could help you decide, as opposed to a hypothetical situation where you can stipulate "I have no information. Period. Now what do I do?".
That sounds a bit silly to me. What if I come up with three possible statements, each of which I have no information about, except that each excludes the other two? Are we supposed to "equate" that to a 25% chance of each statement being true and a 25% chance none are, or a 16.6% chance of each statement being true and a 50% chance none are, or what?
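For what it's worth, the arithmetic behind the two readings looks like this; A, B and C are placeholders for the three mutually exclusive statements, and the numbers are just the ones from the question:

```python
from fractions import Fraction

# Reading 1: spread ignorance evenly over the four outcomes
# {A, B, C, none of them}.
reading_1 = {"A": Fraction(1, 4), "B": Fraction(1, 4),
             "C": Fraction(1, 4), "none": Fraction(1, 4)}

# Reading 2: give "none of them" half the weight and split the rest
# evenly (the 16.6% / 50% option in the question).
reading_2 = {"A": Fraction(1, 6), "B": Fraction(1, 6),
             "C": Fraction(1, 6), "none": Fraction(1, 2)}

# Both are perfectly consistent probability assignments...
assert sum(reading_1.values()) == 1
assert sum(reading_2.values()) == 1

# ...but they disagree about every individual statement.
print(reading_1["A"], reading_2["A"])   # 1/4 versus 1/6
```

Nothing in "I have no information" picks out one of these assignments over the other, which seems to be exactly the complaint.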
You have information about the coin. For example, you know it's more or less symmetric. But what information have you about the result of its next flip? Will it be heads or tails? You don't know.
It seems to me that we do have information, specifically the information that the odds of each outcome are 0.5. If an alien told us that fleebles exist in two states, fnarg and gnarf, and that this thing here is a fleeble, we wouldn't know the probability of each state. We lack the information we have about the coin.
Yes, it's hard to tell if there is a non-interventionist God. It's also hard to tell whether the next flip of this coin will be heads or tails. You might be able to explain in detail why you can't tell, for example, by referring to the symmetry of the coin, etc., but in the end, you still can't tell.
Yes, but zero on the Dave Scale has been defined as "hard to tell", which fits the God question accurately. It's hard (very, very hard!) to tell if there is a non-interventionist God. But zero is also bracketed by "probable" and "improbable", implying that a known 50/50 chance is also a zero.
I'm not sure. I think, in a real situation, you'd be likely to find that you do have some background information that could help you decide, as opposed to a hypothetical situation where you can stipulate "I have no information. Period. Now what do I do?".
You have information about the coin. For example, you know it's more or less symmetric. But what information have you about the result of its next flip? Will it be heads or tails? You don't know.
When you say "the probability is 1/2 that the next flip of this coin will be heads", you are saying something about the coin, not really about the result of its next flip, though it might sound that way. When a Bayesian says the same thing, he is saying something about the next flip, or rather, about his knowledge about the next flip, namely, that he doesn't know what its result will be.
Therefore, he is happy also to assign a probability of 1/2 to other statements whose truth he is completely uncertain about---even though you would say about them that you don't have enough information to assign any probability---because all he ever means by assigning a probability of 1/2 to a statement is that he doesn't know whether the statement is true.
Yes, it's hard to tell if there is a non-interventionist God. It's also hard to tell whether the next flip of this coin will be heads or tails. You might be able to explain in detail why you can't tell, for example, by referring to the symmetry of the coin, etc., but in the end, you still can't tell.
This sounds extraordinarily dubious, philosophically.
Suppose two people are going to run a race.
In World #1, I happen to know that the two runners are so evenly matched that the race is a 50/50 proposition.
In World #2, I have no idea how good each runner is.
That's an important difference. In World #1, just as one example, I would be rational to automatically take any bet on either runner that offers favourable odds. In World #2 I would not be rational to automatically do so.
Knowing that things are 50/50 is epistemologically distinct from not having any idea what the odds are.
I can tell what the odds are, though. I cannot tell what the odds of a non-interventionist God existing are.
Absolutely. But within the ontological framework of probability theory, the difference vanishes.
A key aspect of statistics -- and of Bayesian statistics in particular -- is that any event (set) has a probability associated with it. You described a particular view of probability in terms of "would it be rational to take a bet." In Bayesian statistics, you don't have the option of not taking the bet -- you must place a bet. (Any event set by definition has an a priori distribution.) Another way of looking at it is that you're not the bettor, but the bookie -- and therefore you must accept any bets that people want to make with you.
However, as the bookie, you can adjust your odds once people start betting.
It's not hard to show that starting out from a 50/50 distribution as a representation of complete ignorance gives you the most profitable freedom to adjust odds (against all possible distributions). Basically, the more bias you have in your initial set, the more money you will need to lose to adjust for that bias.
In the Bayesian probability framework, you can't not tell me. You have to cite some number or other.
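A rough sketch of that "least biased starting point" claim, under assumptions of my own that are not in the thread: suppose the bookie quotes a probability q for a yes/no event, offers fair odds at that q on either side for a one-unit stake, and faces a bettor who knows the true probability and always backs whichever side is underpriced. The bookie's worst-case expected loss then looks like this:

```python
def worst_case_loss(q):
    # If "yes" is underpriced, the worst case is that the event is
    # certain and the bettor collects the (1 - q) / q payout every time.
    # If "no" is underpriced, the worst case is that the event is
    # impossible and the bettor collects q / (1 - q) every time.
    return max((1 - q) / q, q / (1 - q))

for q in (0.5, 0.7, 0.9, 0.99):
    print(q, round(worst_case_loss(q), 2))
# 0.5 -> 1.0, 0.7 -> 2.33, 0.9 -> 9.0, 0.99 -> 99.0
# The exposure is smallest when the opening quote is 50/50, and it
# grows the more biased the opening quote is.
```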
I can't see a difference. In both cases, you might win and you might lose, and you have no way to tell which.
Suppose two people are going to run a race.
In World #1, I happen to know that the two runners are so evenly matched that the race is a 50/50 proposition.
In World #2, I have no idea how good each runner is.
That's an important difference. In World #1, just as one example, I would be rational to automatically take any bet on either runner that offers favourable odds. In World #2 I would not be rational to automatically do so.
There are no real odds. What's real is that something is the case or that it isn't. If you don't know whether it is or is not the case, you use odds to describe your incomplete knowledge about it, to describe the strength of your belief that it's true. Or, at least, that's the Bayesian view of probability. Which is the only one that makes sense to me.
Knowing that things are 50/50 is epistemologically distinct from not having any idea what the odds are.
I don't know what it means to talk about "the odds of a non-interventionist God existing", as if the odds were some objective property of the world that we could discover. He exists, or he doesn't exist. How is trying to determine the odds of him existing different from just trying to determine whether he exists? (Obviously, if he doesn't intervene, we won't get anywhere in our attempt. But that's not the point. Pretend we're talking about something that we have no information about at the moment but that we could gather information about if we tried.)
I can tell what the odds are, though. I cannot tell what the odds of a non-interventionist God existing are.
I can't see a difference. In both cases, you might win and you might lose, and you have no way to tell which.
Why is it rational to take a bet in World #1? You might lose your money. Maybe you don't want to risk losing your money, even for the chance of getting more.
If you don't mind the risk, why would you mind it in World #2? You might argue, well, maybe the runner I'll be betting on is much worse than the other. Sure, but maybe he's much better. You don't know, just as you don't know which of the evenly-matched runners in World #1 will win.
I know that it's generally considered rational to take a bet that has positive expectation, but I can't think of any justification for this position, in single cases. The only justification I can see is the argument that, over your entire life, you will have lots of opportunities for taking lots of independent bets of roughly the same size, each with positive expectation, and if you take all of them, the probability is very nearly 100% that you will come out ahead overall. (And we'll just pretend that the very small probability of you coming out behind is precisely zero.)
There are no real odds. What's real is that something is the case or that it isn't. If you don't know whether it is or is not the case, you use odds to describe your incomplete knowledge about it, to describe the strength of your belief that it's true. Or, at least, that's the Bayesian view of probability. Which is the only one that makes sense to me.
I don't know what it means to talk about "the odds of a non-interventionist God existing", as if the odds were some objective property of the world that we could discover. He exists, or he doesn't exist. How is trying to determine the odds of him existing different from just trying to determine whether he exists? (Obviously, if he doesn't intervene, we won't get anywhere in our attempt. But that's not the point. Pretend we're talking about something that we have no information about at the moment but that we could gather information about if we tried.)
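The "lots of independent bets" justification a few lines up is easy to check with a quick simulation; the stake, the edge and the numbers of bets below are made-up figures, chosen only to show the chance of finishing ahead creeping toward 100%:

```python
import random

def chance_of_finishing_ahead(n_bets, win_prob=0.55, stake=1.0, trials=10000):
    # Each bet wins `stake` with probability `win_prob` and loses `stake`
    # otherwise, so each one has a small positive expectation.
    ahead = 0
    for _ in range(trials):
        total = sum(stake if random.random() < win_prob else -stake
                    for _ in range(n_bets))
        if total > 0:
            ahead += 1
    return ahead / trials

for n in (1, 10, 100, 1000):
    print(n, chance_of_finishing_ahead(n))
# One bet leaves you ahead only about 55% of the time; a thousand of
# them leave you ahead in very nearly every run.
```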
I've made up a (not totally original) scale for use in my Science and Nonsense class. I call it the 10/10 scale, pronounced "ten, ten".
On the 10/10 scale,
-10 means definitely not true
-5 means probably not true
0 means hard to tell
+5 means probably true
+10 means definitely true
Most scales go from 1 to 5 or 1 to 10. I think a negative to positive scale is important because it gives a nice symmetry to the situation and it has a definite place for 0.
I'm not proposing this as an Earth-shattering improvement, but as a nice incremental improvement over the previous scales.
It also clarifies that "probably true" is not the same as "hard to tell", which is not the same as "probably not true", a conflation we might imagine a TV lawyer in a trial badgering a witness towards.
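If it helps to see the scale written down as a lookup, here is a throwaway sketch; only the five anchor points are actually defined by the scale, so the cutoffs between the bands are my own arbitrary choices:

```python
def ten_ten_label(score):
    # Verbal reading of a 10/10 score. Only -10, -5, 0, +5 and +10 are
    # defined; the band edges used here are arbitrary.
    if not -10 <= score <= 10:
        raise ValueError("10/10 scores run from -10 to +10")
    if score <= -8:
        return "definitely not true"
    if score <= -3:
        return "probably not true"
    if score < 3:
        return "hard to tell"
    if score < 8:
        return "probably true"
    return "definitely true"

print(ten_ten_label(10))   # definitely true
print(ten_ten_label(0))    # hard to tell
print(ten_ten_label(-5))   # probably not true
```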
I have students come up with claims that fall in different parts of the scale and ask them to come up with a claim that they put at +10 and think up what sort of evidence would change their mind.
For instance, "the Earth is round" is a claim that most people would place at +10. The evidence needed for most people to change their mind would be, say, being brought to the edge of the world (as seen in Erik the Viking) and looking over that edge.
The idea is that we can at the same time be sure of something and still imagine extraordinary evidence that could change our minds. I think this captures the tentative nature of science pretty well. It's not that we relegate everything to live between probably-not-true and probably-true for fear of being wrong; it's that the more certain we are of something, the better the evidence needs to be to change our minds.
I would like to hear what my fellow skeptics think of the scale, if they've seen it elsewhere and if they have any questions.
-David
Why should we accept the Bayesian assumption then, if it makes important information vanish and demands we arbitrarily assign probability values to things we don't understand?