• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

10/10 scale

What are the boundaries of the pH scale, then?

I'm not sure offhand, but there's a limit to how many H+ or OH- ions can physically/chemically fit into a given amount of water.

I don't think anything gets much below -4 or much above 15.
 
It's not quite that simple, rjh01, although I admit that I thought so myself based on my fuzzy memories of first-year aqueous chemistry at university. When I looked it up it turned out to be more complicated.

Check it out:

http://en.wikipedia.org/wiki/Ph_Scale

Basically the pH scale measures hydrogen ion activity, and the equations you use to estimate pH at middling levels of acidity and alkalinity break down at really high or low levels of hydrogen ion activity. So while you might think from those equations that 0 was the lowest pH, it turns out it is not.

Battery acid is more like -0.5 than 0, and it's not the most acidic thing in the universe.
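For the curious, here's roughly how the numbers work out in Python, if you plug hydrogen ion activity straight into the definition. (Treating activity as interchangeable with concentration is itself only an approximation, and it's exactly what breaks down at the extremes.)

Code:
import math

def ph_from_activity(h_activity):
    """pH is the negative base-10 logarithm of hydrogen ion activity."""
    return -math.log10(h_activity)

# Dilute solutions: activity is roughly the molar concentration.
print(ph_from_activity(1e-7))   # pure water, about 7
print(ph_from_activity(1e-14))  # about 14, near the usual "top" of the scale

# Concentrated acid: once the effective hydrogen ion activity exceeds 1 mol/L,
# the logarithm goes negative; an activity of about 3.2 gives a pH of about -0.5.
print(ph_from_activity(3.16))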
 
If you want a precedent for negative numbers, look no further than Freshman statistics.

Correlation goes nicely from -1 to 1. Uncorrelated is, of course, 0.
see, http://en.wikipedia.org/wiki/Correlation.
Is there anything particularly hard about the negative numbers here?

The number of standard deviations from average is also a nice, intuitive quantity that has a particular meaning for positive and negative numbers. Is there anything painfully hard about the idea of being one sigma above average height or one sigma below?

I think it would be nice if the weather report sometimes included not only the temperature, but how far above or below average that temperature was for that time of year, especially for record temperatures.
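As a rough sketch of the "sigmas from average" idea, with completely made-up numbers:

Code:
def sigmas_from_average(value, mean, std_dev):
    """How many standard deviations a value sits above (+) or below (-) the average."""
    return (value - mean) / std_dev

# Hypothetical figures: say the average high for this date is 25 C,
# with a standard deviation of 4 C.
todays_high = 33.0
z = sigmas_from_average(todays_high, mean=25.0, std_dev=4.0)
print(f"Today's high is {z:+.1f} sigma relative to the seasonal average")  # +2.0 sigma

A weather report could quote that +2.0 right alongside the temperature.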

The analogy to coordinates should also be fairly obvious.

So I am a little puzzled why the use of negative numbers seems so objectionable to a couple of readers.

0 makes a neutral starting point for the 10/10 scale. Then later, credible evidence pulls you one way or the other. It suggests that if the claim is very new and you have nothing to go on, it makes sense to wait for more information before moving in a positive or negative direction.

Sometimes a claim is so out-there that on its face you can form an opinion of how credible it is. To friends I call some claims "prescoffed" (parsed "pre-scoffed"), in that they are so absurd that scoffing is unneeded because it is already included. Human superpowers (or super-human powers) that involve new physics fall into this category. The idea that kangaroos and koala bears got from Mount Ararat to Australia all on their own also falls into this category. You can always change my mind with good evidence, but for some claims the evidence would have to be incredibly good.

You can do the same by starting with 5 on a scale from 0 to 10, but it really doesn't feel the same as a nice, middle, neutral 0.

Unless you think that everything new should be held in complete doubt (-10) as a default position until something else is empirically verified. Then 0 would make a nice place for the lower end of the scale.

-David
 
If you want a precedent for negative numbers, look no further than Freshman statistics.

Correlation goes nicely from -1 to 1. Uncorrelated is, of course, 0.
see, http://en.wikipedia.org/wiki/Correlation.
Is there anything particularly hard about the negative numbers here?

The number of standard deviations from average is also a nice, intuitive quantity that has a particular meaning for positive and negative numbers. Is there anything painfully hard about the idea of being one sigma above average height or one sigma below?

I think it would be nice if the weather report sometimes included not only the temperature, but how far above or below average that temperature was for that time of year, especially for record temperatures.

The analogy to coordinates should also be fairly obvious.

So I am a little puzzled why the use of negative numbers seems so objectionable to a couple of readers.

I think mental constructs like this should be as complicated as is necessary, but no more complicated. I think this scale is needlessly complicated because of its relatively arbitrary zero point, which makes it difficult to perform even rough calculations of the combined probability of multiple simultaneous propositions.

That's not a totally insignificant problem, because I believe a lot of first-year university students have a shaky grasp of probability in the first place. In Australia, at least, only the advanced maths students ever cover the topic in high school, or that is how it was in my day.

0 makes a neutral starting point for the 10/10 scale. Then later, credible evidence pulls you one way or the other. It suggests that if the claim is very new and you have nothing to go on, it makes sense to wait for more information before moving in a positive or negative direction.

I think by doing so it makes a subtle category error though. It equates "not enough information" with a 50/50 chance, or at least it seems to. Unless I've misunderstood, in your system a 0 can mean the odds of tossing a coin and getting heads, which we know to be very close to 50/50, or it can mean the odds of a non-interventionist God existing, which is unknowable.

I would rather the coin toss chance was 0.5, and the God question was a "not enough information to judge the probability".
 
I think by doing so it makes a subtle category error though. It equates "not enough information" with a 50/50 chance, or at least it seems to. Unless I've misunderstood, in your system a 0 can mean the odds of tossing a coin and getting heads, which we know to be very close to 50/50, or it can mean the odds of a non-interventionist God existing, which is unknowable.

I would rather the coin toss chance was 0.5, and the God question was a "not enough information to judge the probability".
The Bayesian approach to probability does equate having no information with a probability of 1/2. I think that it's consistent, if you look at it the right way.

If we flip a coin once, we really do have no information about whether that one flip will come up heads or tails. We have no reason to suspect one outcome more than the other. The idea that you've expressed as "we know the probability is 1/2" can be thought of, not as talking about a single coin flip, but about the independence of different coin flips. It says that even if we flip the coin a bunch of times, and we know what it did on those flips, we still won't know anything about what it's going to do on the next flip. It again might come up heads or tails, and we again have no reason to suspect one outcome more than the other.

The God question doesn't have a probability, the way you think of probability. It's not something that can be repeated the way a coin flip can be. Either God exists or he doesn't, and what we don't know is which. What would it mean to know the probability that he exists?
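One way to make that contrast concrete is the standard Bayesian coin model with a uniform prior on the coin's bias (my choice of model here, purely for illustration):

Code:
def predictive_heads(heads_seen, tails_seen, alpha=1.0, beta=1.0):
    """Posterior predictive P(next flip is heads) under a Beta(alpha, beta) prior
    on the coin's bias; alpha = beta = 1 is the uniform prior."""
    return (alpha + heads_seen) / (alpha + beta + heads_seen + tails_seen)

# With nothing observed, the prediction for the next flip is 1/2: "no information".
print(predictive_heads(0, 0))   # 0.5

# A coin of unknown bias: seeing 8 heads in 10 flips moves the prediction.
print(predictive_heads(8, 2))   # 0.75

# A coin we *know* is fair is a different model: its flips are independent,
# so the prediction stays at 0.5 no matter what the earlier flips did.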
 
So I am a little puzzled why the use of negative numbers seems so objectionable to a couple of readers.

0 makes a neutral starting point for the 10/10 scale. Then later, credible evidence pulls you one way or the other. It suggests that if the claim is very new and you have nothing to go on, it makes sense to wait for more information before moving in a positive or negative direction.
Dr Adequate, though he tends to be slightly more ... blunt than absolutely necessary, gave a good reason:

Here's a quick question. If you assign -3 to proposition X and -5 to proposition Y, and X and Y are independent, what value should you assign to the proposition that X and Y are both true?

Can you show me any way of answering questions like this without first converting into a SENSIBLE scale, doing the maths, and then converting back?
On the other hand, see my comments in another thread regarding a scale that goes from negative infinity to positive infinity and which is useful when dealing with incorporating new evidence into your probability assignments. (It's the same as the logit scale that chipotle mentioned in this thread.)
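To make Dr Adequate's question concrete, here's a rough sketch. The linear mapping from the -10..+10 scale onto probabilities is purely hypothetical (nothing in the thread fixes an official mapping), but it shows the convert, multiply, convert-back step he describes:

Code:
def scale_to_probability(s):
    """Hypothetical linear mapping: -10 -> 0.0, 0 -> 0.5, +10 -> 1.0."""
    return (s + 10) / 20.0

def probability_to_scale(p):
    return 20.0 * p - 10.0

p_x = scale_to_probability(-3)   # 0.35
p_y = scale_to_probability(-5)   # 0.25
p_both = p_x * p_y               # independent propositions multiply: 0.0875
print(round(probability_to_scale(p_both), 2))   # about -8.25 on the -10..+10 scale

On the log-odds (logit) scale mentioned above, independent pieces of evidence simply add to your running total, which is what makes it convenient for updating; conjunctions like the one in the question still need the detour through probabilities.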

I disagree with this, from your initial post:

The idea is that we can at the same time be sure of something and still imagine extraordinary evidence that could change our minds.
If we might change our mind, we aren't totally sure. But we shouldn't ever be totally sure. One good thing about a scale that goes from negative infinity to positive infinity is that it makes it seem more reasonable to exclude the endpoints, as opposed to a scale that goes from 0 to 1, or one that goes from -10 to +10.

But here you're right on target:

[...] the more certain we are of something the better the evidence needs to be to change our minds.
 
The Bayesian approach to probability does equate having no information with a probability of 1/2. I think that it's consistent, if you look at it the right way.

That sounds a bit silly to me. What if I come up with three possible statements, each of which I have no information about, except that each excludes the other two? Are we supposed to "equate" that to a 25% chance of each statement being true and a 25% chance none are, or 16.6% chance of each statement being true and a 50% chance none are, or what?
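To spell out why that question has teeth, a quick sanity check with the numbers above:

Code:
# Three mutually exclusive statements A, B, C, plus the case that none is true.
naive = [0.5, 0.5, 0.5]                    # "no information, so 1/2 each"
print(sum(naive))                          # 1.5 -- impossible, since they exclude each other

even_split = [0.25, 0.25, 0.25, 0.25]      # 25% each and 25% for "none of them"
print(sum(even_split))                     # 1.0

skeptical = [1/6, 1/6, 1/6, 1/2]           # ~16.6% each and 50% for "none of them"
print(round(sum(skeptical), 6))            # 1.0 -- also consistent, but a different answer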

If we flip a coin once, we really do have no information about whether that one flip will come up heads or tails. We have no reason to suspect one outcome more than the other. The idea that you've expressed as "we know the probability is 1/2" can be thought of, not as talking about a single coin flip, but about the independence of different coin flips. It says that even if we flip the coin a bunch of times, and we know what it did on those flips, we still won't know anything about what it's going to do on the next flip. It again might come up heads or tails, and we again have no reason to suspect one outcome more than the other.

It seems to me that we do have information, specifically the information that the odds of each outcome are 0.5. If an alien told us that fleebles exist in two states, fnarg and gnarf, and that this thing here is a fleeble, we don't know what the odds of each state are. We lack the information we have about the coin.

The God question doesn't have a probability, the way you think of probability. It's not something that can be repeated the way a coin flip can be. Either God exists or he doesn't, and what we don't know is which. What would it mean to know the probability that he exists?

Yes, but zero on the Dave Scale has been defined as "hard to tell", which fits the God question accurately. It's hard (very very hard!) to tell if there is a non-interventionist God. But zero is also bracketed by "probable" and "improbable", implying that a known 50/50 chance is also a zero.
 
That sounds a bit silly to me. What if I come up with three possible statements, each of which I have no information about, except that each excludes the other two? Are we supposed to "equate" that to a 25% chance of each statement being true and a 25% chance none are, or 16.6% chance of each statement being true and a 50% chance none are, or what?
I'm not sure. I think, in a real situation, you'd be likely to find that you do have some background information that could help you decide, as opposed to a hypothetical situation where you can stipulate "I have no information. Period. Now what do I do?".

It seems to me that we do have information, specifically the information that the odds of each outcome are 0.5. If an alien told us that fleebles exist in two states, fnarg and gnarf, and that this thing here is a fleeble, we don't know what the odds of each state are. We lack the information we have about the coin.
You have information about the coin. For example, you know it's more or less symmetric. But what information have you about the result of its next flip? Will it be heads or tails? You don't know.

When you say "the probability is 1/2 that the next flip of this coin will be heads", you are saying something about the coin, not really about the result of its next flip, though it might sound that way. When a Bayesian says the same thing, he is saying something about the next flip, or rather, about his knowledge about the next flip, namely, that he doesn't know what its result will be. Therefore, he is happy also to assign a probability of 1/2 to other statements whose truth he is completely uncertain about---even though you would say about them that you don't have enough information to assign any probability---because all he ever means by assigning a probability of 1/2 to a statement is that he doesn't know whether the statement is true.

Yes, but zero on the Dave Scale has been defined as "hard to tell", which fits the God question accurately. It's hard (very very hard!) to tell if there is a non-interventionist God. But zero is also bracketed by "probable" and "improbable", implying that a known 50/50 chance is also a zero.
Yes, it's hard to tell if there is a non-interventionist God. It's also hard to tell whether the next flip of this coin will be heads or tails. You might be able to explain in detail why you can't tell, for example, by referring to the symmetry of the coin, etc., but in the end, you still can't tell.
 
I'm not sure. I think, in a real situation, you'd be likely to find that you do have some background information that could help you decide, as opposed to a hypothetical situation where you can stipulate "I have no information. Period. Now what do I do?".

Fair enough. So in practice it's an entirely philosophical claim?

You have information about the coin. For example, you know it's more or less symmetric. But what information have you about the result of its next flip? Will it be heads or tails? You don't know.

When you say "the probability is 1/2 that the next flip of this coin will be heads", you are saying something about the coin, not really about the result of its next flip, though it might sound that way. When a Bayesian says the same thing, he is saying something about the next flip, or rather, about his knowledge about the next flip, namely, that he doesn't know what its result will be.

I'm not sure I follow you here. How does this get to the conclusion in the next paragraph?

Therefore, he is happy also to assign a probability of 1/2 to other statements whose truth he is completely uncertain about---even though you would say about them that you don't have enough information to assign any probability---because all he ever means by assigning a probability of 1/2 to a statement is that he doesn't know whether the statement is true.

This sounds extraordinarily dubious, philosophically.

Suppose two people are going to run a race.

In World #1, I happen to know that the two runners are so evenly matched that the race is a 50/50 proposition.

In World #2, I have no idea how good each runner is.

That's an important difference. In World #1, just as one example, I would be rational to automatically take any bet on either runner that offers favourable odds. In World #2 I would not be rational to automatically do so.

Knowing that things are 50/50 is epistemologically distinct from not having any idea what the odds are.

Yes, it's hard to tell if there is a non-interventionist God. It's also hard to tell whether the next flip of this coin will be heads or tails. You might be able to explain in detail why you can't tell, for example, by referring to the symmetry of the coin, etc., but in the end, you still can't tell.

I can tell what the odds are, though. I cannot tell what the odds of a non-interventionist God existing are.
 
This sounds extraordinarily dubious, philosophically.

Bayesian statistics are a rather small subset of philosophy.


Suppose two people are going to run a race.

In World #1, I happen to know that the two runners are so evenly matched that the race is a 50/50 proposition.

In World #2, I have no idea how good each runner is.

That's an important difference. In World #1, just as one example, I would be rational to automatically take any bet on either runner that offers favourable odds. In World #2 I would not be rational to automatically do so.

Knowing that things are 50/50 is epistemologically distinct from not having any idea what the odds are.

Absolutely. But within the ontological framework of probability theory, the difference vanishes.

A key aspect of statistics -- and of Bayesian statistics in particular -- is that any event (set) has a probability associated with it. You described a view of probability in terms of "would it be rational to take a bet." In Bayesian statistics, you don't have the option of not taking the bet -- you must place a bet. (In particular, any event set by definition has an a priori distribution.) Another way of looking at it is that you're not the bettor, but the bookie -- and therefore you must accept any bets that people want to make with you.

However, as the bookie, you can adjust your odds once people start betting.

It's not hard to show that starting out from a 50/50 distribution as a representation of complete ignorance gives you the most profitable freedom to adjust odds (against all possible distributions). Basically, the more bias you have in your initial set, the more money you will need to lose to adjust for that bias.
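Here's a rough way to see that intuition, using worst-case log loss as a stand-in for the bookie's exposure (my simplification, not a full proof of the claim):

Code:
import math

def worst_case_log_loss(p):
    """Worst case of the log loss for assigning probability p to a yes/no proposition:
    you pay -log(p) if it turns out true and -log(1 - p) if it turns out false."""
    return max(-math.log(p), -math.log(1 - p))

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(p, round(worst_case_log_loss(p), 3))
# The worst case is smallest at p = 0.5 (about 0.693); the more biased the starting
# point, the bigger the exposure if the world goes the other way.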



I can tell what the odds are, though. I cannot tell what the odds of a non-interventionist God existing are.

In the Bayesian probability framework, you can't not tell me. You have to cite some number or other.
 
Absolutely. But within the ontological framework of probability theory, the difference vanishes.

A key aspect of statistics -- and of Bayesian statistics in particular -- is that any event (set) has a probability associated with it. You described a view of probability in terms of "would it be rational to take a bet." In Bayesian statistics, you don't have the option of not taking the bet -- you must place a bet. (In particular, any event set by definition has an a priori distribution.) Another way of looking at it is that you're not the bettor, but the bookie -- and therefore you must accept any bets that people want to make with you.

However, as the bookie, you can adjust your odds once people start betting.

It's not hard to show that starting out from a 50/50 distribution as a representation of complete ignorance gives you the most profitable freedom to adjust odds (against all possible distributions). Basically, the more bias you have in your initial set, the more money you will need to lose to adjust for that bias.

In the Bayesian probability framework, you can't not tell me. You have to cite some number or other.

Okay. Why should we accept the Bayesian assumption then, if it makes important information vanish and demands we arbitrarily assign probability values to things we don't understand? What's the use of this philosophical construct if it makes us chuck useful data on one hand, and make highly dubious assumptions on the other?
 
Suppose two people are going to run a race.

In World #1, I happen to know that the two runners are so evenly matched that the race is a 50/50 proposition.

In World #2, I have no idea how good each runner is.

That's an important difference. In World #1, just as one example, I would be rational to automatically take any bet on either runner that offers favourable odds. In World #2 I would not be rational to automatically do so.
I can't see a difference. In both cases, you might win and you might lose, and you have no way to tell which.

Why is it rational to take a bet in World #1? You might lose your money. Maybe you don't want to risk losing your money, even for the chance of getting more.

If you don't mind the risk, why would you mind it in World #2? You might argue, well, maybe the runner I'll be betting on is much worse than the other. Sure, but maybe he's much better. You don't know, just as you don't know which of the evenly-matched runners in World #1 will win.

I know that it's generally considered rational to take a bet that has positive expectation, but I can't think of any justification for this position, in single cases. The only justification I can see is the argument that, over your entire life, you will have lots of opportunities for taking lots of independent bets of roughly the same size, each with positive expectation, and if you take all of them, the probability is very nearly 100% that you will come out ahead overall. (And we'll just pretend that the very small probability of you coming out behind is precisely zero.)
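For what it's worth, that lifetime argument is easy to simulate with made-up numbers (a small edge on an even-money bet, repeated a thousand times):

Code:
import random

def fraction_finishing_ahead(n_bets=1000, stake=10.0, win_prob=0.55, trials=2000):
    """Simulate taking many independent even-money bets with a small positive edge."""
    ahead = 0
    for _ in range(trials):
        total = sum(stake if random.random() < win_prob else -stake for _ in range(n_bets))
        if total > 0:
            ahead += 1
    return ahead / trials

# Each bet has expectation +1.0 (0.55 * 10 - 0.45 * 10). Any single bet can lose,
# but across 1000 of them you finish ahead almost every time.
print(fraction_finishing_ahead())   # very close to 1.0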

Knowing that things are 50/50 is epistemologically distinct from not having any idea what the odds are.
There are no real odds. What's real is that something is the case or that it isn't. If you don't know whether it is or is not the case, you use odds to describe your incomplete knowledge about it, to describe the strength of your belief that it's true. Or, at least, that's the Bayesian view of probability. Which is the only one that makes sense to me.

I can tell what the odds are, though. I cannot tell what the odds of a non-interventionist God existing are.
I don't know what it means to talk about "the odds of a non-interventionist God existing", as if the odds were some objective property of the world that we could discover. He exists, or he doesn't exist. How is trying to determine the odds of him existing different from just trying to determine whether he exists? (Obviously, if he doesn't intervene, we won't get anywhere in our attempt. But that's not the point. Pretend we're talking about something that we have no information about at the moment but that we could gather information about if we tried.)
 
I can't see a difference. In both cases, you might win and you might lose, and you have no way to tell which.

Why is it rational to take a bet in World #1? You might lose your money. Maybe you don't want to risk losing your money, even for the chance of getting more.

If you don't mind the risk, why would you mind it in World #2? You might argue, well, maybe the runner I'll be betting on is much worse than the other. Sure, but maybe he's much better. You don't know, just as you don't know which of the evenly-matched runners in World #1 will win.

I know that it's generally considered rational to take a bet that has positive expectation, but I can't think of any justification for this position, in single cases. The only justification I can see is the argument that, over your entire life, you will have lots of opportunities for taking lots of independent bets of roughly the same size, each with positive expectation, and if you take all of them, the probability is very nearly 100% that you will come out ahead overall. (And we'll just pretend that the very small probability of you coming out behind is precisely zero.)

Almost everything you do should be a positive-expectation bet. Getting a job or an education or a date is a positive-expectation bet.

Caveats about diminishing marginal value and the value of your time aside, yes, I do think it's irrational to pass up a bet which has a net positive expectation if you take it, for most values of rational.

There are no real odds. What's real is that something is the case or that it isn't. If you don't know whether it is or is not the case, you use odds to describe your incomplete knowledge about it, to describe the strength of your belief that it's true. Or, at least, that's the Bayesian view of probability. Which is the only one that makes sense to me.

I don't see the meaning in your claim, "there are no real odds". If there are no real odds, how do casinos make a profit?

In some cases, like fair dice and properly balanced roulette wheels, our knowledge about future events is incomplete but the knowledge we do have is extraordinarily precise. We don't know what result will come up but we know the probability that any given result will come up very precisely.
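For a concrete case, take an even-money bet on an American-style roulette wheel, where 18 of the 38 pockets win for the bettor:

Code:
def bettor_expectation(win_prob, payout=1.0, stake=1.0):
    """Expected profit per unit staked on an even-money style bet."""
    return win_prob * payout - (1 - win_prob) * stake

edge = bettor_expectation(18 / 38)
print(f"bettor expects {edge:.4f} per unit staked")   # about -0.0526

# The casino doesn't know the outcome of any single spin, but it knows that number
# very precisely, which is all it needs to profit over many spins.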

Whereas in the case of the unknown racers our knowledge about future events is nonexistent. We still don't know what the outcome will be, but we don't know what the probability of any given result is either.

I don't know what it means to talk about "the odds of a non-interventionist God existing", as if the odds were some objective property of the world that we could discover. He exists, or he doesn't exist. How is trying to determine the odds of him existing different from just trying to determine whether he exists? (Obviously, if he doesn't intervene, we won't get anywhere in our attempt. But that's not the point. Pretend we're talking about something that we have no information about at the moment but that we could gather information about if we tried.)

In the classical Christian scheme of things, we find out by dying. However up until that point, we're asked to take a gamble on the existence of this non-interventionist God with significant payoffs if we choose correctly. My point is that there is simply no sensible way to gauge the odds of such a bet paying off. Whereas if we were to bet on the fall of a fair die, you could gauge the odds of the bet paying off very precisely.
 
Earth isn't round. It's roughly spherical... a lumpy sort of sphere, actually.

Good scale, though!

I've made up a (not totally original) scale for use in my Science and Nonsense class. I call it the 10/10 scale, pronounced "ten, ten".

On the 10/10 scale,
-10 means definitely not true
-5 means probably not true
0 means hard to tell
+5 means probably true
+10 means definitely true

Most scales go from 1 to 5 or 1 to 10. I think a negative to positive scale is important because it gives a nice symmetry to the situation and it has a definite place for 0.

I'm not proposing this as an Earth-shattering improvement, but as a nice incremental improvement over the previous scales.

It also clarifies that "probably true" is not the same as "hard to tell" is not the same as "probably not true", something we might imagine a t.v. lawyer in a trial badgering a witness towards.

I have students come up with claims that fall in different parts of the scale and ask them to come up with a claim that they put at +10 and think up what sort of evidence would change their mind.

For instance, "the Earth is round" is a claim that most people would place at +10. The evidence needed for most people to change their mind would be, say, being brought to the edge of the world (as seen is Eric the Viking) and looking over that edge.

The idea is that we can at the same time be sure of something and still imagine extraordinary evidence that could change our minds. I think this captures the tentative nature of science pretty well. It's not that we relegate everything to live between probably-not-true and probably-true for fear of being wrong; it's that the more certain we are of something, the better the evidence needs to be to change our minds.

I would like to hear what my fellow skeptics think of the scale, if they've seen it elsewhere and if they have any questions.

-David
 
Why should we accept the Bayesian assumption then, if it makes important information vanish and demands we arbitrarily assign probability values to things we don't understand?

Because it allows us to represent aspects of probability theory and to solve other problems with extreme (and remarkable) accuracy. In fact, Bayesian probability theory is probably the single most powerful investigative tool that modern scientists have at their disposal, across all fields from aeronautics to zoology.

That's like asking "why should we accept written language if it doesn't represent pitch accurately?"
 
