
PEAR and Selection Bias

Robin

Those behind the PEAR project seem inordinately fond of one particular graph - it is the centrepiece of their website, it is the background image for many of the web pages and it is the background image of founder Jahn's web page:
[Image: covr3.gif]

Here is its more technical format:
[Image: pearmeta.PNG.jpg]

This is the cumulative z score derived from about 140 hours of experimentation over 12 years (experiment described here).

So the obvious issue that springs to mind is that they may have included more data from sessions that favoured their hypothesis than from sessions that did not.

I simulated this procedure using a random number generator, but left out a few unfavourable results - representing just 1% of all sessions. Here is the result:
[Image: pear.png.jpg]

Which looks remarkably similar. I get the same means and standard deviations too.
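For anyone who wants to try this at home, a minimal sketch of that kind of simulation is below. The session and trial sizes are illustrative assumptions of mine, not the actual parameters I used or PEAR's protocol: generate sessions of fair coin flips, quietly drop the lowest-scoring ~1% of sessions, and track the cumulative z score of what remains.

```python
import math
import random

random.seed(1)

N_SESSIONS = 20_000    # number of sessions (illustrative, not PEAR's)
TRIALS = 200           # bits per session (illustrative)
DROP_FRACTION = 0.01   # fraction of unfavourable sessions omitted

# Score each session: hits (1-bits) out of TRIALS fair flips.
sessions = [sum(random.getrandbits(1) for _ in range(TRIALS))
            for _ in range(N_SESSIONS)]

# Selection bias: the lowest-scoring 1% of sessions never make it
# into the database.
n_drop = int(N_SESSIONS * DROP_FRACTION)
kept = sorted(sessions)[n_drop:]
random.shuffle(kept)   # session order is irrelevant to the final z

# Cumulative z score under the fair-coin null after each session:
# z = (hits - n/2) / sqrt(n/4), where n is the bits seen so far.
hits = 0
bits = 0
z_curve = []
for score in kept:
    hits += score
    bits += TRIALS
    z_curve.append((hits - bits / 2) / math.sqrt(bits / 4))

print(f"final cumulative z: {z_curve[-1]:.2f}")
```

Plot `z_curve` against session number and you get the same slow upward drift as the PEAR graph, from nothing but a fair random source and a little quiet pruning.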

You don't even have to allege misconduct: this could be consistent with experimenters feeling more motivated to record favourable data than unfavourable, or with subjects occasionally seeing the results not go as expected and saying things like "I wasn't ready" or "I pressed the wrong button".

Of course you can also get the same result by genuinely flipping one bit in every 10,000, which is just what the PEAR people say is happening. But weigh the two competing explanations:

1. A new and mysterious form of energy hitherto unknown to science, or;
2. Occasional lapses in lab discipline over 12 years.

Which is really more likely?
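The "one in 10,000 bits" figure is easy to sanity-check. Here is a back-of-envelope sketch (the bit counts are illustrative assumptions of mine, not figures from the paper): changing one miss into a hit per 10,000 bits shifts the observed hit rate from 0.5 to 0.5001, and under the fair-coin null the z score after N bits is (p̂ − 0.5) · 2 · √N.

```python
import math

def z_from_flips(n_bits: int, flip_rate: float = 1e-4) -> float:
    """Expected z score if flip_rate of all bits are changed from
    misses into hits, under the fair-coin null."""
    p_hat = 0.5 + flip_rate
    return (p_hat - 0.5) * 2 * math.sqrt(n_bits)

# A run of a few hundred million bits is enough for a z near 4:
for n in (10**6, 10**8, 4 * 10**8):
    print(n, round(z_from_flips(n), 2))
```

So either explanation — a one-in-10,000 anomaly or a 1%-of-sessions leak — produces an effect of the right order; the data can't tell them apart.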
 
ISTR that Rhine achieved similar positive results because he threw away the really bad trials as he thought in those cases the subjects were having him on.

YMMV.
 
Robin said:
Which is really more likely?
That's a toughie, but according to the authors of the paper: "The order of the operator intentions is established either by their own choice (volitional protocol) or by random assignment (instructed protocol), and is unalterably recorded in the database manager before the REG is activated by a remote switch. All subsequent data are automatically recorded on-line, printed simultaneously on a permanent strip recorder, and summarized by the operators in a dedicated logbook. Any discrepancy among these redundant records, or any fail-safe indication from the REG or its supporting equipment (both extraordinarily rare), invoke preestablished contingency procedures that preclude inclusion of any fouled data or any possible means of favorable data selection."
 
That's a toughie, but according to the authors of the paper: "Any discrepancy among these redundant records ... preclude inclusion of any fouled data or any possible means of favorable data selection."
You need to just think about that for just a moment.

Please tell me how a procedure could both preclude inclusion of fouled data and yet rule out any possible means of favourable data selection? What if the experimenters decide that unfavourable data is fouled data?

This is just a contradictory claim.
 
This is just a contradictory claim.
Not if this statement is accurate: "Any discrepancy among these redundant records, or any fail-safe indication from the REG or its supporting equipment (both extraordinarily rare), invoke preestablished contingency procedures that preclude inclusion of any fouled data or any possible means of favorable data selection." Note the phrase "preestablished contingency procedures."
 
Note the phrase "preestablished contingency procedures."
How exactly does pre-establishing a procedure prevent it from being followed incorrectly???
 
Rodney said:
How exactly does pre-establishing a procedure prevent it from being followed incorrectly???
You can use that same logic to knock down any experiment whose results bother you.

Well, that and a minuscule effect, easily explained by a tiny bias, where the researchers made their own contributions to the data, on experiments that took place over more than a decade, and which others have been unable to replicate.

Linda
 
You can use that same logic to knock down any experiment whose results bother you.

Conversely, you might allow no possibility of bias, error or incompetence if you do like the result.

Of the following two positions, which is more logically sound?

1.) Assuming the experiment definitely was done 100% efficiently and correctly every time.

2.) Considering the possibility the experiment may or may not have been conducted 100% efficiently every time.

Position number 1 is closed.

Position 2 is open.

Position 2 would lead to peer review, independent replication and further experimentation to try to make sure its conclusions were as accurate as possible.

Position 1 leads nowhere further; it simply concludes.

If your life depended on medication, would you prefer to be taking medication that has gone through the logical process of position 1, or 2?
 
You can use that same logic to knock down any experiment whose results bother you.
Why do you assume the result bothers me?

In any case, what you say is not true: any generally accepted result in science rests on experiments whose results cannot be explained as artifacts of flawed experimental design or practice.

I cannot think of any other area of science that says "trust me, I was really careful" like this; everybody else accepts the discipline of replicability.

And as I said before, if this effect is genuine then it should be trivially easy to demonstrate it with a single replicable experiment.
 