The Ganzfeld Experiments

amherst said:
The criterion I'm talking about is the standardness criterion. Unlike the Milton/Wiseman meta-analysis, the studies from the '94 autoganzfeld meta-analysis were homogeneous. They all followed the standard criteria set out by Bem and Honorton in their paper. This standardness criterion is what the blind Cornell grad students used to rate the standardness of the Milton/Wiseman database. And there was no "sorting the experiments for inclusion in the m-a." Everything Honorton did at PRL was reported in the original paper.

amherst

I'll keep this brief. It really shouldn't have dragged on so long.

Standardness was not used as a criterion in the Honorton meta-analysis of ganzfeld studies from '73-'85.

I don't consider PRL's studies to be a meta-analysis.
 
amherst said:
It's not at all clear to me that he is missing data. What data do you think he is missing? Show me the studies he failed to include.

Why do you still stand by that assertion? All the studies from Bierman, Wezelman, and Broughton were included in Radin's analysis. It is really becoming hard to understand what your criticisms are.

amherst

Since I don't have the book, I cannot say for certain which studies are missing. My conclusion was drawn from the fact that Radin's m-a (at 2,549) seems too small, since Honorton's m-a (762) plus the PRL results (355) plus the most recent (1,661) are, by themselves, larger than Radin's. Therefore Radin's cannot be exhaustive.
 
Ersby said:


Since I don't have the book, I cannot say for certain which studies are missing. My conclusion was drawn from the fact that Radin's m-a (at 2,549) seems too small, since Honorton's m-a (762) plus the PRL results (355) plus the most recent (1,661) are, by themselves, larger than Radin's. Therefore Radin's cannot be exhaustive.
Radin writes:
"Figure 5.4 summarizes all replication attempts as of early 1997."

If you go back and look, you'll see that only about half (22 out of 40, to be exact) of the studies included in the Milton/Wiseman meta-analysis were published before 1997. These 22 studies account for only 770 of the sessions. So if you add those to the 355 at PRL, plus the 762 from the original ganzfeld studies, you get 1,887, which fits with Radin's analysis.
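
A quick check of the arithmetic, using only the figures quoted in this thread (a minimal sketch; the counts are taken as reported in the posts, not independently verified):

Code (Python):
# Session counts as quoted in this thread.
honorton_1985 = 762            # Honorton's 1985 meta-analysis
prl_autoganzfeld = 355         # PRL autoganzfeld sessions
milton_wiseman_all = 1661      # all 40 Milton/Wiseman studies
milton_wiseman_pre_1997 = 770  # the 22 studies published before 1997
radin_total = 2549             # sessions behind Radin's figure 5.4

print(honorton_1985 + prl_autoganzfeld + milton_wiseman_all)       # 2778 -- larger than 2549 (Ersby's point)
print(honorton_1985 + prl_autoganzfeld + milton_wiseman_pre_1997)  # 1887 -- fits within 2549 (the point above)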

amherst
 
In order for Radin's figures of 289 sessions for Edinburgh and 590 sessions for Durham to be correct, he'd have to know about the work by Dalton (Edinburgh) and Broughton (Durham), both of which were published in 1997. So it seems possible that Radin had results from these before they were published. So some of those 22 (perhaps a good percentage, since they also include Palmer's work - published 1998) are included in Radin's figures.

So the sums still don't add up.

Oh, and in case I don't get a chance later, I'll mention now that I'm going on holiday tomorrow, so won't be able to respond until Monday-ish.
 
Ersby said:
So some of those 22 (perhaps a good percentage, since they also include Palmer's work - published 1998) are included in Radin's figures.

Gah! I meant Parker, not Palmer.

I need that holiday.
 
I'm just finishing Susan Blackmore's In Search of the Light, her book about her adventures as a parapsychologist. A large part of the intrigue in the book centers around the ganzfeld experiments performed at Carl Sargent's lab in Cambridge, UK. She spent a week in his lab observing the experiments, and uncovered some, shall we say, shenanigans involving the random selection of targets. This was before autoganzfeld. He would not let her back in his lab after that week.

Sargent's lab contributed nine of the 28 studies that Honorton reanalyzed after Hyman presented his commentary on the original 42 studies, rating their randomization method as "adequate."

~~ Paul
 
Ersby said:
In order for Radin's figures of 289 sessions for Edinburgh and 590 sessions for Durham to be correct, he'd have to know about the work by Dalton (Edinburgh) and Broughton (Durham), both of which were published in 1997. So it seems possible that Radin had results from these before they were published. So some of those 22 (perhaps a good percentage, since they also include Parker's work - published 1998) are included in Radin's figures.

So the sums still don't add up.

Oh, and in case I don't get a chance later, I'll mention now that I'm going on holiday tomorrow, so won't be able to respond until Monday-ish.
Lots of things:
1. Radin writes that "The Edinburgh experiments conducted from 1993 to 1996 and still ongoing, consisted of five published reports and 289 sessions ..." So we know that the (mid/late?) 1997 (highly successful) Dalton study of 128 trials was not included in Radin's meta-analysis.

On a side note: I think you may be under the false impression that the Milton/Wiseman paper was itself an all-inclusive meta-analysis of the replications done after PRL. This is not the case, and I don't think it was intended to be. For instance, the Cornell replications are completely missing from their database.

2. The 1997 Parker/Gothenburg experiments are:

Parker et al. (1997), Study 1: 30 trials
Parker et al. (1997), Study 2: 30 trials
Parker et al. (1997), Study 3: 30 trials

Since 90 trials is exactly what Radin lists on his graph for Gothenburg/Parker, we can be sure that these studies were published in early '97 and therefore included in Radin's analysis. Unless you can show that Parker/Gothenburg published the results of (any) ganzfeld experiments before 1997, your criticism is baseless.

3. Let's say (since we don't know the exact date) that all of the 1997 Durham/Broughton experiments listed in the Milton/Wiseman meta-analysis were actually published in early 1997 and therefore included in Radin's meta-analysis. These studies would be:

Broughton & Alexander (1997), First Timers Series 1: 50 trials
Broughton & Alexander (1997), First Timers Series 2: 50 trials
Broughton & Alexander (1997), Emotionally Close Series: 51 trials
Broughton & Alexander (1997), Clairvoyance Series: 50 trials
Broughton & Alexander (1997), General Series: 8 trials

If you subtract these 209 sessions from the 590 sessions Radin listed in his meta-analysis, you get 381. Unless you can show that Durham/Broughton published the results of more than 381 sessions before 1997, your criticism of Radin not being all-inclusive is, again, baseless.

So to sum up, the sums add up.
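
The trial counts listed in points 2 and 3 above do add up as claimed (again, a quick check using only the numbers given in this post):

Code (Python):
# Trial counts as listed above for the 1997 studies.
parker_1997 = [30, 30, 30]            # Parker et al. (1997), Studies 1-3
broughton_1997 = [50, 50, 51, 50, 8]  # Broughton & Alexander (1997) series

print(sum(parker_1997))           # 90 -- matches Radin's Gothenburg/Parker figure
print(sum(broughton_1997))        # 209
print(590 - sum(broughton_1997))  # 381 pre-1997 Durham sessions implied by Radin's 590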


amherst
 
Paul C. Anagnostopoulos said:
I'm just finishing Susan Blackmore's In Search of the Light, her book about her adventures as a parapsychologist. A large part of the intrigue in the book centers around the ganzfeld experiments performed at Carl Sargent's lab in Cambridge, UK. She spent a week in his lab observing the experiments, and uncovered some, shall we say, shenanigans involving the random selection of targets. This was before autoganzfeld. He would not let her back in his lab after that week.

Sargent's lab contributed nine of the 28 studies that Honorton reanalyzed after Hyman presented his commentary on the original 42 studies, rating their randomization method as "adequate."

~~ Paul
A few things that shed light on the Sargent ganzfeld situation:

1. pg.79 of The Conscious Universe:
"To address the concern about whether independent replications had been achieved, Honorton calculated the experimental outcomes for each laboratory separately. Significantly positive outcomes were reported by six of the ten labs, and the combined score across the ten laboratories still resulted in odds against chance of about a billion to one. This showed that no one lab was responsible for the positive results; they appeared across-the-board, even from labs reporting only a few experiments. To examine further the possibility that the two most prolific labs were responsible for the strong odds against chance, Honorton recalculated the results after excluding the studies that he and Sargent had reported. The resulting odds against chance were still ten thousand to one. Thus, the effect did not depend on just one or two labs; it had been successfully replicated by eight other laboratories."

2. pg. 220:
"Next Begley repeated another common criticism:

'Of the 28 studies Honorton analyzed in 1985, nine came from a lab where one-time believer Susan Blackmore of the University of the West of England had scrutinized the experiments. The results are "clearly marred," she says, by "accidental errors" in which the experimenter might have known the target and prompted the receiver to choose it.'

What Begley fails to report is that after Blackmore's allegedly "marred" studies were eliminated from the meta-analysis, the overall hit rate in the remaining studies remained exactly the same as before. In other words, Blackmore's criticism was tested and it did not explain away the ganzfeld results. It is also important to note that Blackmore never actually demonstrated that the flaw existed."

3. And here's what Blackmore herself said after the autoganzfeld results had been published:

"In this new paper Bem and Honorton claim that these 'stringent standards' have been met - a view with which I widely concur. As the experiments are presented here, and in the previous 1990 paper, there are no obvious methodological flaws. The results are highly significant and the effect size is comparable to that found in previous ganzfeld studies."
Blackmore, S. (1994). "Psi in Psychology," Skeptical Inquirer, 18, 351-355.

So basically, what all this adds up to is that Sargent's experiments, "marred" or not, are irrelevant to the success of the ganzfeld database.

amherst
 
amherst said:
What Begley fails to report is that after Blackmore's allegedly "marred" studies were eliminated from the meta-analysis, the overall hit rate in the remaining studies remained exactly the same as before. In other words, Blackmore's criticism was tested and it did not explain away the ganzfeld results. It is also important to note that Blackmore never actually demonstrated that the flaw existed."
Blackmore's concern, voiced in her book, was this: if she noticed these shenanigans in one lab, what about others? How widespread was the messy protocol? And why did Honorton rate Sargent's randomization as adequate? Possibly because he didn't know what Sargent had done. She laments not being able to shed more light on the flaw, but she wasn't allowed back into Sargent's lab. And apparently Sargent wasn't particularly forthcoming with his data when other people requested it.

However, she did redo Honorton's calculations, rating Sargent's studies as flawed for randomization, and found a correlation between randomization and z-score, as did Hyman.
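
For anyone wondering what "a correlation between randomization and z-score" means in practice, here is a minimal sketch. The ratings and z-scores below are purely hypothetical placeholders, not Blackmore's or Hyman's actual data; the point is only the kind of calculation involved (a point-biserial correlation between a 0/1 "adequate randomization" flag and each study's outcome):

Code (Python):
import math

# Hypothetical data for illustration only -- NOT the real flaw ratings or z-scores.
adequate_randomization = [1, 1, 0, 0, 1, 0, 1, 0]
z_scores = [0.4, 0.2, 1.8, 2.1, -0.3, 1.5, 0.6, 2.4]

def pearson_r(xs, ys):
    """Pearson correlation; with a 0/1 variable this is the point-biserial r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A clearly negative r would mean the better-randomized studies tended to score
# lower, which is the pattern Blackmore and Hyman reported.
print(round(pearson_r(adequate_randomization, z_scores), 2))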

To be repetitious, the problem with psi experiments is that an experiment is about its protocol and statistics, not about an observable event. So perfection is required in the protocol and statistics to be sure there is no mundane explanation. And perfection is impossible.

~~ Paul
 
Paul C. Anagnostopoulos said:

To be repetitious, the problem with psi experiments is that an experiment is about its protocol and statistics, not about an observable event. So perfection is required in the protocol and statistics to be sure there is no mundane explanation. And perfection is impossible.

~~ Paul

What is observable is that the psychic, or whoever, gets more hits than is expected by chance.
 
T'ai said:
What is observable is that the psychic, or whoever, gets more hits than is expected by chance.
Care to get together with Ian and do the math on this? Your assertion is absolutely nothing more than wishful thinking without the math.

Why do people say crapola like this?

~~ Paul
 
T'ai Chi said:


What is observable is that the psychic, or whoever, gets more hits than is expected by chance.
...but only by screwing up the math first.


[edit: whoops - Paul beat me]
 
Zep said:
...but only by screwing up the math first.


[edit: whoops - Paul beat me]

You've asserted that; now provide evidence for it. You're welcome to list specific occurrences where you believe the math to be screwed up.
 
Paul C. Anagnostopoulos said:
Care to get together with Ian and do the math on this?

Irrelevant.

Paul C. Anagnostopoulos said:
Why do people say crapola like this?

~~ Paul

Because you said the "crapola" that there is nothing observable, when in fact their 'psychic skills' (what the statistics are measuring, if psi exists) are what is being observed.
 
Oh T'ai, come on! Show us one example where you've done the math to compute the probability of a psychic getting a specific sort of hit, then done an exhaustive search to find every occurrence of that hit throughout the history of psychics, then shown that the probability has been overcome.

The very idea of doing this is ludicrous. But that's what psi experiments require.

~~ Paul
 
T'ai said:
Because you said the "crapola" that there is nothing observable, when in fact their 'psychic skills' (what the statistics are measuring, if psi exists) are what is being observed.
Give me one reference to an experiment where the probabilities of chance hits were calculated.

~~ Paul
 
Paul C. Anagnostopoulos said:
Oh T'ai, come on! Show us one example where you've done the math to compute the probability of a psychic getting a specific sort of hit, then done an exhaustive search to find every occurrence of that hit throughout the history of psychics, then shown that the probability has been overcome.


Ganzfeld, auto-Ganzfeld, and RNG experiments come to mind. In these, one knows the probability of getting a hit pretty easily.
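
To make that concrete: in the standard ganzfeld design the judge picks one target out of four, so the chance hit rate is 25%, and the probability of doing at least that well by luck is an ordinary binomial tail. A minimal sketch (the 15-hits-in-30-trials figure is illustrative, not taken from any particular study):

Code (Python):
from math import comb

def p_at_least(k, n, p=0.25):
    """Probability of k or more hits in n trials when each hit has chance p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative only: 15 hits in 30 four-choice trials (chance expectation is 7.5 hits).
print(p_at_least(15, 30))  # roughly 0.003, i.e. about 0.3%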
 
