The Ganzfeld Experiments

Ersby said:

This is judging bias. It's a known artifact of psi experiments that judges tend to choose the first option from the choices offered. It's certainly something to take into account.

Response bias, as I said before, is useful only as a post-hoc analysis. If you have completed a ganzfeld experiment and it later transpires that the chosen targets largely consisted of people, water and countryside: images that commonly pop into the mind under these circumstances, then you have a reasonable case for suggesting that response bias inflated the hit rate. (And this remains true even if each person does only one trial.)

This happened in Study 302, and Bem acknowledged it. So while the hit rate expected by chance appeared to be 25%, the way the targets actually fell, 34% was the hit rate expected by chance.

You're misunderstanding the problem. The problem with Study 302 was that:

"The experimental design called for this study to continue until each of the clips had served as the target 15 times. Unfortunately, the premature termination of this study at 25 sessions left an imbalance in the frequency with which each clip had served as the target. This means that the high hit rate observed (64%) could well be inflated by response biases"
http://comp9.psych.cornell.edu/dbem/does_psi_exist.html

Since the target selection is completely random, this isn't a problem for studies which have been completed, since "targets largely consist(ing) of people, water and countryside: images that commonly pop into the mind under these circumstances" still have only a 25% chance of being the correct target. Because Study 302 ended prematurely, the researchers couldn't know for sure whether clips which receivers may have a bias towards had served as targets more often than as decoys. That's why Bem did the analysis to adjust for that possibility. Again, this isn't a problem for studies which have been completed.

amherst
 
Since the target selection is completely random, this isn't a problem for studies which have been completed, since "targets largely consist(ing) of people, water and countryside: images that commonly pop into the mind under these circumstances" still have only a 25% chance of being the correct target.

False: even with random selection, these targets can show up more than 25% of the time. If people tend to choose the number "1" and, over a short run, you happen to roll 1 a little more often on a four-sided die, then even though the target selection is random, the chance of scoring above 25% is greater. Larger numbers of trials would have to be performed to lessen the influence of the response bias. The small sample sizes in these ganzfeld experiments can easily account for the slightly better-than-chance results.
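A quick simulation makes the point concrete. This is a minimal sketch in Python, under assumptions of my own: a receiver who always guesses the same one of four clips, targets drawn uniformly at random, and an arbitrary 33% threshold for an "impressive" hit rate:

```python
import random

# A receiver who always guesses the same one of four clips; targets
# are drawn uniformly at random. In a short run the favoured clip can
# come up as the target well over a quarter of the time purely by luck.
def chance_of_big_hit_rate(sessions, trials=20_000, threshold=1/3):
    count = 0
    for _ in range(trials):
        hits = sum(random.randrange(4) == 0 for _ in range(sessions))
        count += hits / sessions >= threshold
    return count / trials

print(chance_of_big_hit_rate(25))    # roughly 0.15: 33%+ hit rates come easily
print(chance_of_big_hit_rate(1000))  # essentially 0: long runs wash the luck out
```

The bias concentrates the guesses on one clip, so run-to-run luck in the target draw feeds straight into the hit rate; only large trial counts dilute it.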

That's why Bem did the analysis to adjust for that possibility. Again, this isn't a problem for studies which have been completed

I doubt that his adjustments are reliable. Why don't they just re-do the experiment with a larger sample?!
 
thaiboxerken said:
False: even with random selection, these targets can show up more than 25% of the time. If people tend to choose the number "1" and, over a short run, you happen to roll 1 a little more often on a four-sided die, then even though the target selection is random, the chance of scoring above 25% is greater. Larger numbers of trials would have to be performed to lessen the influence of the response bias. The small sample sizes in these ganzfeld experiments can easily account for the slightly better-than-chance results.
Again:
"In the 10 basic autoganzfeld experiments, 160 film clips were sampled for a total of 329 sessions; accordingly, a particular clip would be expected to appear as the target in only about 2 sessions. This low expected frequency means that it is not possible to statistically assess the randomness of the actual distribution observed. Accordingly, Honorton et al. (1990) ran several large-scale control series to test the output of the random number generator. These control series confirmed that it was providing a uniform distribution of values through the full target range. Statistical tests that could legitimately be performed on the actual frequencies observed confirmed that targets were, on average, selected uniformly from among the four film clips within each judging set and that the four possible judging sequences were uniformly distributed across the sessions."
http://comp9.psych.cornell.edu/dbem/response_to_hyman.html

For the 329 auto-ganzfeld sessions reported in the article "a particular clip would be expected to appear as the target in only about 2 sessions." If you think the randomization is faulty then that's another matter, but to think that certain clips are actually likely to appear as targets more frequently over this many sessions, when the randomization is sound, is a baseless criticism.
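For what it's worth, the within-set uniformity test described there is easy to reproduce in outline. A minimal sketch, assuming the data reduce to one target position (0-3) per session; the chi-square test comes from scipy:

```python
import random
from collections import Counter
from scipy.stats import chisquare

# Null hypothesis: each of the four positions in a judging set is
# equally likely to hold the target.
def uniformity_check(target_positions):
    counts = Counter(target_positions)
    observed = [counts.get(i, 0) for i in range(4)]
    stat, p = chisquare(observed)  # uniform expected frequencies by default
    return observed, p

# 329 simulated sessions with a sound RNG, standing in for the real data
positions = [random.randrange(4) for _ in range(329)]
print(uniformity_check(positions))  # p comfortably above .05 for a sound RNG
```

On real data the interesting output is the p-value for the 329 actual target positions, not the simulated ones used here.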
I doubt that his adjustments are reliable. Why don't they just re-do the experiment with a larger sample?!
Of course you doubt it, and I'm sure you haven't even glanced at it. Why don't you go read it and then specifically explain how you think his analysis was faulty?

amherst

PS: The experiments have been on-going. As I've already mentioned before, a 1997 meta-analysis of the 2,549 ganzfeld sessions which had been reported up to that time revealed a hit rate of 33.2%. The odds against chance of this happening are "...beyond a million billion to one." (Radin, 1997)
 
If you think the randomization is faulty then that's another matter, but to think that certain clips are actually likely to appear as targets more frequently over this many sessions, when the randomization is sound, is a baseless criticism.

Then it's not truly random, is it? Truly random sequences will still produce clusters of repeated numbers at some point. Roll a six-sided die and record about 50 rolls. Now break them up into groups of five. You'll notice that within each group of five, some numbers show up more often than others.
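That claim is easy to check without touching a die; a throwaway sketch:

```python
import random
from collections import Counter

# 50 fair rolls of a six-sided die, viewed in blocks of five: most
# blocks contain repeats, and no block of five can contain all six faces.
rolls = [random.randint(1, 6) for _ in range(50)]
for i in range(0, 50, 5):
    block = rolls[i:i + 5]
    print(block, dict(Counter(block)))
```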
 
T'ai Chi said:


It is what is expected by chance if there is no psi. Assumptions, in the form of hypotheses, are a part of science, Ken.

And that is an assumption that all pictures in the set have an equal likelihood of being picked, which is what I am critiquing.
 
In order to determine what the null hypothesis chance results are, you have to have a model of the process you're undertaking, then develop a statistical model that fits the process. It's a tad naive to say that because you have four targets to choose from, chance results are 25%. It may be correct, but it is naive.

T'ai said:
It is what is expected by chance if there is no psi. Assumptions, in the form of hypotheses, are a part of science, Ken.
It is what is expected by chance if there are no effects that interfere with the naive assumption of 25%. Psi is one possible effect.

That said, I can't think of any mundane effects that would bias the results, assuming that the method of presenting the four targets to the judge does not include any clues.

Edited to add: And response bias has been eliminated as a possible problem.

Is it really asking too much for someone to post a link to the meta-analysis? Is it the "Does Psi Exist?" article?

~~ Paul
 
You know, if these ganzfeld experiments are so replicable, they present the perfect opportunity to try to determine some of the confounding factors.

Apparently, much deviation from the standard protocol kills the results:

http://comp9.psych.cornell.edu/dbem/Updating_Ganzfeld.pdf

So, it's time to begin a systematic search for exactly what deviations cause the problems. This would uncover some fascinating evidence about what exactly is going on.

~~ Paul
 
amherst said:

You're misunderstanding the problem. The problem with Study 302 was that:

"The experimental design called for this study to continue until each of the clips had served as the target 15 times. Unfortunately, the premature termination of this study at 25 sessions left an imbalance in the frequency with which each clip had served as the target. This means that the high hit rate observed (64%) could well be inflated by response biases"
http://comp9.psych.cornell.edu/dbem/does_psi_exist.html

Since the target selection is completely random, this isn't a problem for studies which have been completed, since "targets largely consist(ing) of people, water and countryside: images that commonly pop into the mind under these circumstances" still have only a 25% chance of being the correct target. Because Study 302 ended prematurely, the researchers couldn't know for sure whether clips which receivers may have a bias towards had served as targets more often than as decoys. That's why Bem did the analysis to adjust for that possibility. Again, this isn't a problem for studies which have been completed.

amherst

I understood the problem with Study 302 perfectly, as the quote demonstrates.

The idea that a picture will be chosen 25% of the time is true for those experiments that haven't happened yet. For those that have, there is the opportunity to see whether the random selection just so happened to reflect response bias. This is what happened in Study 302, and that is why it reads "This means that the high hit rate observed (64%) could well be inflated by response biases".

Take an extreme example. Imagine a ganzfeld experiment with a target set where the receiver knew that water appeared in the possible targets 50% of the time. According to chance, even if he talked about water every time, he'd still get a 25% hit rate. But if (and this is not impossible, by any means: witness the over-representation of Set 20 in the PRL experiments) the water-based pictures happened to be chosen as targets 60% of the time, his hit rate expected by chance would rise to 29%.
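Ersby's numbers check out under the simplest reading of his example. Here is a Monte Carlo sketch built on assumptions of mine: 4-clip judging sets, each decoy water-themed with probability one half, and a judge who, faced with all-water mentation, picks at random among whatever water clips the set contains:

```python
import random

def hit_rate(p_target_is_water, sessions=1_000_000):
    hits = 0
    for _ in range(sessions):
        target_is_water = random.random() < p_target_is_water
        water_decoys = sum(random.random() < 0.5 for _ in range(3))
        n_water = target_is_water + water_decoys
        if n_water:
            # judge picks uniformly among the water clips in the set
            hits += target_is_water and random.random() < 1 / n_water
        else:
            hits += random.random() < 0.25  # no water anywhere: blind guess
    return hits / sessions

print(hit_rate(0.5))  # ~0.25: balanced targets, the bias does no harm
print(hit_rate(0.6))  # ~0.29: water over-represented as target
```

The first result is the point amherst keeps making: with balanced targets the bias is harmless. The second is Ersby's: once the realized target distribution tilts towards the biased theme, the chance baseline moves with it.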
 
First, let me make a correction. When I said

The idea that a picture will be chosen 25% of the time is true for those experiments that haven't happened yet.

I was referring to Study 302, which had a target pool of just four images. It looks like I’m referring to all experiments. I’m not. It would have been better if I’d said “The idea that a certain feature of a picture will be in the target 25% of the time is true for those experiments…” etc.

Anyhoo, ever the skeptic, I decided to trawl through my hard drive to see if I could come up with any other examples of response bias (whether it increased results or not). As it turns out, apart from the woeful Farsight Institute, the only person to detail receiver choices is Prof. Bierman.

Looking at the results from series 3-6, water imagery does frequently crop up in mentation notes, even when psi doesn't appear to be present (series 4, 5 and 6 scored at chance). The tidal wave clip, used throughout, is usually the second most popular. The most popular across all series was a film clip of a horse. This surprised me, I'll admit, although since the clip is from a commercial and is described by Bierman as representing "freedom", I think there's half a chance that countryside is visible in the background. The bias towards choosing people is less pronounced. In series 4 the vote is split, as it were, between a clip from the film JFK and The Beatles. Add the two together, and the total means that "people" were chosen more than the tidal wave or the horse. In series 3, 5 and 6, however, the tendency lessened somewhat and the "people" option is a poor third choice.

As it turns out, the random selection meant that the favoured targets were not over-represented in the overall scheme of things. For example, in series 5 and 6, horses were a popular choice for the receivers AND one of the most common targets, BUT the horse clip was a target only 1/8th of the time, so it wasn't frequent enough to affect results.

Nevertheless, it demonstrates that water imagery is fairly prevalent in ganzfeld situations. Possibly the countryside too, though I couldn't say for sure without seeing the actual horse clip. My previous assertion about "people" as subject matter has taken a knock, however.

Having said all that, I think we (the sceptics, that is) are all dancing round the same handbag. We’ve come to the conclusion that certain patterns can influence the hit rate. But we can only find these post hoc, which is a shame. I like my theories to have some predictive qualities to them.

I think response bias has been overlooked in parapsychology. And of course, it's much more subtle than choosing just one theme, as in my example earlier. A "typical" RV session would make note of a number of things, not just one.
 
Ersby said:
Having said all that, I think we (the sceptics, that is) are all dancing round the same handbag. We’ve come to the conclusion that certain patterns can influence the hit rate. But we can only find these post hoc, which is a shame. I like my theories to have some predictive qualities to them.

I think response bias has been overlooked in parapsychology. And of course, it's much more subtle than choosing just one theme, as in my example earlier. A "typical" RV session would make note of a number of things, not just one.

If receivers weren't using psi and the results were just due to response bias, a clip the receivers have a preference for should be rated as the target just as often when it is a decoy as when it is the actual target. But there is a significant difference, so this isn't the case.

Bem writes:
"The second analysis treats each film clip as its own control by comparing the proportion of times it was rated as the target when it actually was the target and the proportion of times it was rated as the target when it was one of the decoys. This procedure automatically cancels out any content-related target preferences that receivers (or experimenters) might have. First, I calculated these two proportions for every clip and then averaged them across the four clips within each judging set. The results show that across the 40 judging sets, clips were rated as targets significantly more frequently when they were targets than when they were decoys: 29% vs. 22%, paired t(39) = 2.03, p = .025, one-tailed. Both of these analyses indicate that the observed psi effect cannot be attributed to the conjunction of unequal target distributions and content-related response biases."


amherst
 
If receivers weren't using psi and the results were just due to response bias, a clip the receivers have a preference for should be rated as the target just as often when it is a decoy as when it is the actual target. But there is a significant difference, so this isn't the case.


And yet, response bias was "adjusted" for. How can you say response bias isn't there, when it clearly was stated to be there?

"This means that the high hit rate observed (64%) could well be inflated by response biases"
 
It's interesting how the descriptions given by the receiver aren't even close to what the actual picture is. It is a huge stretch to call these "hits". Thanks for showing me that the experiments are definitely BS.
 
amherst said:
Some interesting examples of ganzfeld hits:
http://www.psiexplorer.com/ganz7.htm

amherst

Extremely interesting. I think that any rational person has to concede that the receivers' impressions here couldn't simply be due to chance. One would have to either concede the reality of anomalous cognition, or argue that some sort of cheating has taken place. Does everyone agree with this?

If people don't, then they're incorrigibly stupid and I don't intend to waste any more time on this thread.
 
Ian said:
Extremely interesting. I think that any rational person has to concede that the receivers' impressions here couldn't simply be due to chance. One would have to either concede the reality of anomalous cognition, or argue that some sort of cheating has taken place. Does everyone agree with this?
How the hell can we possibly know, Ian, without calculating the probabilities of such "hits" due to chance? You're relying on your gut instinct, again, and then calling people stupid who don't agree with your gut.

Okay, folks, how can we calculate the probabilities of these hits due to chance? Or run baseline experiments to determine the probabilities empirically? Come on, now, don't hold back.

~~ Paul
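One partial answer to Paul's question, offered as a sketch: for overall hit rates the protocol itself fixes the calculation, since a blind judge choosing among four clips is an independent 1-in-4 guess under the null. This takes the 25% baseline as given, which is exactly what the thread is arguing about; the 106-of-329 figure corresponds to Bem and Honorton's reported 32.2% hit rate over the 329 autoganzfeld sessions:

```python
from scipy.stats import binom

# P(at least `hits` successes in `sessions` independent 1-in-4 guesses)
def p_by_chance(hits, sessions, p=0.25):
    return binom.sf(hits - 1, sessions, p)

print(p_by_chance(106, 329))  # ~0.002: the autoganzfeld hit rate by chance
```

What this cannot capture is the subjective impressiveness of individual transcripts, which is what the psiexplorer examples trade on; only the blind forced-choice judging makes the numbers meaningful.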
 
thaiboxerken said:
If receivers weren't using psi and the results were just due to response bias, a clip the receivers have a preference for should be rated as the target just as often when it is a decoy as when it is the actual target. But there is a significant difference, so this isn't the case.


And yet, response bias was "adjusted" for. How can you say response bias isn't there, when it clearly was stated to be there?

"This means that the high hit rate observed (64%) could well be inflated by response biases"
The study the quote was referring to was Study 302, which was prematurely ended and therefore "...left an imbalance in the frequency with which each clip had served as the target." Because of this, Bem wrote that the hit rate "...could well be inflated by response biases." That is why he performed the initial analyses to adjust for that (real) concern. After this had been carried out, the results were still highly significant.
http://comp9.psych.cornell.edu/dbem/does_psi_exist.html#Table 2

Later, because of Hyman's concerns about randomization, Bem performed the same two analyses on the entire database. "Both these analyses indicate that the observed psi effect cannot be attributed to the conjunction of unequal target distributions and content-related response biases."
http://comp9.psych.cornell.edu/dbem/response_to_hyman.html

For the sake of completeness and clarity, I reprint Bem's own description of the situation:

"In the original article, I discussed Study 302, which used a single 4-clip target set. In particular, I actually did two analyses that eliminated the need for the assumption that the chance baseline is .25. In the first analysis, the proportion of times that each clip (in the target set of 4) appeared in the study was multiplied by the proportion of times that receivers rated it as the target. This product yields the probability that there would have been a "hit" on that target "in the absence of psi"... When summed across the four clips in the set, this yields an EMPIRICAL (rather than an assumed theoretical) chance baseline against which the actual hit rate observed for that target set can be compared. This showed that a psi effect (i.e., a non-chance, presumably non-artifactual effect) did occur.

In the second analysis, each clip in the set served as its own control. The proportion of times receivers judged a clip to be the target when it WAS the target was compared with the proportion of times receivers judged a clip to be the target when it WAS NOT the target. Again, this finesses the need to assume a theoretical chance baseline of any kind. Again, a psi effect was shown to be present.

In response to Hymans critique--published in the same issue of the journal as the original article and sent to anyone who asked me for reprints--I performed the same two analyses on all 40 sets, all 160 targets. This is reported in my "Response to Hyman," which also appeared in the same issue and was sent to all reprint requesters. Both analyses confirmed the presence of a psi (i.e., nonchance) effect--across the entire database and all studies.

I believe that these analyses settle the chance-baseline "control" issues for demonstrating an anamolous information transfer effect. (I myself find it very satisfying to use analyses that dispense with any kind of assumed theoretical baseline.)"
http://groups.google.com/groups?q=D...2ljn9vINNrfo@newsstand.cit.cornell.edu&rnum=6
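The first of those analyses is simple enough to show in a few lines. A sketch with made-up proportions, chosen to mimic a cut-short study; they are not Study 302's actual figures:

```python
import numpy as np

# Empirical chance baseline: a clip is hit "in the absence of psi" when
# it happens to be the target and happens to be the one rated as target.
def empirical_baseline(p_was_target, p_was_rated):
    return float(np.dot(p_was_target, p_was_rated))

p_was_target = [0.40, 0.28, 0.20, 0.12]  # imbalanced target frequencies
p_was_rated  = [0.40, 0.28, 0.20, 0.12]  # receiver preferences happen to match
print(empirical_baseline(p_was_target, p_was_rated))  # ~0.29, not 0.25
```

The observed hit rate is then compared against this empirical baseline instead of the assumed 25%, which is why premature termination stops being fatal to the analysis.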

amherst
 
thaiboxerken said:
It's interesting how the descriptions given by the receiver aren't even close to what the actual picture is. It is a huge stretch to call these "hits". Thanks for showing me that the experiments are definitely BS.
When a picture of George Washington was the target the receiver said:

"Lincoln Memorial... And Abraham Lincoln sitting there... ...the 4th of July... All kinds of fireworks... Valley Forge... bombs bursting in the air... Francis Scott Key... Charleston..."

But of course a President and Independence Day "aren't even close to what the actual picture is"!

When a clip of a man spitting fire was the target the receiver said:

"I find flames again...The lips I see are bright red, reminding me of the flame imagery earlier..."

But of course flames "aren't even close to what the actual picture is"!

When a picture of an African mask was the target the receiver said:

"A mask ... like a ceremonial mask"

But of course a mask "(isn't) even close to what the actual picture is"!

When a picture of Christ being crucified was the target the receiver said:

"Jesus...Prayer...Funeral... Death."

But of course Jesus "(isn't) even close to what the actual picture is"!

When a clip of an eagle in flight was the target the receiver said:

"...a big huge huge eagle, eagle wings spread out."

But of course an eagle "(isn't) even close to what the actual picture is"!


amherst

PS: http://www.expandmind.com/CogDiss.html
 
amherst said:
I'll say it again, if a "judge" is blind to the correct target, since there are 4 targets displayed at the end of a session, he has a 25% chance of choosing correctly. Images which a receiver mentions have a 25% chance of correlating in some way with the actual target. This is not difficult to understand. The only reason you can't comprehend it is because (in my opinion) you are too afraid to.

amherst
Rubbish. Personal bias plays a part in this. Target overlap plays a part in this. Feedback plays a part in this. There are recorded instances of sensory and data leakage (Blackmore). Your insistence that precisely a 25% score is expected by chance indicates to me that you are trying to make a case where there probably is none.

Yes, the results are interesting. Not sensational, just interesting, because there are too many questions raised about how they were obtained. I agree: more research is required. I would recommend that the contentious issues raised here be eliminated by using simple, non-overlapping images, such as Zener symbols. And I would also take up Blackmore's recommendations regarding properly random target selection without replacement, and no feedback for the receiver.

As to being afraid to allow for anything: as a skeptic, I'm ALWAYS prepared to allow for the possibility of psi existing. Once, when I was younger, I thought that it did indeed exist. But what I have found since then is that the evidence that it exists is not there. Patently and clearly not there. And the researchers who said they had evidence, and who I had expected had done robust work to back up their claims, clearly had not: the gaping holes were obvious and quite disappointing. This sort of stuff isn't convincing me otherwise.
 
Amherst,

Please tell me in some intelligent way why a simple card test is not the BEST way to nail down whether "psi" exists?

The sender sits in one room with a shuffled deck of circle, square, triangle and cross cards and chooses one at random.

He then sends as hard as he can; he only has to concentrate on a single image.

The receiver then selects one of 4 simple images.

Surely any "noise" from alternative images and thoughts would be INSTANTLY removed.

The clarity of the simple image would surely HELP any “psi” effect.

WHY is this not the best (in fact the ONLY) way to measure any effect? Anything more complex SURELY adds subjectivity and confusion.

Please stop avoiding answering this simple query.

In the end it is the FACT that NO psi shows up in this sort of test that PROVES to sceptics that psi doesn’t exist.

You can waffle on about the ganzfeld all you like, but you NEED to convince me WHY the simple test is wrong when it makes the most sense in this sort of examination.
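For what it's worth, the statistics of the card test described above are trivially clean, which is part of the point. A sketch of the hit counts required for significance, treating each guess as an independent 1-in-4 choice (this ignores deck-composition effects, and the p < .001 threshold is illustrative):

```python
from scipy.stats import binom

# Smallest hit count whose binomial tail probability beats alpha,
# for a four-symbol deck guessed at chance (p = 0.25).
def hits_needed(trials, p=0.25, alpha=0.001):
    k = int(trials * p)
    while binom.sf(k - 1, trials, p) > alpha:
        k += 1
    return k

for n in (100, 1000, 10000):
    k = hits_needed(n)
    print(f"{n} trials: {k} hits ({k / n:.1%}) would be significant")
```

An effect of the size claimed for the ganzfeld, roughly a third instead of a quarter, would stand out in a thousand card trials without any judging or post-hoc argument about baselines.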
 
Aussie Thinker said:
Amherst,

Please tell me in some intelligent way why a simple card test is not the BEST way to nail down whether "psi" exists?

You can waffle on about the ganzfeld all you like, but you NEED to convince me WHY the simple test is wrong when it makes the most sense in this sort of examination.

In the world of psi, elaboration, it seems, is the only way to provide the equivocal edge.

Is there any clear, unequivocal demonstration of any paranormal effect?
 
