The Ganzfeld Experiments

Paul C. Anagnostopoulos said:



Come on, you believers: At least admit that all you're doing is attaching the label psi to unexplained results. To further define psi as paranormal is a leap of faith. It could just as well be invisible flying donkeys.

~~ Paul

You can call the effect 'IFD' if you want. People who study the phenomena agree that the effect shall be known as the 'Psi Effect'. You have to accept and remember that no known method of action has been found to account for the Ganzfeld. And if you are to assume that each and every last recorded purported 'effect' is due to delusion, fraud or error, in light of so many undebunked accounts, that is an extraordinary claim in itself, and as such requires extraordinary evidence. In short, you need to show, not opine 'wishful thoughts'.

The fact is, the method of action has not been identified, yet it fits so closely with both common human experience and now the QM theories of non-local behaviour that it is more likely that the effect is indeed non-local, and therefore worthy of increased research and funding.
 
amherst said:
So what exactly is your problem with the ganzfeld? As you well know, the analyses done initially by Bem and then furthered by Bierman et al. http://a1162.fmg.uva.nl/~djb/publications/1998/AutoGF_set20effect.pdf show that the results cannot be due to response bias. This strongly suggests that any future successful experiments will not be due to it either. What exactly is your criticism? What do you think explains the results?

amherst

I cannot explain the results of the PRL. I didn't say that response bias does. I was trying to explain that response bias exists: the "25% hit rate expected by chance" may be an over-simplification, despite what you learnt in second grade math. That is all.

To say response bias won't be a problem in the future is a little optimistic, I feel. If it had an effect on the result expected by chance in the past, then why not again?

And no one can deny that the selection of hits you linked to is an incomplete dataset. As such, it's pretty worthless.
 
If you were to look at the overall body of scientific evidence and list the debunking issue next to it, it would start to look something like this:

Exp.No. Debunking Issue

**1 No significance
**2 Poor controls
**3 Poor controls
**4 Wrong tie
**5 Knew the brother of one of the girls in admin
**6 Mirror in bathroom opposite lab building
**7 Too difficult to understand
**8 Um.... errr....
**9 Black magic

[...]
 
I think that Lucianarchy gets the sweeping statement award.

I am unconvinced either way: there hasn't been a demonstrated psi effect, but there also is no definitive proof that there isn't psi.

The matching pictures thing is bogus. As stated before, plenty of protocols could be established to eliminate 'response bias' (a scoring sketch follows the list).

1. A picture should have a list of words that is generated to go with it. For the fire breather: fire, flame, spitting, etc.
2. A threshold for how many match words are needed to count as a hit.
3. Sets are generated to eliminate matches between pictures.
4. The order of pictures in the set, and the target, are randomised.
5. In a control study, the 'response bias' or 'random match' effects can be studied and each picture rated upon that.
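
A minimal sketch in Python of how the scoring in steps 1, 2 and 4 could work; the picture names, keyword lists and threshold are all hypothetical:

[code]
import random

# Step 1: each picture gets a generated keyword list (hypothetical examples).
PICTURES = {
    "fire_breather": {"fire", "flame", "spitting", "heat", "smoke"},
    "waterfall": {"water", "falling", "mist", "river", "rocks"},
    "snow_cabin": {"snow", "cold", "wood", "cabin", "white"},
    "desert_dunes": {"sand", "dry", "dunes", "sun", "wind"},
}

HIT_THRESHOLD = 2  # step 2: keyword matches needed before a session is a hit

def is_hit(mentation_words, target_name):
    """Score a session by keyword overlap between mentation and target."""
    return len(set(mentation_words) & PICTURES[target_name]) >= HIT_THRESHOLD

# Step 4: the target is randomised within the set.
target = random.choice(list(PICTURES))
print(target, is_hit(["water", "mist", "cold", "white"], target))
[/code]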

I would like to believe that psi exists, very much. But what bothers me is this.
1. Sloppy methodology, with very few controls considered.
2. For all the studies that have reportedly been done, we read about the same old studies over and over. Where are the new studies demonstrating the effect?
3. We are told about the 'one in a billion odds', which is just crap, but if someone honestly states that they don't remember why it is crap... well, that is proof. You can't take disparate events and claim them as a trial in a row. So if you chose a number between one and a thousand on three separate days, the chance that the 'same number will come up three times in a row' is only 'one in a billion' if you state in advance which three number picks you are going to use. You cannot choose three number picks after the fact and say that the odds are 'one in a billion' (see the sketch after this list).
4. Since there are no control groups in the studies, we don't know if the alleged 32 percent hit rate is even significant or an artifact of something else. Why no control groups?
5. By now they should have found at least one receiver and sender who can actually do the psi thing. The studies should be two-tiered: one to screen large numbers of people to find those with psi talent, and two to demonstrate that talent.
6. Why such small trials and numbers of runs? At my local university you can recruit thousands of freshmen for studies like this. I was involved in a study where we ran a thousand people in a semester, and it was a considerably more complex methodology and protocol.
7. Why the adjustment of numbers after the fact? In any other research area that I have read, after-the-fact adjustment is always accompanied by the phrase 'but since this was not a controlled-for factor, the result of this adjustment should be replicated later'. There should be controls when the study is run the first time!
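
To put point 3 in numbers, a quick back-of-envelope in Python (nothing here comes from the studies themselves):

[code]
# Pick a number between 1 and 1000 on three separate days.
p = 1 / 1000

# Odds that three picks named IN ADVANCE all come up: one in a billion.
pre_specified = p ** 3   # 1e-09

# Odds that the same number comes up on all three days, whatever it is:
# day one is free; only days two and three have to match it.
post_hoc = p ** 2        # 1e-06, a thousand times more likely

print(pre_specified, post_hoc)
[/code]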

I would really really like to believe that psi does exist.
I haven't seen anything that says it doesn't, but most of this research does a [b]disservice[/b] to the notion that it does exist. By just using tighter protocols and methods, any effect demonstrated would be accepted, instead of all this after-the-fact readjustment and wishful thinking.

Thanks.
 
Dancing David said:
I think that Lucianarchy gets the sweeping statement award.

I am unconvinced either way: there hasn't been a demonstrated psi effect, but there also is no definitive proof that there isn't psi.


Are you sure about this??
 
Hi,

I'm a newbie on this forum, so I don't know if this was mentioned before, but... Ray Hyman published a very good reply to Bem and Honorton's Ganzfeld meta-analysis in Psychological Bulletin.

Reference:

Hyman, R. (1994). Anomaly or artifact? Comments on Bem and Honorton. Psychological Bulletin, 115(1), 19-24.

I think it's a very good reply to Bem and Honorton and it is a good idea to read it... :)

I also think that Richard Wiseman did a meta-analysis of his own on the Ganzfeld studies and found it was consistent with chance, but I don't have the reference for this publication...

See you,
 
Ersby said:
I cannot explain the results of the PRL. I didn't say that response bias does. I was trying to explain that response bias exists: the "25% hit rate expected by chance" may be an over-simplification, despite what you learnt in second grade math. That is all.

I dunno what 2nd grade maths is. I packed maths in at the age of 15. Is that about the same age?

Anyway, for the life of me I am unable to understand this. The paper Amherst references
http://a1162.fmg.uva.nl/~djb/publications/1998/AutoGF_set20effect.pdf

states in the abstract:

We make a distinction between a random (target selection) procedure which, in principle, excludes sequential dependencies, and a resulting random (target) sequence that may contain peculiarities, especially in terms of target frequency distribution, which may correspond to a subject’s response biases. Post-hoc corrections that adjust the hit probability are available.

True enough. Nevertheless, the fact that after the event there is a correlation between psychological propensities to choose particular targets and such targets is utterly irrelevant. One would expect that. A subject might have a psychological propensity to get water impressions, and judges could have a psychological propensity to choose water stimuli, and a post hoc analysis would find that water targets are chosen more often than, say, fire targets. Now, if the targets were not initially randomly selected, or if out of the initial pool of targets there were proportionately more psychologically desirable targets (e.g. water), then this admittedly would be a problem. But my understanding is that this is not the case. Of course it could be just by chance that there are more targets depicting water. But again, this will eventually average itself out. It's simply not possible for the chance rate to be, on average, greater than 25%.

What am I not understanding here??

Anyway, in the main paper it states:

However even when the target selection procedure is properly random, the resulting sequence of targets may be quite structured. The 10-bit binary sequence “1111111111” may result from a random binary generator. The probability for such a sequence is equal to all other 10-bit binary sequences like “1101001101” although the latter may appear more random than the former.

Response biases are tendencies for subjects to prefer a specific response over others. For instance if we have a very rigid subject with an absolute preference to select the response “1” then this subject in a 10-trial binary choice experiment will “produce” as a response sequence “1111111111”. This, of course, would yield 10 hits in case of the former target sequence. The probability that 10 hits occurs by chance is smaller than 1 in 100 and thus such a result is said to be statistically significant and considered to be an indication that psi occurred. Is that the correct conclusion?

Obviously not in that particular case, but it must average itself out to 25%. For example, the actual targets might have a propensity to be 0s, so if a receiver has a propensity to prefer 1s, this will then result in less than the chance rate.
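
For what it's worth, a quick Python simulation of the paper's 10-trial example bears out the unconditional average, assuming a fair random target generator and a subject who always answers "1":

[code]
import random

def rigid_subject_experiment(trials=10):
    """Random binary targets scored against a subject who always says '1'."""
    return sum(random.randint(0, 1) == 1 for _ in range(trials))

# Unconditionally, over many repetitions, the hit rate settles at 50%
# (the binary analogue of 25% in the four-choice ganzfeld design)...
runs = 100_000
hits = sum(rigid_subject_experiment() for _ in range(runs))
print(hits / (10 * runs))  # ~0.5

# ...but any single experiment scores exactly as many hits as there happen
# to be 1s in that run's target sequence.
[/code]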

What am I failing to understand here??
 
Luci said:
And if you are to assume that each and every last recorded purported 'effect' is due to delusion, fraud or error, in light of so many undebunked accounts, that is an extraordinary claim in itself, and as such requires extraordinary evidence.
But it is not as extraordinary as the claim that people can read one another's minds, is it?

Exp.No. Debunking Issue

**1 No significance
**2 Poor controls
**3 Poor controls
**4 Wrong tie
**5 Knew the brother of one of the girls in admin
**6 Mirror in bathroom opposite lab building
**7 Too difficult to understand
**8 Um.... errr.... [smell of experimenter on photos]
**9 Black magic
Well, not that last one. That's what psi is.

Ian said:
I dunno what 2nd grade maths is. I packed maths in at the age of 15. Is that about the same age?
You were 15 in second grade? Oh wait, there is some British confusion here.

Of course it could be just by chance that there are more targets depicting water. But again, this will eventually average itself out. It's simply not possible for the chance rate to be, on average, greater than 25%.
It's useful to figure out whether, just by chance, there were a disproportionate number of water targets, even after all the trials. The experiments don't include enough trials to wash out such skew.
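
A rough back-of-envelope in Python makes the point; both numbers below are illustrative assumptions, not figures from any study:

[code]
import math

n = 350   # roughly the size of the PRL database (illustrative)
p = 0.5   # assumed fraction of water pictures in the target pool

# Standard deviation of the observed fraction of water targets:
sd = math.sqrt(p * (1 - p) / n)
print(round(sd, 3))  # ~0.027, so a skew of a few points is entirely typical
[/code]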

~~ Paul
 
Interesting Ian said:



What am I failing to understand here??

Put simply, what you're failing to understand is that all the biases you mention in your post are extremely unlikely to cancel themselves out nicely and leave you with a simple 25% baseline against which you can compare.

Instead, you need to take the various biases into account when calculating the baseline. Quoting from the Bierman paper you cite, for instance: "The preference for specific targets results in a corrected expected probability which is .2598 and thus indeed should result in an adjustment of the reported z-scores." Note that this "correction" moves the baseline by almost 4% in relative terms, so it's a non-trivial correction. Quoting further, "A conservative correction [for a different effect] reduces the 10% differential effect to a non-significant 6.8%" [Italics and boldface mine].

In other words, the effect evaporates when you do the math right.

That's what you're missing. When the effect evaporates when you do the math right, then a) the effect may not have been there to begin with, and b) you need to be extremely careful to do the math correctly in the future, lest you find other effects that aren't there.
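
To see how a corrected baseline feeds through to the significance tests, here is the standard z-score calculation in Python with both the naive and the corrected chance rates; the hit counts are illustrative, not taken from the paper:

[code]
import math

def z_score(hits, trials, p0):
    """Normal approximation to a binomial test against chance rate p0."""
    return (hits - trials * p0) / math.sqrt(trials * p0 * (1 - p0))

hits, trials = 106, 329  # illustrative counts, roughly a 32% hit rate
print(z_score(hits, trials, 0.25))    # ~3.0 against the naive 25% baseline
print(z_score(hits, trials, 0.2598))  # ~2.6 against the corrected baseline
[/code]

A shift of one percentage point in the baseline knocks a visible chunk off the z-score, which is why the correction is non-trivial.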
 
Paul C. Anagnostopoulos said:

But it is not as extraordinary as the claim that people can read one another's minds, is it?


Well, not that last one. That's what psi is.


You were 15 in second grade? Oh wait, there is some British confusion here.


It's useful to figure out whether, just by chance, there were a disproportionate number of water targets, even after all the trials. The experiments don't include enough trials to wash out such skew.

~~ Paul

Why is it useful? If there were disproportionately fewer water targets, one might as well argue that the hit rate is artificially low. How would Skeptics react to such a suggestion? Exactly. LOL. Therefore, why make an issue of the number of water targets being disproportionately high? The point is that over a large number of experiments this must all average out to 25%. Yes?? :confused:
 
Interesting Ian said:


The point is that over a large number of experiments this must all average out to 25%. Yes??

No. Unless you have an infinite number of experiments, which would be rather expensive to run.

With a "large" number of experiments, the results are likely to average to something near 25%. That's not the same thing at all.

With a "small" number of experiments, you are likely to get all sorts of wierd stuff.

Doing the math right involves taking words like "likely" and "near" seriously, turning them into quantitative effects, and correcting for those effects.
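
For instance, here is "near 25%" turned into a number using the normal approximation (the trial counts are just examples):

[code]
import math

def chance_interval(trials, p0=0.25, z=1.96):
    """95% range of hit rates produced by pure chance in `trials` sessions."""
    half = z * math.sqrt(p0 * (1 - p0) / trials)
    return p0 - half, p0 + half

print(chance_interval(50))    # ~(0.13, 0.37): wide open in a small study
print(chance_interval(1000))  # ~(0.22, 0.28): still not a knife-edge 25%
[/code]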
 
drkitten said:
Put simply, what you're failing to understand is that all the biases you mention in your post are extremely unlikely to cancel themselves out nicely and leave you with a simple 25% baseline against which you can compare.

That doesn't matter. On average it must be 25%. I cannot see any conceivable way it could not be. Although I have never studied statistics in my life, it is absurd to suggest that it is 26.5%, as Ersby suggests (or whoever he is quoting suggests).

Instead, you need to take the various biases into account when calculating the baseline.

If baseline doesn't mean average then I do not understand what it means.

Quoting from the Bierman paper you cite, for instance: "The preference for specific targets results in a corrected expected probability which is .2598 and thus indeed should result in an adjustment of the reported z-scores." Note that this "correction" moves the baseline by almost 4% in relative terms, so it's a non-trivial correction. Quoting further, "A conservative correction [for a different effect] reduces the 10% differential effect to a non-significant 6.8%" [Italics and boldface mine].

This conveys absolutely zero meaning to me.

In other words, the effect evaporates when you do the math right.

References please.

That's what you're missing. When the effect evaporates when you do the math right, then a) the effect may not have been there to begin with, and b) you need to be extremely careful to do the math correctly in the future, lest you find other effects that aren't there.

I want to know two things:

a) How can the chance of picking one of 4 targets be magically greater than 25% when the pool of potential stimuli has all types of stimuli in equal proportions, the target is chosen randomly out of the 4 stimuli, the order in which the stimuli are presented to the judges is random, and a sufficiently large number of trials/experiments are carried out??

b) Where are your references stating that there is no effect, even if we assume that somehow, mysteriously, the average would be 26.5%?? The average would need to be about 33%.
 
JMA said:


I also think that Richard Wiseman did a meta-analysis of his own on the Ganzfeld studies and found it was consistent with chance, but I don't have the reference for this publication...

See you,

Hi, welcome. I think you may be referring to the Wiseman/Milton meta-analysis. In fact, it was not complete. When Julie Milton completed it, it was found to be statistically significant.
 
drkitten said:
Originally posted by Interesting Ian:
The point is that over a large number of experiments this must all average out to 25%. Yes??

No. Unless you have an infinite number of experiments, which would be rather expensive to run.

It's wholly irrelevant how many experiments are run. The average must still be 25%. You clearly fail to understand what the word average means. Of course, the more experiments we run, the more confident we can be that there is an anomalous effect. Running just a few trials is worthless. Nevertheless, my point remains: the average cannot conceivably be greater than 25%.

With a "large" number of experiments, the results are likely to average to something near 25%. That's not the same thing at all.

Yes, it's exactly the same thing. Of course, if we take bias into account, then over a particular experiment the average will likely be either greater or lesser than 25%. But that all averages out over a sufficiently large number of experiments, so that the average is exactly 25%.

With a "small" number of experiments, you are likely to get all sorts of wierd stuff.

Yes, and that ought to have as many artificially low hit rates as artificially high ones. But with a sufficiently high number of experiments it must average out at 25%. Not 26% or 27% as people are claiming.
 
Interesting Ian said:


It's wholly irrelevant how many experiments are run. The average must still be 25%. You clearly fail to understand what the word average means. Of course, the more experiments we run, the more confident we can be that there is an anomalous effect. Running just a few trials is worthless. Nevertheless, my point remains: the average cannot conceivably be greater than 25%.

Yes, it's exactly the same thing. Of course, if we take bias into account, then over a particular experiment the average will likely be either greater or lesser than 25%. But that all averages out over a sufficiently large number of experiments, so that the average is exactly 25%.

Yes, and that ought to have as many artificially low hit rates as artificially high ones. But with a sufficiently high number of experiments it must average out at 25%. Not 26% or 27% as people are claiming.

It has become clear to me after reading this post that you understand statistics even less than I do.
 
Hi,

Lucianarchy said:
I think you may be referring to the Wiseman/Milton meta-analysis. In fact, it was not complete. When Julie Milton completed it, it was found to be statistically significant.

Do you know in which publication I can find that? I'd like to read it, especially if the results are statistically significant...

Thanks,
 
Ersby said:
Interestingly, the post hoc expected hit rate by chance for static targets was 24.4%, while for dynamic targets it was 27.7%, which means the 10% gap between the scores of static and dynamic targets should actually be a 6.8% gap, which renders the effect non-significant.


How is this response bias worked out?? Obviously, if it is worked out by taking the number of hits on psychologically desirable targets divided by the total number of hits, then this figure will be greater than 25%. Inevitably there is bound to be a greater percentage of hits on psychologically desired targets than on other targets. Moreover, no matter how many experiments are run, it will be greater than 25%. As Ersby says, it might well be 27.7%.

But are people not able to understand that this does not alter the chance of getting the right target?? Do people really fail to understand that if psi does not exist, the average cannot possibly be greater than 25%??? :eek:
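
For reference, one plausible shape such a post hoc correction could take, in Python; this is a guess at the method, not taken from the Bierman paper, and all the figures are hypothetical:

[code]
# Expected chance hit rate as the overlap between how often each category
# happened to appear as the target (post hoc) and how often subjects tend
# to pick that category (measured response bias).
target_freq   = {"water": 0.35, "fire": 0.20, "people": 0.25, "other": 0.20}
response_bias = {"water": 0.40, "fire": 0.15, "people": 0.30, "other": 0.15}

p0 = sum(target_freq[c] * response_bias[c] for c in target_freq)
print(p0)  # 0.275, above 0.25 because the two skews happen to line up

# With no bias (0.25 for every category), p0 is exactly 0.25 no matter how
# skewed the target frequencies are, which is the unconditional intuition.
[/code]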
 
Lucian is mistaken. Wiseman and Milton worked on the paper together. When it was completed it showed no effect.

http://www.csicop.org/si/9911/lilienfeld.html

Meanwhile, going back a bit (and I'm rambling now, I accept that; if time is precious, feel free to skip the rest of this post :) ). Talking about the hits linked to, it would be useful to know just how many trials the quotes were taken from, and how long the transcripts originally were. Amherst has said that some come from the PRL database, so there are at least 350+ trials to choose from. Plus, the mentation of the PRL experiments lasted half an hour, so the notes would run to several pages. Even at a conservative estimate, we’re looking at a tiny fraction of the whole dataset.

As for the odds of such a hit, well, since no one has done a full-length experiment into RV with just guessing, we can’t say for sure. However, in 1923 the SPR did something similar. There had been a medium who was adept at identifying passages in closed, unidentified, physically distant books. (When I say “identifying”, I mean she would talk about the theme of the passage, or how it related to a certain relative in some way.) This medium, Mrs Leonard, did several of these tests and scored a 36% hit rate, whereby the passage did bear a marked resemblance to what she mentioned.

In 1923 the SPR arranged for a cold-reading version, as it were, of simple guesses, and found that a hit rate of a little under 5% was scored.

That’s one in twenty, and this was an experiment whereby the guessers only had a couple of sentences: not the pages of notes that remote viewers have. I think such matches on the Psiexplorer site are entirely in accordance with chance. They are “relevant” only in that they’re on a site linked to the PRL test, but not relevant in that they further the discussion.

Let’s not forget, before anyone starts inventing thousands-to-one odds against describing a photo, that the behaviour of set 20 in the PRL was at a probability of 0.0000125. Anyone care to explain that?
 
Interesting Ian said:
But are people not able to understand that this not alter the chance of getting the right target?? Do people really fail to understand that if psi does not exist the average cannot possibly be greater than 25%??? :eek:

Response bias, for the last time, is an entirely post hoc measure. If, after the run of an experiment, it transpires that targets were chosen at random that JUST SO HAPPENED to coincide with the ALREADY KNOWN AND UNDERSTOOD propensity for people to talk about certain things in the ganzfeld state, then there are grounds for suggesting that response bias will inflate the hit rate expected by chance.

I already gave a hypothetical example of how it could work. If someone knows that 50% of a target pool has water, and he talks about water in each session, and (by chance) water pictures are chosen as the target 60% of the time, not 50%, then the hit rate expected by chance is 29%.
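
That 29% figure checks out in a Python simulation, under one reasonable reading of the judging rule; the decoy composition and the tie-breaking behaviour are my assumptions:

[code]
import random

def session():
    # By chance, water came up as the target 60% of the time, even though
    # the pool is 50% water (both figures from the hypothetical above).
    target_is_water = random.random() < 0.60
    water_decoys = sum(random.random() < 0.50 for _ in range(3))

    # Assumed judging rule: the receiver always talks about water, so the
    # judge picks a water picture if the set has one, and guesses otherwise.
    water_count = target_is_water + water_decoys
    if water_count == 0:
        return random.random() < 0.25             # blind guess among four
    if target_is_water:
        return random.random() < 1 / water_count  # picked among water pics
    return False                                  # picked a water decoy

n = 200_000
print(sum(session() for _ in range(n)) / n)       # ~0.29 rather than 0.25
[/code]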
 
Ersby said:
Lucian is mistaken. Wiseman and Milton worked on the paper together. When it was completed it showed no effect.


Sorry, Ersby, it is you who is mistaken. It was updated.

See my response to JMA, below.
 
