The Ganzfeld Experiments

Paul C. Anagnostopoulos said:

Give me one reference to an experiment where the probabilities of chance hits were calculated.

~~ Paul

Uh, just something like a Zener card guess would do it.

The probability of a hit is 1/5 for choosing 1 card from 5 (given that you can't see through the cards, etc., etc.)
 
Zep said:
For the hundredth time... :rolleyes:

PEAR try five ways from Sunday to make sense of their own data. Result: nada.

http://www.princeton.edu/~pear/IU.pdf

*sigh*

Zep, lengthy justifications, excuses, or explanations by the PEAR staff are NOT the same as demonstrating that they screwed up the math.

Can you show me the math they screwed up and how they screwed it up?
 
T'ai said:
Ganzfeld, auto-Ganzfeld, and RNG experiments come to mind. In these one knows the probability of getting a hit pretty easily.
Surely you're joking. And what does that have to do with psychics?

Uh, just something like a Zener card guess would do it.

The probability of a hit is 1/5 for choosing 1 card from 5 (given that you can't see through the cards, etc., etc.)
We're talking about psychics getting hits, not trivial experiments with Zener cards. Nevertheless, the assumption that people guess one of the Zener cards with probability 0.2 is naive.

You're assuming that people work like machines.

~~ Paul
 
T'ai Chi said:


Ganzfeld, auto-Ganzfeld, and RNG experiments come to mind. In these one knows the probability of getting a hit pretty easily.

If you eliminate the influences that have been discussed ad nauseam in this thread (response bias and random-match bias, lack of randomization, experimental error, and deliberate error), then there is the question that has yet to be answered: what is the standard deviation, and does 33% rise above it?

Which RNG studies are really good ones? The search I did turned up only badly flawed research methods.
 
Paul C. Anagnostopoulos said:

We're talking about psychics getting hits, not trivial experiments with Zener cards. Nevertheless, the assumption that people guess one of the Zener cards with probability 0.2 is naive.

You're assuming that people work like machines.

~~ Paul

I just specified a simplistic model, one that has been used before, which satisfied your desire for a specification of a hit by chance.
 
T'ai, I was talking about psychics. Everyone keeps saying that psychics get hits that defy chance, but no one does the math.

~~ Paul
 
Dancing David said:

Then there is the question that has yet to be answered, what is the standard deviation

In a binomial setting with probability p of success (e.g., p = 0.25) and n trials, the standard deviation of the sample proportion is:

s = sqrt(p(1-p)/n), which for p = 0.25 is sqrt(3/(16n)).
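A quick numeric check of this formula, sketched in Python (the sample sizes are just illustrative values):

```python
import math

def binomial_sd(p, n):
    """Standard deviation of the sample proportion over n Bernoulli trials."""
    return math.sqrt(p * (1 - p) / n)

p = 0.25  # chance hit rate with four possible targets
for n in (25, 100, 1000):
    print(f"n={n:5d}: sd of hit rate = {binomial_sd(p, n):.4f}")
# n=   25: sd of hit rate = 0.0866
# n=  100: sd of hit rate = 0.0433
# n= 1000: sd of hit rate = 0.0137
```

So a 33% hit rate is within one standard deviation of chance for a single 25-trial study, but roughly six standard deviations above chance if sustained over 1,000 trials.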
 
T'ai Chi said:


In a binomial setting with probability p of success (e.g., p = 0.25) and n trials, the standard deviation of the sample proportion is:

s = sqrt(p(1-p)/n), which for p = 0.25 is sqrt(3/(16n)).

That's cool, but I guess I was thinking more in terms of population statistics, where the standard deviation is the square root of the variance in the population.

But by the above method a trial of twenty-five runs would have a standard deviation of about 8.7%, so a result of 33% (8 points above the 25% chance rate) would be less than one standard deviation above chance.

However, I was thinking of it more as a sampling issue, where the standard deviation is found more like this:

Terms you'll need to know
x = one value in your set of data
avg (x) = the mean (average) of all values x in your set of data
n = the number of values x in your set of data
For each value x, subtract the overall avg (x) from x, then multiply that result by itself (otherwise known as determining the square of that value). Sum up all those squared values. Then divide that result by (n-1). Got it? Then, there's one more step... find the square root of that last number. That's the standard deviation of your set of data.
Now, remember how I told you this was one way of computing this? Sometimes, you divide by (n) instead of (n-1). It's too complex to explain here. So don't try to go figuring out a standard deviation if you just learned about it on this page. Just be satisfied that you've now got a grasp on the basic concept.

which I am quoting from here http://www.robertniles.com/stats/stdev.shtml
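The recipe quoted above, sketched in Python (the hit rates fed in are hypothetical, just to show the mechanics):

```python
import math

def sample_sd(data):
    """Sample standard deviation: sum of squared deviations divided by n - 1."""
    n = len(data)
    mean = sum(data) / n
    ss = sum((x - mean) ** 2 for x in data)
    return math.sqrt(ss / (n - 1))

hit_rates = [0.24, 0.31, 0.22, 0.28, 0.33]  # hypothetical per-study hit rates
print(sample_sd(hit_rates))  # roughly 0.046
```

Dividing by n instead of n - 1 would give the population standard deviation the quote mentions; n - 1 is the usual choice when the data are a sample.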

I guess that I am sceptical of the assumption that there will just 'happen' to be a 0.25 chance that a target picture will match the receiver's statements.

I think that if the 'random match' rate were measured and controlled for, then I would be more comfortable accepting a twenty-five percent chance of the target matching the receiver's statement. But that isn't controlled for, so without controls I don't accept the twenty-five percent chance, and I wonder whether the matching rates fall on a bell curve with their own variance and distribution.

Say that there is variance in the rate of matching between 'non-psychic' events, producing a distribution of matches between the target and receiver statements; would it not be important to know the mean and variance of that distribution, and from those the standard deviation?

Again, I think a few sample controls would be all it would take to determine this and whether it is affecting the data. That way, when someone started talking about a hit rate of thirty-three percent, I would be confident that it was not just the mean of the method and that it was a hit level that rose above random chance.

Maybe I am excessive in my thinking.
 
There's an internet point in the place I'm staying. Alas, it costs a fortune. I'll have to be quick and address only the pertinent points.

amherst said:



2.The 1997 Parker/Gothenburg experiments are:
Parker et al. (1997) (Study 1)b
30 trials

Parker et al. (1997) (Study 2)b
30 trials

Parker et al. (1997) (Study 3)b
30 trials

Since 90 trials are exactly what Radin lists on his graph for Gothenburg/Parker, we can be sure that these studies were published in early 97 and therefore included in Radin's analysis. Unless you can show that Parker/Gothenburg published the results of (any) ganzfeld experiments before 1997, your criticism is baseless.

amherst

Why would I need to show they were published before '97? I'm saying that Radin got some results before they were published.

Besides, you're missing out studies 4 and 5 (Parker and Westerlund) which make up the Gothenburg data. The fact that Radin has only 90 sessions makes me think that he got only the partially complete results.

Of course, I could be wrong, but I also don't see how Edinburgh put together 289 sessions in three years unless the Dalton work is included.

Similarly with Durham's (which includes Kanthamani's work, I believe).

We're simply not going to agree. I think that Radin's database (and, of course, Milton's too; I never thought that was complete either!) is lacking. Some of Bierman's work, too, appears to have been missed off.

Anyway, here's one I made earlier...

http://www.skepticreport.com/tools/rvlist.htm
 
Paul C. Anagnostopoulos said:
T'ai, I was talking about psychics. Everyone keeps saying that psychics get hits that defy chance, but no one does the math.

~~ Paul

I wonder why I got so much flak, then, for trying to put some basic math into psychic stuff by studying the number of questions, hit rates, etc.

Go figure.

Paul, can you tell us if they defy chance? What math have you done?
 
T'ai, I'm not talking about overall hit rates and other such summary statistics. I'm talking about the probability of getting a specific hit by chance. Say some psychic guesses that my mother died of Alzheimer's. What is the probability that they could guess that by chance?

I have not done these probability calculations. I suspect it is more or less impossible to do so. I brought up the subject simply to counter continuous remarks like "Well, that hit was just too amazing to be a coincidence." People who make remarks like this are talking out of their arses.

~~ Paul
 
Paul C. Anagnostopoulos said:
Say some psychic guesses that my mother died of Alzheimer's. What is the probability that they could guess that by chance?

~~ Paul

I think those kinds of calculations are possible. The statistics must be out there somewhere.

What this has to do with ganzfeld, I'm not sure.

Does anyone know if there's a way to add together effect sizes if you don't know the number of sessions involved?
 
Ersby said:
I think those kinds of calculations are possible. The statistics must be out there somewhere.
Well, perhaps for Alzheimer's. Can we find out how many people have a picture of a horse in their front hall? And we certainly have no idea what the chances are of the psychic choosing these things to say. How often would a nonpsychic pick Alzheimer's or the horse?

Ersby said:
What this has to do with ganzfeld, I'm not sure.
It's pertinent, because the same questions hold for guessing about a photo or video. Our assumption that there is a 25% chance of picking each of four photos is naive.

~~ Paul
 
Ersby said:
Why would I need to show they were published before '97? I'm saying that Radin got some results before they were published.
I'm having a hard time understanding your criticism. I initially thought (I guess incorrectly) that your position on Radin's meta-analysis was that he left out some unsuccessful studies which were published pre-early 1997 and that this is why the result of his analysis was so significant. It now seems that you do agree that Radin was in fact all-inclusive with everything published pre-early 97, but you think he used a few studies which had been conducted but not yet published, and this inflated the overall hit rate of his meta-analysis. So is your criticism of Radin that he picked and chose from studies which weren't published yet, ignoring unsuccessful tests and only including successful ones? Is this what you are saying?

Anyway, the overall hit rate of Radin's meta-analysis is at 33.2%. The result of the Milton/Wiseman, Bem/Palmer/Broughton 40 study meta-analysis (including all the non-standard, nonsignificant studies) is at 30.1%. You can't point me towards any studies which were not included in these two analyses which would significantly lower the hit rates. Maybe we will just have to agree to disagree, but it seems obvious to me that the ganzfeld has been replicated.
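For scale, the 33.2% figure can be turned into a z-score, sketched in Python under the simplifying assumption (disputed elsewhere in this thread) that all sessions are independent with a true 25% chance rate; the session count is the 2,549 Radin reports:

```python
import math

def z_score(hit_rate, n, p0=0.25):
    """How many standard errors the observed hit rate sits above chance."""
    se = math.sqrt(p0 * (1 - p0) / n)
    return (hit_rate - p0) / se

# Figures quoted in this thread: 33.2% over 2,549 sessions.
print(z_score(0.332, 2549))  # roughly 9.6
```

Of course, if the effective chance rate is not really 25%, or the sessions are not independent, this number means little; that is precisely the point under dispute.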

amherst
 
Paul C. Anagnostopoulos said:
T'ai, I'm not talking about overall hit rates and other such summary statistics. I'm talking about the probability of getting a specific hit by chance. Say some psychic guesses that my mother died of Alzheimer's. What is the probability that they could guess that by chance?


Who knows! But you don't know that probability either. :)

Personally, given how much information is out there (something I think people drastically underestimate), I'd say the probability is quite high. That is just my intuition speaking, and not a scientific assessment.

Paul, that is why we need to test mediums etc. in a scientific setting, where we can CONTROL the types of things they can make pronouncements on, so we can have a handle on the probability involved.
 
Ersby said:

Does anyone know if there's a way to add together effect sizes if you don't know the number of sessions involved?

So you have, say:

Effect Size, Sample Size
.2, m1
.04, m2
.7, m3
etc.

, for example, and want to know how to combine them when you don't know the m's?

I know you don't know the number of sessions, but do you have any other statistics, like the variance for example? I have heard of various ways to weight effect sizes, say inversely proportional to their variance.
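That weighting scheme, sketched in Python (the effect sizes and variances here are hypothetical, just to show the arithmetic):

```python
def combine_effect_sizes(effects, variances):
    """Weighted mean effect size, each study weighted by 1/variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, effects)) / total
    combined_var = 1.0 / total  # variance of the combined estimate
    return combined, combined_var

effects = [0.2, 0.04, 0.7]      # hypothetical effect sizes
variances = [0.01, 0.04, 0.09]  # hypothetical per-study variances
print(combine_effect_sizes(effects, variances))
```

The precise studies get the most say; without variances or sample sizes, though, there is no principled weight, which is the problem raised above.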
 
Paul,

Our assumption that there is a 25% chance of picking each of four photos is naive.
There is a 25% chance of picking the 'target' if a purely random selection process is used. If the process is not random (and we both think that human thought processes are NOT purely random?) then I'm not sure exactly how we calculate the 'probability per photo'. One way would be to 'pretest' the collection of photos by running Ganzfeld trials in which there was no sender (hence, no target), yet the receiver is not told this; they still sit in the 'receiving room' for 20 minutes, and still have to select one of the 4 photos at the end of the session. Do this often enough and we should start to find out whether there is any 'selection bias' towards particular photos. Of course this would have to be done for each individual receiver; we can't assume that one person's bias is applicable to everyone!

But in the end, if there is a finite pool of photos, and each photo is chosen as the target the same number of times, then it doesn't matter what the 'bias' might be (provided it's constant!). For example, if we had a pool of just 2 photos, and one has a 'bias' of 80% to the other's 20%, then as long as (a) we select each photo as a target 50% of the time and (b) the reciever selects according to the bias, we should get a "by chance" result.
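A quick simulation of the two-photo example above (a sketch; the 80/20 bias figure is the one from the post, and the trial count is arbitrary):

```python
import random

def simulate(trials, bias_a=0.8, seed=1):
    """Receiver prefers photo A with probability bias_a; target is chosen uniformly."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        target = rng.choice("AB")                      # each photo is the target 50% of the time
        guess = "A" if rng.random() < bias_a else "B"  # biased, target-blind receiver
        hits += (guess == target)
    return hits / trials

print(simulate(100_000))  # close to 0.5 despite the 80/20 bias
```

The bias washes out exactly because targets are assigned uniformly and the receiver has no information about them: 0.5 x 0.8 + 0.5 x 0.2 = 0.5.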

(Edited to remove the stupid attempt at a table of results that doesn't line up even remotely sensibly)

Of course, this assumes that 'selection bias' is a constant. Perhaps people start off a session with a preference for "water" scenes, but subconsciously tire of this, and eventually start to choose against water scenes in later trials?
 
amherst said:

I initially thought ( I guess incorrectly) that your position on Radin's meta-analysis was that he left out some unsuccessful studies which were published pre-early 1997 and that this is why the result of his analysis was so significant. It now seems that you do agree that Radin was in fact all inclusive with everything published pre-early 97, but you think he used a few studies which had been conducted but not yet published, and this inflated the overall hit rate of his meta-analysis.

amherst

Well, yes and no. Certainly work pre-97 is missing, but perhaps not in the way you inferred. My criticism (not just of Radin, but of the whole meta-analysis debate) is that the years 1985-1991 have largely disappeared from view. If you look at the link I posted a while back, it lists as much info on the remote viewing/ganzfeld issue as I have been able to find. Some of those experiments listed '85-'91 don't appear to be included in any meta-analysis. Bierman looked at them in the paper we talked about ages ago, and noted they dragged the effect back down towards chance.
 
T'ai Chi said:

I know you don't know the number of sessions, but do you have any other statistics, like the variance for example? I have heard of various ways to weight effect sizes, say inversely proportional to their variance.

Nope, nothing else. Just the effect sizes. Oh well, not to worry. Does something simple like the mean effect size have any worth? Or is it just a meaningless (ha ha) number?
 
Ersby said:


Well, yes and no. Certainly work pre-97 is missing, but perhaps not in the way you inferred. My criticism (not just of Radin, but of the whole meta-analysis debate) is that the years 1985-1991 have largely disappeared from view. If you look at the link I posted a while back, it lists as much info on the remote viewing/ganzfeld issue as I have been able to find. Some of those experiments listed '85-'91 don't appear to be included in any meta-analysis. Bierman looked at them in the paper we talked about ages ago, and noted they dragged the effect back down towards chance.
-The only Bierman paper I remember us discussing was the one he co-authored on response bias. Since all of his articles are available on his website http://a1162.fmg.uva.nl/~djb/publications/ can you please provide a link to the one you are referring to so I can read it?

-Since the abstracts you've listed for the 85-91 ganzfeld studies don't provide much detail, I'd like to read what Bierman says about them before I give any real comment. I will just reiterate that Radin claims he included every published ganzfeld replication attempt as of early 97 in his meta-analysis. And he certainly knew about the studies you listed in your report.

-Radin pg. 88:
"From 1974 to 1997, some 2,549 ganzfeld sessions were reported in at least forty publications by researchers around the world."

Radin is claiming he obtained all his data from published material. Unless you have evidence that he is lying (which you don't), you should retract your claim that he included not-yet-published studies.

amherst
 
