The Ganzfeld Experiments


"By the 1960s, a number of parapsychologists had become dissatisfied with the familiar ESP testing methods pioneered by J. B. Rhine at Duke University in the 1930s..."


Yes, and they made up some BS explanation for their failure to find "psi". Instead of opting for the simplest explanation (psi doesn't exist), they decided to give themselves more wiggle room. This is agenda-based science, one that sets out to prove a predrawn conclusion. They believe psi exists, and they will do what they can to prove it, whether it really exists or not.



Historically, psi has often been associated with meditation...


This has nothing to do with the "forced" choice excuse that you are giving.


The ganzfeld has no more "wiggle" room than Rhine's forced-choice card tests (which were also highly significant). You do realize that Ray Hyman agreed upon and helped design the autoganzfeld protocol, don't you?

amherst


False, it's been demonstrated that there is wiggle room throughout threads in the JREF forum and other criticisms of the test. Your appeal to Ray Hyman doesn't help your case at all.
 
Ersby, the issue with ganzfeld subjective judging lies not with the subjects/testees themselves but with the judges. They must make a subjective assessment of the subjects' guesses and see if they can match them up to the pool of targets offered.

E.g. do the words "yellow" and "wheels" actually constitute a match on a picture that includes a yellow car in the background in what is essentially a picture of a child in the foreground? How about just the word "wheels", same picture? In a yes/no situation, this becomes tricky. Imagine now if you scored a "yes" for that image, and a few more like it, in a yes/no scoring process. This would raise the score of hits significantly, would it not? Whereas some other judges may score the same test a "no hit", thereby lowering the score of hits.

To address this, a scale could be introduced to get away from the yes/no situation - say, 1 to 10. But the question still remains: By what criteria are you going to score any attempt? Note that RV-type free-response results are rarely drawings or paintings - they are usually very rambling verbal descriptions. So matching these to actual images where possible and assigning a score is even more subjective - it's a bit like marking a student's essay, where the scorer has some leeway to award marks for convincing arguments, etc.

Incidentally, this is the method that PEAR used - and they rapidly discovered that the judging process itself was the hinge-point in determining the propensity for the RV effects to appear. In the PEAR paper that I referenced above, they described how they went to the trouble of trying to remove the subjectivity of the scoring methods (they even rescored most of the responses), in an effort to ensure that there were no influences from that direction. Alas, as they did so, the RV effects disappeared, to their disappointment.

Summary: As soon as the targets are complex images, subjectivity in the scoring process plays a major part in determining the psi results. And as long as the results are due to subjective methods, they will not be respected.

[edit: dyslexia backlash]
 
Tell me again why the judge does not pick the sent image himself?
 
Zep said:
...the issue with ganzfeld subjective judging lies not with the subjects/testees themselves but with the judges. They must make a subjective assessment of the subjects' guesses and see if they can match them up to the pool of targets offered.

The "subjects/testees" are the ones who (almost always) choose the picture/clip from a pool of four. If you'd just read the papers, Zep...
E.g. do the words "yellow" and "wheels" actually constitute a match on a picture that includes a yellow car in the background in what is essentially a picture of a child in the foreground? How about just the word "wheels", same picture? In a yes/no situation, this becomes tricky. Imagine now if you scored a "yes" for that image, and a few more like it, in a yes/no scoring process. This would raise the score of hits significantly, would it not? Whereas some other judges may score the same test a "no hit", thereby lowering the score of hits.
Get it through your head: this isn't the PEAR protocol. This isn't remote viewing. There is no yes/no situation. The receiver (or a completely blind outside judge) simply chooses the target he feels best corresponds to the imagery he (or, if an outside judge, the receiver) experienced during the sending phase. Everyone is completely blind as to what the target is. Therefore, if no psi is involved, the receiver has exactly as good a chance of mentioning images for one of the decoys as he does for the target: 25%. Does this not make sense to you?
To address this, a scale could be introduced to get away from the yes/no situation - say, 1 to 10. But the question still remains: By what criteria are you going to score any attempt? Note that RV-type free-response results are rarely drawings or paintings - they are usually very rambling verbal descriptions. So matching these to actual images where possible and assigning a score is even more subjective - it's a bit like marking a student's essay, where the scorer has some leeway to award marks for convincing arguments, etc.

Incidentally, this is the method that PEAR used - and they rapidly discovered that the judging process itself was the hinge-point in determining the propensity for the RV effects to appear. In the PEAR paper that I referenced above, they described how they went to the trouble of trying to remove the subjectivity of the scoring methods (they even rescored most of the responses), in an effort to ensure that there were no influences from that direction. Alas, as they did so, the RV effects disappeared, to their disappointment.
This isn't PEAR, this isn't remote viewing, you have no idea of what you're talking about. You are grasping at imaginary straws.
Summary: As soon as the targets are complex images, subjectivity in the scoring process plays a major part in determining the psi results. And as long as the results are due to subjective methods, they will not be respected.

[edit: dyslexia backlash]
I'll say it again, if a "judge" is blind to the correct target, since there are 4 targets displayed at the end of a session, he has a 25% chance of choosing correctly. Images which a receiver mentions have a 25% chance of correlating in some way with the actual target. This is not difficult to understand. The only reason you can't comprehend it is because (in my opinion) you are too afraid to.

amherst
 
It can't be so it isn't! It's impossible so it must be scoffed at!

These experiments show absolutely nothing and are filled with fraud and horrible controls! A frickin monkey could design a better experiment! The results are not significant at all! It's all due to chance because of poor controls and obvious sensory leakage!

Q: !Xx+-Rational-+xX! how do you know all of this!?
A: Because I'm a skeptic and the burden of proof is not on me!

It's our job to change the thinking of non-materialists! Not being a materialist is a denial of critical and rational thinking.

Like I have said, believers' weak, insignificant results won't help them one bit once their material minds become extinct! That is what science tells us!
 
amherst said:

I presented four articles in my original post, one of which was (as I have explained many times) a nontechnical piece written for an encyclopedia. If you had paid attention to my posts you would have easily known:
1. Why there wasn't a detailed discussion of the statistics in the article.
2. Where you could find a detailed discussion of the statistics.
Yet instead of reading the Psychological Bulletin article you incredibly write:
"It is not sufficient in any research paper to just say 'Across these studies, receivers achieved an average hit rate of about 35 percent.' That is sloppy reporting; it is crucial to any study like this that the number of targets tested per trial be discussed, along with the number of trials run and the different parameters for this alleged 35% hit rate."
My, you are touchy, but I only read one paper and was responding to it; it is my intention to respond to each in turn. I am sorry that you appear so defensive.

Maybe you should find another forum for discussion, I have been polite to you.

In the encyclopedia article you were referring to, Bem writes:
"Altogether, 100 men and 140 women participated as receivers in 354 sessions across 11 separate experiments during Honorton's autoganzfeld research program. The experiments confirmed the results of the earlier studies, obtaining virtually the same hit rate: about 35 percent. It was also found that hits were significantly more likely to occur on dynamic targets than on static targets. These studies were published by Honorton and his colleagues in the Journal of Parapsychology in 1990, and the complete history of ganzfeld research was summarized by Bem and Honorton in the January 1994 issue of the Psychological Bulletin of the American Psychological Association (Bem & Honorton, 1994; Honorton et al., 1990)."

Yet you have the gall to say:
Again, no data to look at to verify the statement that it is "35%" or that it is meaningful.

And again you are very poetic and overgeneralizing in your statements. In the paper I responded to, the data are not presented. I could have pasted the same clip you did; there is still no data, even in table form, to back the assertion that there is a 35% hit rate.


Gall, huh? You are a touchy one; perhaps you have me confused with someone else who has been rude to you here.
Video is more successful than static on targets.
This implies that you think Bem doesn't have the evidence to justify what he is saying. It's basically accusing him of being dishonest, and since you didn't even bother to read the scientific article, I find your statements to be, again, incredible.

I don't know what basis you have for your rather exaggerated statements. I treat all papers the same way, be they ones posted here or elsewhere; there are certain things I look for in research papers. Gee, I didn't even mention the lack of a literature review, now did I?

I guess that you don't review or read many research papers, from the tone you take. Maybe you should read some more of the stuff that gets published in general; I apply the same standard to all.

You seem to have a problem, and that is really too bad because it makes it difficult to engage in dialogue.

David, after the ganzfeld sending phase is finished, a receiver is presented with four randomly assembled pictures on a computer screen.

Are you discussing the autoganzfeld or the original-style ganzfeld? I believe my critique applies only to the ganzfeld where a judge matches the 'receiver' statements to the pictures. I haven't read enough about the autoganzfeld yet to decide.

The only way a receiver's spurious "free association" of certain pictures could potentially bias the hit rate would be if those pictures were targets more often than decoys.

Actually, if the match rate is different across different pictures, then there is the potential for that match rate to influence the expected 25% hit rate. If all pictures rate the same on the 'free association' match, then the effect would be negated.

This possibility has been addressed and shown to be not the case by Bem in his Response to Hyman:

Content-Related Response Bias

"Because the adequacy of target randomization cannot be statistically assessed owing to the low expected frequencies, the possibility remains open that an unequal distribution of targets could interact with receivers' content preferences to produce artifactually high hit rates. As we reported in our article, Honorton and I encountered this problem in an autoganzfeld study that used a single judging set for all sessions (Study 302), a problem we dealt with in two ways. To respond to Hyman's concerns, I have now performed the same two analyses on the remainder of the database. Both treat the four-clip judging set as the unit of analysis and neither requires the assumption that the null baseline is fixed at 25% or at any other particular value.

In the first analysis, the actual target frequencies observed are used in conjunction with receivers' actual judgments to derive a new, empirical baseline for each judging set. In particular, I multiplied the proportion of times each clip in a set was the target by the proportion of times that a receiver rated it as the target. This product represents the probability that a receiver would score a hit on that target if there were no psi effect. The sum of these products across the four clips in the set thus constitutes the empirical null baseline for that set. Next, I computed Cohen's measure of effect size (h) on the difference between the overall hit rate observed within that set and this empirical baseline. For purposes of comparison, I then reconverted Cohen's h back to its equivalent hit rate for a uniformly distributed judging set, in which the null baseline would, in fact, be 25%.

Across the 40 sets, the mean unadjusted hit rate was 31.5%, significantly higher than 25%, one-sample t(39) = 2.44, p = .01, one-tailed. The new, bias-adjusted hit rate was virtually identical (30.7%), t(39) = 2.37, p = .01, tdiff (39) = 0.85, p = .40, indicating that unequal target frequencies were not significantly inflating the hit rate.

The second analysis treats each film clip as its own control by comparing the proportion of times it was rated as the target when it actually was the target and the proportion of times it was rated as the target when it was one of the decoys. This procedure automatically cancels out any content-related target preferences that receivers (or experimenters) might have. First, I calculated these two proportions for every clip and then averaged them across the four clips within each judging set. The results show that across the 40 judging sets, clips were rated as targets significantly more frequently when they were targets than when they were decoys: 29% vs. 22%, paired t(39) = 2.03, p = .025, one-tailed. Both of these analyses indicate that the observed psi effect cannot be attributed to the conjunction of unequal target distributions and content-related response biases."
http://comp9.psych.cornell.edu/dbem/response_to_hyman.html
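To make Bem's first analysis concrete, here is a minimal sketch of the empirical-baseline calculation for a single judging set. The two frequency lists below are invented for illustration; the actual analysis used the frequencies observed across all 40 autoganzfeld judging sets.

```python
# Sketch of Bem's first analysis for one judging set (invented numbers).

# Proportion of sessions in which each of the four clips was the target:
p_target = [0.30, 0.25, 0.25, 0.20]
# Proportion of sessions in which receivers rated each clip as the target:
p_rated = [0.35, 0.25, 0.20, 0.20]

# Empirical null baseline: the probability of a hit with no psi effect,
# given these (possibly unequal) target and rating frequencies.
null_baseline = sum(t * r for t, r in zip(p_target, p_rated))
print(f"empirical null baseline: {null_baseline:.4f}")  # 0.2575 vs. the nominal 0.25
```

With perfectly uniform target and rating frequencies (all 0.25), the sum reduces to 4 × 0.25 × 0.25 = 0.25, the nominal chance baseline.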



amherst

That last bit will take a while to digest, I thank you for pasting it as I haven't gotten to that article.
 
Posted by Amherst

I'll say it again, if a "judge" is blind to the correct target, since there are 4 targets displayed at the end of a session, he has a 25% chance of choosing correctly. Images which a receiver mentions have a 25% chance of correlating in some way with the actual target. This is not difficult to understand. The only reason you can't comprehend it is because (in my opinion) you are too afraid to.

And that is exactly the kind of thinking that I am critiquing! If some pictures have a better chance of matching the receiver's statements, then the chances are not 25% at all; they will vary from picture to picture and from receiver statement to receiver statement.


receiver mentions have a 25% chance of correlating in some way with the actual target.

I am stating that some images will have a higher random match rate than others; it is not as simple as saying that there are four pictures and a twenty-five percent chance. Some pictures will randomly match the receiver's statements at a much higher rate than 25%, and that changes the chances of a 'hit'.

That is why the level of random matching would have to be studied and controlled for.

A much easier method would be to arbitrarily assign which words are matches to each picture, and then decide how many matching words are required for a hit.
 
Dancing David said:


And that is exactly the kind of thinking that I am critiquing! If some pictures have a better chance of matching the receiver's statements, then the chances are not 25% at all; they will vary from picture to picture and from receiver statement to receiver statement.


receiver mentions have a 25% chance of correlating in some way with the actual target.

I am stating that some images will have a higher random match rate than others; it is not as simple as saying that there are four pictures and a twenty-five percent chance. Some pictures will randomly match the receiver's statements at a much higher rate than 25%, and that changes the chances of a 'hit'.

That is why the level of random matching would have to be studied and controlled for.

A much easier method would be to arbitrarily assign which words are matches to each picture, and then decide how many matching words are required for a hit.
Let me try to explain this as clearly as possible:

Let's say psi doesn't exist. Let's also say a receiver in the ganzfeld is going to mention the color red in his trial, and that during the sending phase all he sees are red images of various sorts. At the end of the process when this receiver is presented with the four targets from which he is going to choose, a red fire engine appears as one of the pictures. Now if the receiver goes by his non-psi imagery, he is going to pick the fire engine, yet the fire engine is more likely to be a decoy than the target. You understand?

To go further, let's say the receiver is presented with TWO red images, one of a fire engine and one of a red balloon. Again, he isn't using psi and he has no real clue as to whether one of these is the correct target or not. Now while it is true that there is a fifty percent chance that one of these pictures is the correct target, clearly only one of them can be, and therefore the receiver will have to pick between the two. Divide 50% in half and what do you get? 25% again.

This is also obviously going to be the same if 3 or all 4 pictures contain the content of red. Of course this would also apply to any other content receivers might mention in their reports.

I hope you now see that, any way you cut it, the receiver only has a 25% chance of being correct as long as the targets are randomly selected.
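This argument can be checked with a quick Monte Carlo sketch (illustrative only; the bias weights below are invented, not taken from any ganzfeld study). A non-psi "receiver" is heavily biased toward one picture, yet because the target is drawn uniformly at random from the four pictures, the long-run hit rate stays at 25%:

```python
import random

random.seed(0)
bias = [0.70, 0.10, 0.10, 0.10]  # receiver's preference for each picture

trials = 100_000
hits = 0
for _ in range(trials):
    target = random.randrange(4)                        # uniform random target
    choice = random.choices(range(4), weights=bias)[0]  # biased, psi-free pick
    hits += (choice == target)

print(f"hit rate: {hits / trials:.3f}")  # close to 0.25 despite the bias
```

The simulation only holds if target selection is genuinely uniform and independent of the receiver's preferences, which is exactly what the randomization controls are meant to guarantee.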


amherst
 
Amherst said:
I hope you now see that, any way you cut it, the receiver only has a 25% chance of being correct as long as the targets are randomly selected.
And there are enough trials to eliminate receiver response bias.

~~ Paul
 
Paul C. Anagnostopoulos said:

And there are enough trials to eliminate receiver response bias.

~~ Paul
The only way receiver response bias could spuriously affect the hit rate would be if the pictures/clips which the receivers were biased towards were targets more frequently than decoys:

"Accordingly, Honorton et al. (1990) ran several large-scale control series to test the output of the random number generator. These control series confirmed that it was providing a uniform distribution of values through the full target range. Statistical tests that could legitimately be performed on the actual frequencies observed confirmed that targets were, on average, selected uniformly from among the four film clips within each judging set and that the four possible judging sequences were uniformly distributed across the sessions."
http://comp9.psych.cornell.edu/dbem/response_to_hyman.html
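A uniformity check of the kind those control series describe can be sketched as a chi-square goodness-of-fit test. The draws below are simulated with Python's own generator purely for illustration; the real series tested the output of the autoganzfeld's random number generator.

```python
from collections import Counter
import random

random.seed(1)
draws = [random.randrange(4) for _ in range(1000)]  # 1000 simulated target selections

counts = Counter(draws)
expected = len(draws) / 4  # 250 per clip under uniform selection
chi2 = sum((counts[i] - expected) ** 2 / expected for i in range(4))

# With 3 degrees of freedom, a statistic below ~7.81 is consistent
# with uniform selection at the 5% significance level.
print(f"chi-square statistic: {chi2:.2f}")
```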

amherst
 
amherst said:

Let me try to explain this as clearly as possible:

Let's say psi doesn't exist. Let's also say a receiver in the ganzfeld is going to mention the color red in his trial, and that during the sending phase all he sees are red images of various sorts. At the end of the process when this receiver is presented with the four targets from which he is going to choose, a red fire engine appears as one of the pictures. Now if the receiver goes by his non-psi imagery, he is going to pick the fire engine, yet the fire engine is more likely to be a decoy than the target. You understand?

To go further, let's say the receiver is presented with TWO red images, one of a fire engine and one of a red balloon. Again, he isn't using psi and he has no real clue as to whether one of these is the correct target or not. Now while it is true that there is a fifty percent chance that one of these pictures is the correct target, clearly only one of them can be, and therefore the receiver will have to pick between the two. Divide 50% in half and what do you get? 25% again.

This is also obviously going to be the same if 3 or all 4 pictures contain the content of red. Of course this would also apply to any other content receivers might mention in their reports.

I hope you now see that, any way you cut it, the receiver only has a 25% chance of being correct as long as the targets are randomly selected.


amherst
Not so obvious at all, which is why I am questioning it.

Whether or not psi exists, I feel that this is a phenomenon that should be controlled for, and for psi to be detected it has to be controlled for.

I suppose this is another one of those agree-to-disagree situations. The issue is that any statement issued by the receiver is going to have a chance of randomly matching any target or non-target picture.

I hope you now see that, any way you cut it, the receiver only has a 25% chance of being correct as long as the targets are randomly selected.
There is a random distribution of possible matches that could be anything other than 25%.

I am pointing out a possible confounding principle that negates this statement. The issue is not that the targets are randomly selected; the issue is that certain target photos are going to have a higher rate of matching a randomly chosen 'receiver list' than others, and that over trial runs that use small sample sizes this is going to skew the data above 25% or below 25%, and it has nothing whatsoever to do with the picture being randomly chosen.

And as I stated before, if the pictures are matched on this 'random receiver match', then there is no issue whatsoever.

As long as the pictures have different match rates against a randomly chosen receiver list, it does not matter at all that one out of four is randomly chosen to be the target. The method of randomly selecting one of the pictures does not control for the possible confounding principle.

It becomes even worse if two of the pictures have a random chance of being a better match than the target.

The best way to control for this would be to:
1. Choose which words will match a picture when it is in the target position.
2. Create sets of four pictures which have zero or one matching words as target hits.
3. Decide what level of matching words is needed for a target to be considered a hit.

I am sorry, Amherst, but having the pictures randomly selected from a set of four will not control for this at all, as long as the pictures in the set are not matched for 'random receiver matching'.

The goal in science is to control for any possible confounding influences, and randomly choosing a picture from four will not control this effect.

Let us say that there are four pictures with the unlikely (I chose this to demonstrate the confounding principle) distribution of (5%)(10%)(15%)(75%). There is a one-in-four chance of a given picture being chosen as the target, so by the mistaken notion that you average the chances, this makes the average slightly higher than 25%. Sounds good so far.

Except that when one of the first three is the target it will throw the data lower than 25%, and in the fourth case it will throw the data high. And you can't just average them and say it "will work out in the long run"; even a large number of runs could contain a randomly chosen run where the (75%) picture comes up as the target more often than average and skews the data high. Or the obverse.

Sorry, the fact that the pictures are chosen randomly doesn't average it out, especially if all the pictures have a high chance of matching the random list.
 
On Bem's attempt to find a content related bias:
In particular, I multiplied the proportion of times each clip in a set was the target by the proportion of times that a receiver rated it as the target. This product represents the probability that a receiver would score a hit on that target if there were no psi effect. The sum of these products across the four clips in the set thus constitutes the empirical null baseline for that set.

I disagree with this: the (%target)*(%chosen as target) does not give you the random match rate. That would be the rate at which a receiver statement matches a given target. To get the random match rate you would take any 'receiver statement' and compare it to each picture. Then you would do this for a suitably large sample of receiver statements versus each picture. Then you would get the percentage of 'random match' for each picture.

I am not sure what Bem calculated, but I don't see that as the match rate if there was no psi effect. Would that not be (%target)-(%chosen as target) in this scheme, stated as a negative rate? You can empirically find the 'random match' to 'receiver statement' as a more accurate measure.

The null baseline for the clips in the set would be found more easily as well.
Guess I just ain't that smart.
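For illustration, the empirical procedure proposed above - comparing a sample of psi-free receiver statements against every picture, not only the picture that happened to be the target - might be sketched like this (all keyword sets and statements here are invented):

```python
# Estimate each picture's "random match rate" from a sample of
# hypothetical receiver word lists generated with no psi involved.

pictures = {
    "fire engine": {"red", "wheels", "ladder", "truck"},
    "sunset":      {"red", "orange", "sky", "sun"},
    "forest":      {"green", "trees", "leaves"},
    "ocean":       {"blue", "water", "waves"},
}

statements = [          # hypothetical receiver word lists
    {"red", "round", "bright"},
    {"water", "cold"},
    {"trees", "wind"},
    {"red", "sky"},
]

# Fraction of statements sharing at least one word with each picture:
rates = {name: sum(bool(words & s) for s in statements) / len(statements)
         for name, words in pictures.items()}
for name, rate in rates.items():
    print(f"{name}: {rate:.2f}")  # pictures with common content match more often
```

In this toy sample the red pictures match half the statements while the others match a quarter, which is the kind of per-picture disparity the proposed control would measure.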
 
Amherst said:
The only way receiver response bias could spuriously affect the hit rate would be if the pictures/clips which the receiver's were biased towards were targets more frequently than decoys:
Correct, which is why you have to run only one trial per receiver, or sufficient trials to eliminate the spurious effect.

~~ Paul
 
I have to wonder if there was a control group established. For example, did they use "non-psi" people in their experiments as well? Are they assuming all people possess psi? Did they have the sender be the receiver and see if the judge picks the same picture? They assume a 25% hit chance if no psi is there, but without a control group, it is just an assumption.


I guess with junk-science, such questions just don't matter.
 
thaiboxerken said:

They assume a 25% hit chance if no psi is there, but without a control group, it is just an assumption.

It is what is expected by chance if there is no psi. Assumptions, in the form of hypotheses, are a part of science, Ken.
 
thaiboxerken said:
I have to wonder if there was a control group established. For example, did they use "non-psi" people in their experiments as well? Are they assuming all people possess psi? Did they have the sender be the receiver and see if the judge picks the same picture? They assume a 25% hit chance if no psi is there, but without a control group, it is just an assumption.


I guess with junk-science, such questions just don't matter.
"They assume a 25% hit chance if no psi is there, but without a control group, it is just an assumption."

Assumption? Ken, there are four targets. The receiver has a one in four chance of choosing correctly by chance. What is one fourth of 100%? 25%. If there were five targets, the chance hit rate would be 20%. Two targets? 50%. And so on and so on. This is grade school math.

amherst
 
When you use math as your control group, you aren't factoring in other things (humanity) that may change the properties of the test. You can place 5 targets in front of a shooter, but people may aim for the middle one more often. It's not a 20% chance.
 
thaiboxerken said:
You can place 5 targets in front of a shooter, but people may aim for the middle one more often. It's not a 20% chance.

This is judging bias. It's a known artifact of psi experiments for a judge to tend to choose the first option out of the choices. It's certainly something to take into account.

Response bias, as I said before, is useful only as a post-hoc analysis. If you have completed a ganzfeld experiment and it later transpires that the choice of targets largely consisted of people, water and countrysides: images that commonly pop into the mind under these circumstances, then you have a reasonable case for suggesting response bias has inflated the hit rate. (and this remains the same, even if you have each person doing one trial each)

This happened in Study 302, and Bem acknowledged that. So while there appeared to be a 25% hit rate by chance, the way things played out, 34% was the expected hit rate by chance. Another example is the experiments of Willin into music and the ganzfeld. Although the overall result was at chance, Willin makes the observation:

“The music that seemed to lend itself to greater contact being made between the two participants telepathically was either from the Romantic period or emotionally arousing music, whereas the worst scores were obtained from more intellectually stimulating music. For instance, the aforementioned Berlioz track and the opening of the Symphony No. 5 by Shostakovich were positively identified, but Kontakte 1 and Kontakte 2 by Stockhausen were not discovered.”

Whether this means the Romantic music had a better hit rate or whether, when the subjects scored a hit, it tended to be Romantic music as the target, I don’t know. But it’s a neat demonstration of response bias. People described the type of music that they automatically thought of when they thought of music: typical classical music. They didn’t think of the more modern, minimalist works. It simply never occurred to them to talk about Shostakovich or what have you.
 
thaiboxerken said:
When you use math as your control group, you aren't factoring in other things (humanity) that may change the properties of the test. You can place 5 targets in front of a shooter, but people may aim for the middle one more often. It's not a 20% chance.
If the process is truly random and you have five targets, it doesn't matter if the receiver picks the middle one more than any of the others. The correct target is always going to have a 20% chance of landing there. Not difficult to understand, Ken.

amherst
 
