
The Ganzfeld Experiments

amherst said:
These recent arguments are so patently absurd and demonstrate such a staggering lack of familiarity with the ganzfeld that I must admit, I'm quite disappointed. Why haven't any of you read any of the papers yet? What are you afraid of? It will take you a maximum of fifteen minutes to read the nontechnical article Bem wrote. This small sacrifice of your time would go a long way toward making our discussion worthwhile.

Paul writes:
"Personally, I don't understand why subjective judging isn't discarded in favor of the receiver simply selecting one of four possible targets"

From the original Psychological Bulletin article:
"The sender is sequestered in a separate acoustically isolated room, and a visual stimulus (art print, photograph, or brief videotaped sequence) is randomly selected from a large pool of such stimuli to serve as the target for the session. While the sender concentrates on the target, the receiver provides a continuous verbal report of his or her ongoing imagery and mentation, usually for about 30 minutes. At the completion of the ganzfeld period, the receiver is presented with several stimuli (usually four) and, without knowing which stimulus was the target, is asked to rate the degree to which each matches the imagery and mentation experienced during the ganzfeld period. If the receiver assigns the highest rating to the target stimulus, it is scored as a "hit." Thus, if the experiment uses judging sets containing four stimuli (the target and three decoys or control stimuli), the hit rate expected by chance is .25. The ratings can also be analyzed in other ways; for example, they can be converted to ranks or standardized scores within each set and analyzed parametrically across sessions. And, as with the dream studies, the similarity ratings can also be made by outside judges using transcripts of the receiver's mentation report"


Everyone, you do realize that whoever ranks the targets according to the degree to which they matched the receiver's imagery during the sending phase, whether it be the receiver himself (as is usually the case) or an outside judge, is completely blind to which target is correct. You do understand that, don't you?


Read the articles.

amherst

Doesn't that paragraph just reaffirm that the Ganzfeld is not a true free-response test (and is instead a forced-choice "multiple choice" test)?

Also, what kind of conversation are you looking to have, exactly? It is unlikely you are going to get a validation of the test here, as the limited information in the reports most likely gives us only a vague idea of the actual test procedure and data, and nothing of any real value for discussing the test's accuracy and validity.
 
DaveW quoting Hyman:

quote:
--------------------------------------------------------------------------------
As far as I can tell, I was the first person to do a meta-analysis on parapsychological data. I did a meta-analysis of the original ganzfeld experiments as part of my critique of those experiments. My analysis demonstrated that certain flaws, especially quality of randomization, did correlate with outcome. Successful outcomes correlated with inadequate methodology. In his reply to my critique, Charles Honorton did his own meta-analysis of the same data. He too scored for flaws, but he devised scoring schemes different from mine. In his analysis, his quality ratings did not correlate with outcome. This came about because, in part, Honorton found more flaws in unsuccessful experiments than I did. On the other hand, I found more flaws in successful experiments than Honorton did. Presumably, both Honorton and I believed we were rating quality in an objective and unbiased way. Yet, both of us ended up with results that matched our preconceptions.
--------------------------------------------------------------------------------


Did you miss amherst's post before? I paste in below:

In his essay, Rhetoric Over Substance: The Impoverished State of Skepticism, Charles Honorton wrote:

"The next line of criticism concerned the effects of procedural flaws on the study outcomes. In our meta-analysis of the ganzfeld studies, Hyman and I independently coded each study's procedures with respect to potential flaws involving sensory cues, randomization method, security, and so on. Here Hyman and I did not agree: my analysis showed no significant relationship between these variables and study success, while Hyman claimed that some of the flaw variables, such as the type of randomization, did correlate with results. In his initial assessment, Hyman claimed there was a nearly perfect linear correlation between the number of flaws in the study and its success (Hyman, 1982); this analysis contained a large number of errors that Hyman later attributed to typing errors (communication to Honorton, November 29, 1982). Later, Hyman (1985) claimed a significant relationship between study flaws and outcomes based on a complex multivariate analysis. However, an independent psychological statistician described the analysis as "meaningless" (Saunders, 1985). Finally, Hyman agreed that "the present data base does not support any firm conclusion about the relationship between study flaws and study outcome" (Hyman & Honorton, 1986, p. 353). Were our differences in flaw assessment simply reflections of our respective biases? Perhaps, but independent examination of the issue by non-parapsychologists has unanimously failed to support Hyman's conclusions (Atkinson, Atkinson, Smith & Bem, 1990; Harris & Rosenthal, 1988a, 1988b; Saunders, 1985; Utts, 1991). In an independent analysis using Hyman's own flaw codings, two behavioral science methodologists concluded, "Our analysis of the effects of flaws on study outcomes lends no support to the hypothesis that Ganzfeld research results are a significant function of the set of flaw variables" (Harris & Rosenthal, 1988b, p. 3)."
 
I've posted this in a previous thread but feel that since most of you aren't reading the articles I've listed, placing it here might give you an indication of how seriously the ganzfeld should be taken:

Daryl Bem, a psychologist from Cornell University who has also held positions at Harvard and Stanford, wrote in a 1993 commemorative issue of the Journal of Parapsychology that:

"Although I was already familiar with the ganzfeld procedure, it was Chuck's detailed data based response to Hyman's critique that persuaded me to relinquish a large measure of my previous skepticism and to seriously entertain the possibility that the psi effect was genuine. Chuck's rhetorical skills were considerable, but it was his ability to get the data to speak for themselves that carried the argument so forcibly."

Ray Hyman himself has commented:

"Honorton's experiments have produced intriguing results. If independent laboratories can produce similar results with the same relationships and with the same attention to rigorous methodology, then parapsychology may indeed have finally captured its elusive quarry." (1991, p. 392)

Harvard psychologist Robert Rosenthal and his co-author Monica Harris, in their National Research Council report (commissioned by the Army to evaluate psi experiments), wrote that:

"The situation for the ganzfeld domain is reasonably clear. We feel it would be implausible to entertain the null [hypothesis] given the combined [probability] from these 28 studies....When the accuracy rate expected under the null [hypothesis] is 1/4, we estimate the obtained accuracy rate to be about 1/3."

In the Skeptical Inquirer, Vol. 17, Spring 1993, pp. 306-308, Susan Blackmore had this to say about Honorton's work:

"Over the next few years Honorton and his team at Princeton worked with their system and in 1990 published the results of 11 experiments with 241 volunteer subjects and 355 ganzfeld sessions (Honorton et al. 1990). I can only imagine the amount of time and work involved in this from my own experience with a simple ganzfeld experiment with just 20 trials. The results of these automated studies were staggeringly significant. My own impression from reading the paper many times was that the experiments were very well designed and the results certainly not due to chance. If they were due to something other than psi it was not obvious what it was. In other words, these experiments stood out from all the mass of failed, barely significant, or obviously flawed studies."

amherst
 
Poor Charles was within days of publication when he died. I think it's worth noting in this thread the tremendous work and commitment Charles Honorton gave toward the advancement of science. His spirit does indeed live on in many ways. It seems, though, that the time for his full recognition is yet to come.
 
Interesting Ian said:
DaveW quoting Hyman:

Did you miss amherst's post before? I paste in below:

Nope. They disagreed over the analysis. That is what it says. Then it goes on to say "non-parapsychologists" (which, interestingly, includes several believers!) disagreed with Hyman, too. I'm still unimpressed.
 
Lucianarchy said:
Poor Charles was within days of publication when he died. I think it's worth noting in this thread the tremendous work and commitment Charles Honerton gave toward the advancement of science. His spirit does indeed live on in many ways. It seems though that the time for his full recognition, is yet to come.

If you want to honor a man, at least learn how to spell his name.
 
Ian said:
That does happen sometimes, doesn't it? Sometimes it's the receiver who chooses, sometimes judges.

Are you saying it should always be the receiver who chooses and never the judges? Are there any statistically significant differences when the receiver chooses as compared to judges?
I think it should always be the receiver. However, as I said above, this subjective judging thing is complicated. I don't know how it might affect the results, which is why I said we have to discuss a specific protocol.

Your gut feeling that it doesn't matter, however, isn't worth a damn.

~~ Paul
 
The problem with talking about the ganzfeld database is that for the most part, you’re simply pitching a quote from one esteemed laureate against another. This is pretty dull.

As I shall now demonstrate.

Re Bem and Honorton’s PRL database, Susan Blackmore said "I have come to the conclusion that Honorton has done what the skeptics have asked, that he has produced results that cannot be due to any obvious experimental flaw. He has pushed the skeptics like myself into the position of having to say it is either some extraordinary flaw which nobody has thought of, or it is some kind of fraud--or it is ESP." but also mentioned that the earlier trials were flawed, saying “By failing to mention this, Bem and Honorton imply that these are reliable research."

Two problems arise with the PRL database. One is that the highest effect sizes happen in the shortest trials, "Of the eleven ganzfeld studies, smaller samples displayed larger hit rates than larger samples. If the effect is real, this is the opposite of what you’d expect." says Lee D. Ross, psychology professor at Stanford.

And “Early on in the experiments, there was found to be some faulty wiring in the receiver’s headset. This allowed some of the information from the sender (who was allowed to vocalize the images in order to help concentration) to be heard by the receiver. Although Honorton, after fixing the problem, maintained that the flaw was not perceptible, even subliminally, to the receiver, others assert that the possibility of contamination requires that all data gathered prior to the discovery of the problem be discarded. Without that data, the results of Honorton’s experiments are no longer statistically significant (McCrone 31).”
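On the first problem: under a chance-only null, small studies naturally produce more scattered hit rates than large ones, so the most extreme hit rates tend to come from the smallest studies even when nothing real is going on. A quick chance-only simulation makes the point (illustrative only, not modelled on the PRL data):

```python
import random
import statistics

random.seed(42)

def sim_hit_rates(n_sessions, n_studies=1000, p=0.25):
    """Hit rates of simulated chance-only studies of a given size."""
    return [sum(random.random() < p for _ in range(n_sessions)) / n_sessions
            for _ in range(n_studies)]

small = sim_hit_rates(10)    # studies with 10 sessions each
large = sim_hit_rates(100)   # studies with 100 sessions each

# Both cluster around the 25% chance rate, but the small studies
# scatter far more widely, so extreme hit rates concentrate there.
print(statistics.pstdev(small), statistics.pstdev(large))
```

That is why hit rates that shrink as samples grow look more like sampling noise than like a stable effect: a genuine effect should show up at least as clearly in the larger studies.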

So I’m putting this together on Word, and I’ll admit, it’s meagre stuff. Not because of the quality of the argument, but because I’m just cutting and pasting someone else’s words. As is amherst. Talking about the ganzfeld database is like watching birds flying high above your head: it’s too distant. It’s a matter of waiting for such and such a person to complete an experiment and then talking about it after the event. That’s no way to do science. Too passive, too reactive.

(By the way, amherst, you’re right: neither SAIC nor PEAR specifically state they used the ganzfeld procedure! You learn something new every day.)
 
Oh, and independent judges tend to do better at picking out targets than the receivers. I don't know why.
 
Ersby said:
Two problems arise with the PRL database. One is that the highest effect sizes happen in the shortest trials, "Of the eleven ganzfeld studies, smaller samples displayed larger hit rates than larger samples. If the effect is real, this is the opposite of what you’d expect." says Lee D. Ross, psychology professor at Stanford.

Er... why? Why should one expect this? I'd expect the opposite.
 
Ersby said:
Oh, and independent judges tend to do better at picking out targets than the receivers. I don't know why.

Because they might tend to only go by what the receiver has actually said.
 
Paul C. Anagnostopoulos said:
Or because involving a third person just opens another avenue for leaks.

~~ Paul

What avenue? No; if the receiver selects what she thinks is the actual target, she may mistake a psychological gravitation towards a certain target for a parapsychological one, so to speak.

So I'm in disagreement with you. I think it is much better if the judges choose the target going by what the receiver has said.
 
Oh cripes Ian, I don't know what avenue. The history of psi is littered with people asking "what avenue" and then discovering one, such as the one I mentioned above. People just look like fools when they throw their hands up in the air and ask "What avenue? I just know I've controlled for all possible sensory leakage."

According to your objection, all psi experiments should involve judges. Perhaps so.

~~ Paul
 
Paul C. Anagnostopoulos said:
Oh cripes Ian, I don't know what avenue. The history of psi is littered with people asking "what avenue" and then discovering one, such as the one I mentioned above. People just look like fools when they throw their hands up in the air and ask "What avenue? I just know I've controlled for all possible sensory leakage."

According to your objection, all psi experiments should involve judges. Perhaps so.

~~ Paul

We can always speculate about some possible avenue for sensory leakage. But the more important task is to try and isolate any psi effect. Maybe there are more avenues for sensory leakage. But the experiment needs to be done right.

OK, you say: "The history of psi is littered with people asking "what avenue" and then discovering one".

I was not aware of this. Where did you get this information from? Give me, say, three examples where it was established that a positive result obtained in parapsychology was actually due to something else.
 
Interesting Ian said:


Because they might tend to only go by what the receiver has actually said.

I agree with this. I think that a careful judging protocol is standard in psychological research, such as counting acts of violence on TV.

I think the main goal would be to make the images used for sending as simple as possible; that way the protocol for judging a hit would be clearer. That is one of the good things about the Rhine experiments.
 
Interesting Ian said:
Give me say 3 examples where it was established that a positive result obtained in parapsychology was actually due to something else.

  1. Ghosts. Turned out to be cats, drafts, creaky floorboards, air in pipes, rats in the attic.
  2. Mediumship. Turned out to be cold-reading, warm-reading, hot-reading, tabloid psychology.
  3. Dowsing. Turned out to be ideo-motor effect.

Should I continue?
 
