
The Ganzfeld Experiments

Pray tell, what ARE the ganzfeld experiments if they are not remote viewing?
 
Zep said:
Pray tell, what ARE the ganzfeld experiments if they are not remote viewing?
If you'd bothered to read any of the articles I listed in my original post then you'd understand the vast differences in methodology between the ganzfeld and the PEAR remote viewing work. I strongly suggest that you and anyone else interested in this thread at least read the first article. The paper is non-technical and nicely explains why the ganzfeld evidence is so compelling. Further, it was written for and published in The Encyclopedia of the Paranormal, a skeptical Prometheus Books title.

amherst
 
amherst said:



Are you claiming that a subject sometimes knew what the nature of the target would be before a session took place? If so, where did you get this information?

Radin carried out the meta-analysis.

amherst

SAIC and PEAR both used protocols in which the nature of the target was known to the receiver. In PEAR's case, the viewer knew it would be a physical location. In the case of SAIC, it was a pool of photographs from National Geographic.

I am unaware of Radin's meta-analysis into the ganzfeld. What was its title?
 
Don't believers ever get tired of bringing up the same BS experiments time and again?
 
Ersby said:


SAIC and PEAR both used protocols in which the nature of the target was known to the receiver. In PEAR's case, the viewer knew it would be a physical location. In the case of SAIC, it was a pool of photographs from National Geographic.

I am unaware of Radin's meta-analysis into the ganzfeld. What was its title?
1. This thread isn't about the remote viewing done at SAIC or PEAR.

2. Radin's meta-analysis was published in his book The Conscious Universe.

Originally posted by thaiboxerken


Don't believers ever get tired of bringing up the same BS experiments time and again?
Why don't you try to contribute something constructive and explain why you think the experiments are BS?

amherst
 
amherst said:

The paper is non-technical and nicely explains why the ganzfeld evidence is so compelling.

I still don't see it as compelling. Do you have any comment on how this is, in your estimation, not a forced choice test when it is, in the end, reduced to a choice between 4 targets? (Even Bem noted several times that the expected chance result is 25%!) Wouldn't a true free-response setup have approximately a 0% chance?
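
For readers who want to see what that 25% baseline means in practice, here is a minimal sketch (in Python, written for this thread rather than taken from any of the cited papers; the 30-hits-in-100-trials figure is a made-up example, not a result from any actual ganzfeld study) of how a run of four-choice trials would be tested against chance:

Code:
--------------------------------------------------------------------------------
from math import comb

def binomial_p_value(hits, trials, p_chance=0.25):
    """One-tailed probability of scoring at least `hits` successes in
    `trials` four-choice trials when each trial has a 25% chance of a hit."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(hits, trials + 1)
    )

# Hypothetical example: 30 hits in 100 trials against the 25% baseline.
print(binomial_p_value(30, 100))  # roughly 0.15, i.e. not significant on its own
--------------------------------------------------------------------------------

A true free-response design, by contrast, has no fixed set of alternatives, so there is no clean p_chance to plug in, which is exactly why the four-choice judging step is used.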
 
amherst said:

Why don't you try to contribute something constructive and explain why you think the experiments are BS?

No, I refuse to do it over and over and over again. Why don't you just do a search in the forum? There have been several threads about these BS experiments.
 
thaiboxerken said:
No, I refuse to do it over and over and over again. Why don't you just do a search in the forum? There have been several threads about these BS experiments.

Hopefully, the new Official forum will be constructed in such a way that we won't have to go over the same issues again and again, with new threads rehashing things that were already discussed in other threads.
 
DaveW said:
I think I found where Radin got his "astronomically significant" quote, and the full context is, at best, hardly flattering: (from Skeptical Inquirer, March/April 1996)


quote:
--------------------------------------------------------------------------------
In the four major meta-analyses of previous parapsychological research, the pooled data sets produced astronomically significant results while the correlation between successful outcome and rated quality of the experiments was essentially zero.
--------------------------------------------------------------------------------



You can see the article here: http://www.csicop.org/si/9603/claims.html

So, it sounds to me like Ray Hyman says the results were spectacular, but the quality of the tests was horrible.

There's no correlation between the quality of the experiments and how successful the experiments were. So the experiments on average are equally successful regardless of the relative quality. This then is suggestive of a real effect, not vice versa as you suggest. :rolleyes:
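
The quality-versus-success claim is also easy to state concretely: give each study a quality rating and a hit rate, then look at the correlation between the two columns. A minimal sketch (Python; the study list below is invented purely for illustration, not data from Hyman's or Radin's analyses):

Code:
--------------------------------------------------------------------------------
def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented placeholder data: one methodological-quality score and one hit
# rate per study.  A real analysis would use the published ratings instead.
quality  = [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
hit_rate = [0.28, 0.33, 0.27, 0.32, 0.29, 0.31, 0.26, 0.34, 0.30, 0.28]

print(pearson_r(quality, hit_rate))  # close to zero for this made-up data
--------------------------------------------------------------------------------

A coefficient near zero is what the "no correlation between quality and outcome" claim amounts to; a strongly negative one would mean the hits come mainly from the weaker studies.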
 
DaveW said:
Personally, my biggest gripe with the experiments as described is that it seems the targets are something with a lot of information in them (movies, in some instances, or vivid, complex pictures), and that the receivers are allowed to ramble on during their answer. (It depends on which description of the test you look at; it seems that then, some judges rate the targets against the description, which seems like an even larger source of error by throwing another person's judgement in.) The combination of lots of information given by the receiver and the large amount of information given by the targets lends itself to lots of overlap and leeway on what could be scored a hit. Why not relatively simple targets and direct responses from the receivers (i.e., target 1 through 4, no rambling descriptions)?

How does this gripe of yours enable the targets to be chosen more than 25% of the time?? I really have no idea what you're talking about. The receiver cannot give any information if there is no anomalous cognition. It's as if you're assuming the reality of anomalous cognition, and the fact the receiver is allowed to ramble on about the accurate impressions he's receiving is somehow a cheat. Utterly preposterous!
 
Zep said:
FWIW, meta-analysis of crap data and experimental results does not make them any less crap results.

You might care to have YET ANOTHER look at PEAR's own meta-analysis of 25 years of their own RV (remote viewing) experiments, in which they admitted, after doing the mathematics correctly at last, that there was nothing significant found to support the contention that it existed.

We're not discussing PEAR. This was raised in another thread and you failed to respond to my questions. Please stick to the topic under debate.
 
Interesting Ian said:

The receiver cannot give any information if there is no anomalous cognition. It's as if you're assuming the reality of anomalous cognition, and the fact the receiver is allowed to ramble on about the accurate impressions he's receiving is somehow a cheat. Utterly preposterous!

Incorrect. Knowing nothing about the painting on your wall (assuming you had one), I could ramble off 10 or so descriptions and one would have a pretty decent chance of being close to some aspect of it, especially if it is some detailed or "busy" picture. Heck, it's not much different than cold reading.
 
Interesting Ian said:


There's no correlation between the quality of the experiments and how successful the experiments were. So the experiments on average are equally successful regardless of the relative quality. This then is suggestive of a real effect, not vice versa as you suggest. :rolleyes:

Then why does the rest of the article point out that the high-success-rate tests had low quality, and that only the low-quality tests had high success rates?
 
DaveW said:


Then why does the rest of the article point out that the high-success-rate tests had low quality, and that only the low-quality tests had high success rates?

Could you quote the relevant parts of the article which state this?
 
DaveW said:
Originally posted by Interesting Ian

The receiver cannot give any information if there is no anomalous cognition. It's as if you're assuming the reality of anomalous cognition, and the fact the receiver is allowed to ramble on about the accurate impressions he's receiving is somehow a cheat. Utterly preposterous!
--------------------------------------------------------------------------------



Incorrect. Knowing nothing about the painting on your wall (assuming you had one), I could ramble off 10 or so descriptions and one would have a pretty decent chance of being close to some aspect of it, especially if it is some detailed or "busy" picture. Heck, it's not much different than cold reading.

This applies to all 4 stimuli. If we consider any one of the three control stimuli, then likewise the more the receiver rambles on, the more chance there is of seeing similarities between the receiver's impressions and what that control stimulus depicts. Since the increase in the number of similarities seen will, on average, be the same for each of the 4 stimuli, the chance of choosing the correct target must still only be 25%.
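
Ian's point here can be checked with a quick simulation (a sketch written for this post, using an invented feature model rather than any real judging procedure): if the mentation has no actual connection to the target, then however long the ramble, all four stimuli accumulate matches at the same rate and the judged hit rate stays near 25%.

Code:
--------------------------------------------------------------------------------
import random

def simulate_hit_rate(trials=20000, n_statements=30, feature_pool=50):
    """Monte Carlo sketch: the 'receiver' emits n_statements random features,
    each of the four stimuli is a random bundle of features, and the stimulus
    sharing the most features with the mentation is judged the hit.  Because
    the mentation is unrelated to which stimulus is the target, the hit rate
    should hover around 25% no matter how long the ramble is."""
    hits = 0
    for _ in range(trials):
        mentation = set(random.sample(range(feature_pool), n_statements))
        stimuli = [set(random.sample(range(feature_pool), 10)) for _ in range(4)]
        target_index = 0  # all four stimuli are interchangeable here
        scores = [len(mentation & s) for s in stimuli]
        best = max(range(4), key=lambda i: (scores[i], random.random()))  # ties broken at random
        hits += (best == target_index)
    return hits / trials

print(simulate_hit_rate())  # about 0.25, and raising n_statements does not change that
--------------------------------------------------------------------------------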
 
Amherst said:
If you'd bothered to read any of the articles I listed in my original post then you'd understand the vast differences in methodology between the ganzfeld and the PEAR remote viewing work.
Perhaps, but it is still extraordinarily difficult to run a psi experiment that separates telepathy, precognition, remote viewing, and micro-PK. In a ganzfeld experiment where the receiver is shown the correct target after the trial, any psi effect could be telepathy, remote viewing, precognition, micro-PK, or a combination thereof.

~~ Paul
 
The issue of subjective judging in psi experiments has a long and annoying history. Personally, I don't understand why subjective judging isn't discarded in favor of the receiver simply selecting one of four possible targets.

The only way to understand whether subjective judging is introducing bias is to analyze a specific protocol. What might seem reasonable at first glance can turn out to be bad, such as the experiments where the transcripts contained hints about the order of the trials.

~~ Paul
 
Paul C. Anagnostopoulos said:

Perhaps, but it is still extraordinarily difficult to run a psi experiment that separates telepathy, precognition, remote viewing, and micro-PK. In a ganzfeld experiment where the receiver is shown the correct target after the trial, any psi effect could be telepathy, remote viewing, precognition, micro-PK, or a combination thereof.

~~ Paul

Even if this were so, so what? It's all the same stuff really.
 
Interesting Ian said:


We're not discussing PEAR. This was raised in another thread and you failed to respond to my questions. Please stick to the topic under debate.
Ian, and Amherst,

In both the PEAR and ganzfeld studies, the subjects were trying to "see" something remote from where they were located, under various conditions. The (apparent) ability to do this is known as "clairvoyance", or RV - remote viewing. In the PEAR experiments, the subjects tried to view a remote scene and select it as a target from a pool of available targets. In the ganzfeld experiments, the judges tried to select a target from a pool of targets by rating the subjects' waffle. In other words, in BOTH SERIES OF STUDIES, the judgements of successful matches were entirely subjective. I take it I don't have to explain what the word "subjective" means in this context, and what it means for the results of the experiments.

Granted, not all studies were EXACTLY like this, but when it comes down to it, the judgement process is effectively the same in both cases - subjective analysis by people who are involved in the experiment. And it is EXACTLY this subjective process that is so much a problem.

If you had bothered to read the criticisms of the PEAR studies regarding the subjectivity of the target-matching process, you would have seen that they apply equally to ANY similar studies where subjective analysis is required to "match" targets with subject selections. The results look great if you allow close-enough-is-good-enough type matches, but if you get more finicky about accuracy then the results approach chance. And in many cases amongst the paranormal community, the "match criteria" have been set so broadly as to allow just about ANYTHING to be a "reasonable match" to the actual target. In other words, the judgements were stretched to permit the results to be a success.

The thing is that PEAR knew this situation was not acceptable, and went to the bother of trying to reduce the target and selection data to numerical points in an attempt to obtain a fair objective match analysis of their data. They even aggregated the data from a number of similar studies in order to add "depth" to the results (i.e. more tests for a bigger set of results).

And the result they got when the subjectivity was reduced and finally removed was that there was NO correlation to be found at all. None. The subjects could have got just as good results by simply guessing (i.e. by chance). They even published this outcome on their own website - I suggest you do go read it, it's quite fascinating.

So my original commentary above stands: any studies at all into any species of remote viewing need to be designed such that the results can be reproduced objectively. And neither the PEAR experiments nor the ganzfeld experiments have met that criterion at all. That is, they are pretty much crap. And meta-analysis of crap results in crap squared.
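
Zep's close-enough-is-good-enough complaint can also be put in simulation form. The sketch below (a toy model written for this thread, not PEAR's or anyone else's actual scoring scheme) scores a transcript as a hit whenever any statement falls within a leniency window of any target feature, with no decoy set to compare against; even with purely random transcripts, loosening the criterion drives the apparent hit rate towards 100%.

Code:
--------------------------------------------------------------------------------
import random

def spurious_match_rate(n_statements, leniency, trials=20000,
                        feature_pool=100, target_features=10):
    """Toy model of lenient subjective judging: a transcript of random,
    target-unrelated statements is called a hit if ANY statement lands
    within `leniency` of ANY target feature.  No decoys, no ranking."""
    hits = 0
    for _ in range(trials):
        target = set(random.sample(range(feature_pool), target_features))
        statements = [random.randrange(feature_pool) for _ in range(n_statements)]
        hit = any(abs(s - t) <= leniency for s in statements for t in target)
        hits += hit
    return hits / trials

for leniency in (0, 2, 5):
    print(leniency, spurious_match_rate(n_statements=10, leniency=leniency))
# The apparent 'hit rate' rises steeply with leniency even though the
# transcripts contain no information about the targets at all.
--------------------------------------------------------------------------------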
 
Paul C. Anagnostopoulos said:
The issue of subjective judging in psi experiments has a long and annoying history. Personally, I don't understand why subjective judging isn't discarded in favor of the receiver simply selecting one of four possible targets.

The only way to understand whether subjective judging is introducing bias is to analyze a specific protocol. What might seem reasonable at first glance can turn out to be bad, such as the experiments where the transcripts contained hints about the order of the trials.

~~ Paul

The question that arises is: "Why introduce subjectivity in the first place?" The fact that this is done is highly suspicious and characteristic of all of this stuff. If there really were something, why not use Zener cards and forced choice? Simple answer really.

As I have pointed out many times before, these guys do not want clear and unequivocal experiments. This is why this area reeks to high heaven.
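
For comparison, the Zener-card, forced-choice design mentioned above leaves no room for judging at all: the guess either equals the card or it doesn't, and the chance baseline is exactly 1 in 5. A minimal sketch (assuming a computerised version for illustration; nothing here is taken from any actual protocol):

Code:
--------------------------------------------------------------------------------
import random

SYMBOLS = ["circle", "cross", "waves", "square", "star"]  # the five Zener symbols

def run_zener_session(n_trials=100):
    """Forced-choice sketch: one randomly drawn card per trial, one discrete
    guess, scored by strict equality.  Chance expectation is n_trials / 5."""
    hits = 0
    for _ in range(n_trials):
        card = random.choice(SYMBOLS)
        guess = random.choice(SYMBOLS)  # stand-in for the subject's call
        hits += (card == guess)
    return hits

print(run_zener_session())  # about 20 hits out of 100 by chance alone
--------------------------------------------------------------------------------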
 
