The Ganzfeld Experiments

posted by amherst
I don't understand why you have a problem with the meta-analysis having a standardness criterion.

UH, maybe this is the time to go read up on how meta-analysis works. For the meta-analysis to have meaning, there has to be a standard protocol and procedure amongst the studies of the meta-analysis; otherwise this is noted as a potential for giving strange results in the meta-analysis.
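
As a rough illustration of why, here is a toy Python sketch; every number in it is invented, not taken from any actual ganzfeld database. Pooling studies run under two different protocols gives a combined hit rate that describes neither protocol:

    # Two groups of studies run under different protocols, with
    # different underlying hit rates (all figures invented).
    standard = [(30, 100), (28, 90), (33, 110)]      # (hits, trials), ~30% hit rate
    nonstandard = [(20, 100), (18, 80), (26, 120)]   # (hits, trials), ~21% hit rate

    hits = sum(h for h, n in standard + nonstandard)
    trials = sum(n for h, n in standard + nonstandard)
    print(hits / trials)  # ~0.26: a pooled figure that matches neither group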
 
Dancing David said:


UH, maybe this is the time to go read up on how meta-analysis works. For the meta-analysis to have meaning, there has to be a standard protocol and procedure amongst the studies of the meta-analysis; otherwise this is noted as a potential for giving strange results in the meta-analysis.

That's what amherst is saying. My problem is that "standardness" was brought in after the first m-a results were known.
 
amherst,

But aside from the ganzfeld, ...
Well, I'll have to stop you right there. My point is "why doesn't this happen in the Ganzfeld". You can't start an answer to that question with the words "aside from the Ganzfeld...". Sorry.

But anyway, I do agree that there should be a concerted effort at finding gifted subjects and then running them through the ganzfeld.
Until this happens, I'll continue to wonder what exactly is being studied in Ganzfeld trials, other than statistics.
 
Ersby said:


That's what amherst is saying. My problem is that "standardness" was brought in after the first m-a results were known.

Not fair at all: you can't control after the fact. Controls must be in place at the start. You can't correct for 'standards' after the fact and include them in a meta-analysis.

But since amherst has ignored every valid point so far, he will just keep quoting the same mistakes.
 
Loki said:
I'll continue to wonder what exactly is being studied in Ganzfeld trials, other than statistics.

The physical setup that produced the statistics.
 
Dancing David said:

Not fair at all: you can't control after the fact. Controls must be in place at the start. You can't correct for 'standards' after the fact and include them in a meta-analysis.

Yes, I agree with that.
 
Ersby said:
I made a mistake! Two mistakes, actually. The first is that Bierman's series 4b IS in the meta-analysis. Silly me. Second is that the hit rate for those below 5.33 is actually 29.something. So they DO have a lower hit rate. By one percent!

If 5.33 didn't mean anything, then why did they mention it?

Putting 4 as the average implies that there is a value attached to the "standardness": that an experiment with standardness 3 is somehow "half as standard" as one with standardness 6, and that it is impossible to have an experiment less standard than 1. This is nonsense.
You are extremely confused. 5.33 is the average rating the students gave to each experiment. Let me spell it out for you: if you go and add each rating together and then divide that by the number of experiments in the analysis (40), you will get 5.33. This shows that the majority of studies included in the meta-analysis were in fact standard. The small portion of non-standard studies (9), which were non-significant (obviously because they were non-standard), is what brought down the overall hit rate.
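
A quick Python sketch of the calculation being described (the ratings below are made up, since the actual 40 values aren't quoted in this thread):

    # Hypothetical ratings on the 7-point scale, not the real data.
    ratings = [7, 7, 6.67, 6.67, 6, 5, 5, 4.33, 3, 2]

    mean_rating = sum(ratings) / len(ratings)     # the figure being called "the average"
    standard = [r for r in ratings if r > 4]      # above the midpoint of the 1-7 scale
    nonstandard = [r for r in ratings if r <= 4]
    print(mean_rating, len(standard), len(nonstandard))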

Your comment about the rating scale being nonsense is itself nonsense. In what seems to be an exceedingly desperate attempt to find fault with the meta-analysis, your previously nonsensical minor quibble has turned into a nonsensically major concern. And I have no idea how I can address it any further or more clearly than I already have.
You made a claim that the best results came from the most standard experiments, and you spoke about experiments scoring 7 and 6.67. But this is not the case. That's my problem.

This is the case. First off, as I've said before, everything rated above the midpoint of 4 is considered standard and therefore procedurally consistent with the PRL work. Further, if you actually go and look at the hit rates reported for the 17 studies rated 7's and 6.67's, you'll see that the majority are significantly above chance. Only 5 studies were rated as 5's, and only one of those had a significant hit rate. There were only three studies rated just above the midpoint of four; these were all highly successful and, again, they are also standard, so... so what?

amherst
 
Ersby said:


Oh dear.

And you were doing so well.

The thing about Pat Price being given "only the geographical coordinates" is untrue. As you can read here

http://www.gwu.edu/~nsarchiv/NSAEBB/NSAEBB54/st36.pdf

He was given the coordinates, shown the position on a map, and told it was a Russian military base. From the pdf you'll read just how inaccurate Pat Price was. The fact that this is still being touted as some kind of "evidence" depresses me.

Better to stick to the ganzfeld.
From the paper you referenced:
"The controlled session at SRI lasted for one hour (11 a.m. until noon). The rest of the session was conducted over the telephone with only the voice of the experimenter recorded on tape. -Price- commented that he was seeing a lot of things he hadn't seen the previous day and supplied the most positive evidence yet for remote viewing with his sketch of the rail-mounted gantry crane. It seems inconceivable to imagine how he could draw such a likeness to the actual crane at URDF-3 unless:

1) he actually saw it through remote viewing, or
2) he was informed of what to draw by someone knowledgeable of URDF-3

I only mention this second possibility because the experiment was not controlled to discount the possibility that -Price- could talk to other people- such as the Disinformation Section of the KGB. That may sound ridiculous to the reader, but I have to consider all possibilities in the spectrum from his being capable to view remotely to his being supplied data for disinformation purposes by the KGB."

So your position is that Price's startlingly accurate drawing was the result of information he somehow got from the KGB?

amherst
 
amherst said:
So your position is that Price's startlingly accurate drawing was the result of information he somehow got from the KGB?

amherst
Or could it possibly have been sourced from aerial intelligence photos the CIA had of the base?
 
amherst said:

You are extremely confused. 5.33 is the average rating the students gave to each experiment. Let me spell it out for you: if you go and add each rating together and then divide that by the number of experiments in the analysis (40), you will get 5.33. This shows that the majority of studies included in the meta-analysis were in fact standard. The small portion of non-standard studies (9), which were non-significant (obviously because they were non-standard), is what brought down the overall hit rate.

amherst

If 5.33 is the average, then fine. Let's use that.
 
amherst said:


So your position is that Price's startlingly accurate drawing was the result of information he somehow got from the KGB?

amherst

From a large pdf cataloguing miss after miss, are you seriously considering that this one quote somehow completely validates your belief?

Let me ask you: how many cranes did Pat Price draw?

(And have you dropped your claim about Price being given only the coordinates?)

(Oh, and about the Radin m-a, you never did answer my point on that. What do you think now?)
 
re: the point about standardness:

Let me see if I can't explain with an analogy. Imagine you had the Beaufort Scale in front of you and you wanted to know the average effect of the winds on the scale. Now, the scale runs from 0 to 12, so the average would be 6, right? But looking at the actual data, 6 on the Beaufort Scale wouldn't give you the average wind speed of the scale (not in mph, nor in knots), though it'd be close; nor would it give the average rise in sea level (and it wouldn't even be close!).

So you see, the Beaufort Scale is just that: a scale for grading one type of wind in comparison to another. It has no mathematical worth of its own. Similarly, this is how "standardness" works. It has no value of its own. Therefore 4 is not necessarily the average between 1 and 7. In order to show otherwise, you'd have to demonstrate how "standardness" relates directly to a numerical value.
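
To put rough numbers on that in Python (the wind speeds are approximate mid-range mph figures for each Beaufort number, so treat them as illustrative only):

    # Approximate mid-range wind speeds (mph) for Beaufort 0..12;
    # rough illustrative values, not the official definitions.
    speeds = [0.5, 2, 5.5, 10, 15.5, 21.5, 28, 35, 42.5, 50.5, 59, 68, 75]

    print(speeds[6])                  # ~28 mph: the speed at the scale's midpoint, Beaufort 6
    print(sum(speeds) / len(speeds))  # ~31.8 mph: the actual mean speed across the scale
    # Close, but not equal: the labels are ordinal place-holders, and the
    # quantity behind them does not grow linearly with the label.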
 
Ersby said:


If 5.33 is the average, then fine. Let's use that.
This is becoming ridiculous. Ersby, you now understand that on a seven-point scale 5.33 was the average rating the students gave to each study. So for you to suggest that the raters should have used 5.33 as the midpoint when:

1. You know that 4 is midway between 1 and 7, and
2. The rating average of 5.33 obviously couldn't have been known until after the ratings had been completed,

is so incredibly absurd and inane that I'm beginning to feel that we have reached a point in our discussion where you can no longer be reasoned with. If this is the case, then it is truly a pity.

amherst
 
Why were the raters instructed "do not count deviations the only effect of which is to influence the likelihood of artifacts, such as sensory leakage of the target information"? This is waved off with the remark "Such deviations are important in the broader scheme of things, but not for this exercise."

~~ Paul
 
amherst said:

This is becoming ridiculous. Ersby, you now understand that on a seven-point scale 5.33 was the average rating the students gave to each study. So for you to suggest that the raters should have used 5.33 as the midpoint when:

1. You know that 4 is midway between 1 and 7, and
2. The rating average of 5.33 obviously couldn't have been known until after the ratings had been completed,

is so incredibly absurd and inane that I'm beginning to feel that we have reached a point in our discussion where you can no longer be reasoned with. If this is the case, then it is truly a pity.

amherst

Let me try again. Imagine you have seven communities and you want to know the average population. The usual thing to do is to add up all the populations and divide by seven. But if you numbered the communities 1 to 7 according to size, would it be correct to take the population of community 4 and say that is the average population?

You see what I'm getting at? The numbers for standardness are just place-holders in a sequence. Like the numbers in the top ten of the charts. They don't mean anything. They have no value of their own. So saying 4 is the average may look sensible, but doesn't mean anything.

They wouldn't have known 5.33 would be the mean, no. But they could have decided beforehand how to judge where the average fell. It's interesting to note that the only time "hypothesized" appears is in the paragraph discussing 5.33.

My point is that neither is better than the other. In fact, you could also justifiably take the top half of the standardness scale and take the average from those. There are any number of "averages" to use. I happen to think the one they've focused on is (a) meaningless and (b) just so happens to give the best results!
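
To make the "any number of averages" point concrete, here is a toy Python comparison (the ratings are made up):

    # Made-up ratings: three defensible "averages", three different answers.
    ratings = sorted([7, 7, 6.67, 6.67, 5, 5, 4.33, 3, 2, 1])

    mean = sum(ratings) / len(ratings)        # ~4.77
    median = (ratings[4] + ratings[5]) / 2    # 5.0, the middle of the ten sorted values
    midpoint = (1 + 7) / 2                    # 4.0, the middle of the scale itself
    print(mean, median, midpoint)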
 
Paul C. Anagnostopoulos said:
Why were the raters instructed "do not count deviations the only effect of which is to influence the likelihood of artifacts, such as sensory leakage of the target information"? This is waved off with the remark "Such deviations are important in the broader scheme of things, but not for this exercise."

~~ Paul

That's a good question. I wonder what deviations or experiments they had in mind.
 
amherst said:

This is becoming ridiculous. Ersby, you now understand that on a seven-point scale 5.33 was the average rating the students gave to each study. So for you to suggest that the raters should have used 5.33 as the midpoint when:

1. You know that 4 is midway between 1 and 7, and
2. The rating average of 5.33 obviously couldn't have been known until after the ratings had been completed,

is so incredibly absurd and inane that I'm beginning to feel that we have reached a point in our discussion where you can no longer be reasoned with. If this is the case, then it is truly a pity.

amherst

Uh, if you can't address Ersby's or Paul's questions, it shows that you don't understand statistics and sampling.

amherst, we have all discussed with you why we don't find the ganzfeld studies to be proof of anything. There are too many ways and too many reasons that the data do not have meaning in the scientific sense. There is a total lack of controls, and there is no testing for response bias or matching bias. The meta-analysis is seriously flawed.

These are the same critiques that we would offer of any set of studies so poorly designed; the flaws can be corrected and the controls put in place, but not after the fact. You still have yet to address the allegations of actual fraud. So get off your high horse and answer some of the questions and concerns that have been posted to you.
 
Ersby said:
Let me try again. Imagine you have seven communities and you want to know the average population. The usual thing to do is to add up all the populations and divide by seven. But if you numbered the communities 1 to 7 according to size, would it be correct to take the population of community 4 and say that is the average population?
No, that would be wrong. But it's a bit more complicated than that. The authors use the term mean only in the statement "The mean of the three sets of ratings on the 7-point scale was 5.33, ..." I'm not sure why they even mentioned that, and it does seem meaningless. Otherwise, they only talk about the midpoint of the scale and define standard and nonstandard accordingly. I think that's okay, since it's just an arbitrary definition. It would be interesting to place the boundary at other points and see what happens.
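
Something like this Python sketch would do it, given each study's rating and outcome (the figures below are invented placeholders, not the data from the paper):

    # Sweep the standard/nonstandard boundary and compare hit rates on each side.
    # The (rating, hits, trials) triples are invented placeholders.
    studies = [(7, 35, 100), (6.67, 33, 100), (5, 26, 100),
               (4.33, 24, 100), (3, 22, 100), (2, 25, 100)]

    def hit_rate(group):
        trials = sum(n for _, _, n in group)
        return sum(h for _, h, _ in group) / trials if trials else float("nan")

    for boundary in (3, 4, 5, 5.33):
        above = [s for s in studies if s[0] > boundary]
        below = [s for s in studies if s[0] <= boundary]
        print(boundary, hit_rate(above), hit_rate(below))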

But what astounds me is that the paper stops right there. Where is the list of nonstandard protocol aspects? Where is the analysis of the ten most common nonstandard aspects, to see which ones contribute to the failure of the studies? Why did they ignore the standardness of the statistical analysis and the artifact-producing aspects of the protocol? It's enough to make me crazy.

I have emailed Bem to see if he will give me the raters' scoring sheets.

~~ Paul
 
