The Ganzfeld Experiments

Most honest, informed sceptics these days accept that there is an effect. In the face of the overall body of evidence, it would be irrational, illogical and against Occam's razor to suggest that each and every last piece of scientific evidence is the result of either self-delusion, cheating or collusion of some sort. In fact, it is extraordinarily unlikely that those could account for every psi effect on record.

The likelihood is that the effect exists and that current scientific thinking does not yet understand its mechanism of action.
 
Loki said:
T'ai chi,

I suspect you didn't, but seem to want to pretend you did?

Sure - but does '1' mean "Didn't follow it at all"?

Loki, can you let him answer the question put to him?

Thanks.
 
Lucianarchy said:
Most honest, informed sceptics these days accept that there is an effect. In the face of the overall body of evidence, it would be irrational, illogical and against Occam's razor to suggest that each and every last piece of scientific evidence is the result of either self-delusion, cheating or collusion of some sort. In fact, it is extraordinarily unlikely that those could account for every psi effect on record.

The likelihood is that the effect exists and that current scientific thinking does not yet understand its mechanism of action.
And what do YOU mean by "honest, informed sceptics"? Do you mean people who are willing to uncritically accept years of badly bodged experimental hogwash, twisted statistics and plain bald-faced lying, all at face value, because it supports their fanatically held but sadly contradictory and delusional theories? Is that it? If so, then yes - honest, informed sceptics might agree with you.
 
T'ai chi,

...can you let him answer the question put to him?
Well I'm lost - I was answering you. Who is the "him" you are referring to, and which questions am I failing to let "him" answer?
 
Anyway, here are the graphs. This is effect size compared to standardness (standard on the left, moving to non-standard on the right). That large downward spike at the end is a tad peculiar. I'm no statistician, but I'd like to know whether it's possible for an experiment with just 10 sessions (a 10% hit rate, i.e., 1 hit) to have such a large negative effect size (-0.65).
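As a rough sanity check (assuming the meta-analysis uses one of the common conventions, ES = z/sqrt(n) or Cohen's h; the paper may well compute it differently), here's a quick Python sketch of what 1 hit in 10 sessions works out to:

import math

def ganzfeld_effect_size(hits, n, p0=0.25):
    # Two common conventions for a binomial hit-rate effect size,
    # against a null hit rate p0 (25% with three decoys).
    p = hits / n
    z = (hits - n * p0) / math.sqrt(n * p0 * (1 - p0))
    es_z = z / math.sqrt(n)                                         # ES = z / sqrt(n)
    h = 2 * math.asin(math.sqrt(p)) - 2 * math.asin(math.sqrt(p0))  # Cohen's h
    return es_z, h

print(ganzfeld_effect_size(1, 10))  # roughly (-0.35, -0.40)

Neither convention gets anywhere near -0.65 for 1 hit in 10, so either the paper uses a different formula or something odd is going on with that point.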
 
It worked! While I'm on a roll, I may as well share this graph, which has effect sizes from most of the ganzfeld database from the beginning. I'm not good enough at Excel to get it to put the years along the axis, so as a guide:

1 = 1974
11 = 1976
21 = 1980
31 = 1987
41 = 1990
51 = 1993
61 = 1995
71 = 1997
81 = 1999

The most recent stuff on the graph looks good, but this doesn't include the work of 1999-2003 (since those reports don't put the effect size in their results), where the results are pretty much at chance.
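For what it's worth, here's how the relabelling could be done outside Excel, in Python with matplotlib; the effect-size values below are placeholders, not the real database:

import matplotlib.pyplot as plt

indices = list(range(1, 90))          # study index, in rough chronological order
effect_sizes = [0.0] * len(indices)   # placeholder; substitute the real values

positions = [1, 11, 21, 31, 41, 51, 61, 71, 81]
labels = ["1974", "1976", "1980", "1987", "1990", "1993", "1995", "1997", "1999"]

fig, ax = plt.subplots()
ax.plot(indices, effect_sizes)
ax.set_xticks(positions)       # place ticks at the guide's study indices
ax.set_xticklabels(labels)     # label them with the corresponding years
ax.set_xlabel("Year of study")
ax.set_ylabel("Effect size")
plt.show()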
 
Bah, my guide is not much of a guide. I adjusted a tiny aspect of the graph and it changed the numbers on the x-axis without me noticing. Ho hum.
 
Ed said:
I understand that. The issue is inter-rater variation, that is to say: how would all three rate the same stimulus? In other words, how much variability does the rating, per se, introduce?
I don't think the paper addresses this issue. They do say "The 'standardness' ratings of the three raters achieved a Cronbach's alpha of .78. The mean of the three sets of ratings on the 7-point scale was 5.33, ..." What is Cronbach's alpha?

The critical point is that the standardness is correlated with the effect size. That is interesting, in spite of the strangeness with the midpoint. But it's all just a big yawn without further analysis of the specific deviations.
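For concreteness, the shape of that correlation check is simple enough; a minimal sketch in Python, with invented numbers (the paper may also have used a different statistic than Spearman's rho):

import numpy as np
from scipy.stats import spearmanr

# Hypothetical mean standardness rating per study, and a hypothetical
# effect size for each, just to show the form of the analysis.
standardness = np.array([6.5, 6.0, 5.5, 5.0, 4.0, 3.5, 2.5, 2.0])
effect_size = np.array([0.30, 0.25, 0.28, 0.10, 0.05, 0.02, -0.05, -0.10])

rho, p = spearmanr(standardness, effect_size)
print("Spearman rho = %.2f, p = %.3f" % (rho, p))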

~~ Paul
 
Paul C. Anagnostopoulos said:

I don't think the paper addresses this issue. They do say "The 'standardness' ratings of the three raters achieved a Cronbach's alpha of .78. The mean of the three sets of ratings on the 7-point scale was 5.33, ..." What is Cronbach's alpha?

The critical point is that the standardness is correlated with the effect size. That is interesting, in spite of the strangeness with the midpoint. But it's all just a big yawn without further analysis of the specific deviations.

~~ Paul

From the SPSS site:

Here, the reliability is shown to be low using all four items because alpha is .3924. (Note that a reliability coefficient of .80 or higher is considered "acceptable" in most Social Science applications.)

This suggests that, since the reliability is questionable, there is some issue with interpreting what these raters are rating and how much of an "effect" is due to these differences. That is to say, the "effect" might be an artifact of the measurement process. I wonder how the data for the reliability measure were collected. The only way, it would seem to me, is to intersperse control trials or to have the raters all rate the same stuff. But if that is the case, combining their ratings in some way is questionable.
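For what it's worth, the computation itself needs a full studies-by-raters grid, i.e. all three raters scoring the same items. A minimal sketch in Python, with made-up ratings since the paper doesn't publish the raw ones:

import numpy as np

def cronbach_alpha(ratings):
    # ratings: one row per rated study, one column per rater.
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                         # number of raters
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Three hypothetical raters scoring six studies on the 7-point scale:
print(cronbach_alpha([[6, 5, 6],
                      [5, 5, 4],
                      [7, 6, 6],
                      [3, 4, 3],
                      [2, 3, 2],
                      [5, 4, 5]]))

Note that the formula only makes sense when every rater scores every item, which at least answers the how-were-the-data-collected question.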

You will note that all of this arm-waving is a direct result of a sloppy design. Why were Zener cards not used, or a forced-choice selection? Why baroque elaboration when clear results could be handily obtained? Why the interposition of human subjectivity? Does this not, in and of itself, raise red flags? Why is this a characteristic of paranormal research?
 
By the way, I have not thought through how this might affect the results. The point is that if there is a flaw in the paradigm, there are no results, so speculation is moot.
 
Paul C. Anagnostopoulos said:

I don't think the paper addresses this issue. They do say "The 'standardness' ratings of the three raters achieved a Cronbach's alpha of .78. The mean of the three sets of ratings on the 7-point scale was 5.33, ..." What is Cronbach's alpha?
~~ Paul

It is a measure of reliability; the higher, the better.

Nunnally (1978) considers 0.7 an acceptable reliability coefficient.
 
T'ai Chi said:


It is a measure of reliability; the higher, the better.

Nunnally (1978) considers 0.7 an acceptable reliability coefficient.

Perhaps, perhaps not. The point is that it is hardly clear. This is the problem with much woosearch: because of murky designs, one ends up arguing about minutiae, which appears to me, increasingly, to be the goal of this stuff.

In any event, one now (even with a lowered bar on reliability) has to figure out what the implications are.

The issue is not whether the reliability is .7 or 1.0 or 0.0; the issue is why this construct is even part of the effort. That is the meta-issue here. Why design an experiment where these stupid conversations even have a chance of occurring? Are you (or anyone) telling me that the researchers could not see this coming a mile off? Anyone want to speculate? And if they did, why did they proceed? And if they did not, what the hell are they doing playing at research? Isn't this troubling? Same thing with Schwartzie.

The biggest issue with paranormal research is why flaws seem to be designed in. This is of far greater importance than the next masturbatory discussion of the specifics of flawed research.
 
Ed said:

Are you (or anyone) telling me that the researchers could not see this coming a mile off? Anyone want to speculate? And if they did, why did they proceed?


Sorry, Ed, I'm not psychic. I can only go by the scientific data.


The biggest issue with paranormal research is why flaws seem to be designed in. This is of far greater importance than the next masturbatory discussion of the specifics of flawed research.

I think it is a huge misunderstanding on your part to essentially say that ratings are built-in flaws.
 
-why is this construct even part of the effort?
Because standardness is an important concept. Do the hits vary when people stray from the original set-up?

-Why design an experiment where
See above.

- Are you (or anyone) telling me that the researchers could not see this coming a mile off?
See what coming? You are begging the question here, Ed.

-Anyone want to speculate?
No, except you.

-And if they did, why did they proceed?
Ask the researchers. I cannot read minds.

-And if they did not, what the hell are they doing playing at research?
Ditto.

-Isn't this troubling? Same thing with Schwartzie.
No, it is not troubling. They are exercising their right to investigate 'psi' issues and see if anything is there.
 
And Ed,

When you said:

"The biggest issue with paranormal research is why flaws seem to be designed in. This is of far greater importance than the next mastubatory discussion of the pecifics of flawed research."

Do you feel that ratings are built-in flaws?
 
T'ai Chi said:


I do, of both, so what seems to be your question?


Tai, the point is that Amhearst is closed-minded; he cites statistics that have little meaning.

"There is a twenty five percent chance that a target picture will be chosen if there are three decoys."

That is assuming pure random chance; we spent five pages discussing how there could be some very strong non-random influences, which show that 25% may not be the true chance baseline.
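To make that concrete, here's a toy simulation (the biases are invented purely for illustration): if the target randomization and the judge's habits both favour the same option, say the first clip shown, the hit rate drifts above 25% with no psi anywhere in sight:

import random

TARGET_P = [0.40, 0.20, 0.20, 0.20]  # sloppy randomization over the 4 positions
JUDGE_P = [0.40, 0.20, 0.20, 0.20]   # judge's habitual preference for the first clip

def simulate(trials=200_000):
    hits = 0
    for _ in range(trials):
        target = random.choices(range(4), weights=TARGET_P)[0]
        guess = random.choices(range(4), weights=JUDGE_P)[0]
        hits += (guess == target)
    return hits / trials

print(simulate())  # about 0.28, not 0.25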

Amhearst got crazy and strident in the face of the protocol flaws and continues to assert that it was a twenty-five percent chance.

He does not seem to understand what either Ersby or Ed has asked.

If you study and know statistics, then you know that the meta-analysis is severely flawed due to the lack of demographic matching and standardization.

I agree that there is an effect, and so far there is no proof that it is not an artifact of a flawed design.
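One standard way to check whether such a mixed bag of studies can legitimately be pooled at all is a heterogeneity test. Here's a minimal sketch of Cochran's Q in Python; the effect sizes and variances are invented (for ES = z/sqrt(n), the per-study variance under the null is roughly 1/n):

import numpy as np
from scipy.stats import chi2

def cochran_q(effect_sizes, variances):
    # Cochran's Q: a large Q (small p) means the studies disagree more
    # than sampling error allows, so pooling them is suspect.
    es = np.asarray(effect_sizes, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)  # inverse-variance weights
    pooled = (w * es).sum() / w.sum()
    q = (w * (es - pooled) ** 2).sum()
    df = len(es) - 1
    return q, df, chi2.sf(q, df)

es = [0.35, 0.10, -0.05, 0.40, 0.00]   # hypothetical study effect sizes
var = [1/50, 1/30, 1/20, 1/10, 1/40]   # rough 1/n variances
print(cochran_q(es, var))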
 
T'ai Chi said:

I think it is a huge misunderstanding on your part to essentially say that ratings are built-in flaws.

And your study of statistics should already have informed you that inter-rater reliability is a major issue, and, as Ersby has pointed out, there may be an argument to be made that the label 'standard' should be carefully studied before being used.

And you can't control for the non-standard and the standard just by rating things on a scale, Tai; as a person versed in statistics, you should know that is just a smoothing of already flawed or potentially flawed data.

It is still potentially flawed data; the smoothing and a posteriori control measures cannot make the potential for error go away.

That is why you need tight protocols and methodology.

Not to even discuss Paul's contention that at times the researchers may have deliberately chosen targets to influence the results.
 
