T'ai Chi said:
I'm not going to go into depth with this discussion and waste time if you disbelieve error exists in the first place.
So... are you denying any error exists in the experiment? YES or NO?
If YES, you're wrong and this discussion is stopped.
If NO, I'll talk about some of the various types of error. There are a lot.
The level of error in the Millikan experiment is known and quantifiable: the mass of the oil drop and the charge on the drop were subject to known, measurable inconsistencies.
This is very different from the Ganzfeld studies, where there is no record of how the randomization of the targets was performed; at least, none is mentioned.
The probability of a receiver's response matching any given target is not controlled for; therefore, if the match rate to a random receiver response is imperfect, there is NOT a twenty-five percent chance of the receiver's statement matching the target.
Likewise, if response bias gives the judge a non-random chance of choosing particular pictures, and that bias is not controlled for, you can NOT assume a flat twenty-five percent chance of the judge choosing the target.
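As a rough illustration of why the twenty-five percent baseline depends on proper randomization, here is a small Monte Carlo sketch (my own toy example, not anything drawn from the actual studies). It models four candidate pictures; a shared preference in both the target-selection procedure and the judge's choices inflates the "chance" hit rate above 25%, which is exactly why an undocumented randomization procedure matters:

```python
import random

def hit_rate(target_probs, judge_probs, trials=200_000, seed=1):
    """Estimate the chance hit rate when the target and the judge's
    guess are drawn independently from (possibly non-uniform)
    distributions over four pictures."""
    rng = random.Random(seed)
    pics = [0, 1, 2, 3]
    hits = 0
    for _ in range(trials):
        target = rng.choices(pics, weights=target_probs)[0]
        guess = rng.choices(pics, weights=judge_probs)[0]
        hits += (target == guess)
    return hits / trials

uniform = [0.25, 0.25, 0.25, 0.25]
biased  = [0.40, 0.20, 0.20, 0.20]  # hypothetical preference for one picture

print(hit_rate(uniform, uniform))  # ~0.25: proper randomization on both sides
print(hit_rate(uniform, biased))   # ~0.25: judge bias alone, targets still uniform
print(hit_rate(biased, biased))    # ~0.28: shared bias inflates apparent "hits"
```

Note the middle case: judge bias by itself is harmless as long as targets are truly uniform, because the match probability averages out to 1/4. The baseline breaks only when target selection is also non-uniform, which is why the missing record of the randomization procedure is the critical gap.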
If there is no actual double blind in the target assignment, and no actual double blind in the judge's matching, that is a substantial source of error.
And that is why the meta-analysis is flawed, as are many of the trials and runs: there is almost no data record to examine to determine what actually happened. Since so many sources of error are left uncontrolled in the originating studies, it is difficult to imagine that the meta-analysis is meaningful.
So while I feel there is potential for psi research in the future, especially given its low cost, the past research seems fraught with error and sloppy procedures. I think it is interesting that the later studies still tend to show the Ganzfeld effect; if researchers can control for all the known issues and quantify them, then there can be a discussion of what causes the effect.
(As an undergrad I was associated with three published studies; as a professional I have been involved in about ten studies of treatment effectiveness, employee and management communication, treatment perception and outcomes, and plain old customer satisfaction surveys.)
As a minor historian of research psychology, I am aware of the effect that uncontrolled artifacts have on data, especially in any meta-analysis. Most of the Community and Hospital Mental Health papers that I reviewed were very careful to discuss controls and lack of controls, demographics and potential sampling errors, sample size and potential sample bias, trial size, and all sorts of other quirky issues.
(For example: is the Beck Depression Inventory an adequate measure of response to antidepressants on a long-term basis, or is a quality-of-life form a better data tool in long-term treatment? Or, even better, do schizophrenics benefit from social skills training, and why or why not? What is the validity of data gathered on suicide?)