Originally posted by amherst
The article you are referring to is not a research article but a nontechnical piece written for a skeptical encyclopedia. If you want a detailed discussion of the statistics read the Psychological Bulletin paper.
Dancing David writes:
"You presented it, I responded."
I presented four articles in my original post, one of which was (as I have explained many times) a nontechnical piece written for an encyclopedia. If you had paid attention to my posts you would have easily known:
1. Why there wasn't a detailed discussion of the statistics in the article.
2. Where you could find a detailed discussion of the statistics.
Yet instead of reading the Psychological Bulletin article you incredibly write:
"It is not sufficient in any research paper to just say 'Across these studies, receivers achieved an average hit rate of about 35 percent.' That is sloppy reporting; it is crucial to any study like this that the number of targets tested per trial be discussed, the number of trials run, and the different parameters for this alleged 35% hit rate."
I think this deserves repeating:
"I forget why but I do remember that it is bad statistics."
Read the Psychological Bulletin article since you seem to think Bem is being untruthful.
Dancing David writes:
Did I say untruthful? Do you have a problem being polite, Amhearst? I have been polite to you. I have not accused Bem of being untruthful, I am merely stating what is standard in making a claim: that you show the evidence.
In the encyclopedia article you were referring to, Bem writes:
"Altogether, 100 men and 140 women participated as receivers in 354 sessions across 11 separate experiments during Honorton's autoganzfeld research program. The experiments confirmed the results of the earlier studies, obtaining virtually the same hit rate: about 35 percent. It was also found that hits were significantly more likely to occur on dynamic targets than on static targets. These studies were published by Honorton and his colleagues in the Journal of Parapsychology in 1990, and the complete history of ganzfeld research was summarized by Bem and Honorton in the January 1994 issue of the Psychological Bulletin of the American Psychological Association (Bem & Honorton, 1994; Honorton et al., 1990)."
Yet you have the gall to say:
"Again no data to look at and verify the statement that it is 35%, or that it is meaningful. Video is more successful than static on targets."
This implies that you think Bem doesn't have the evidence to justify what he is saying. It's basically accusing him of being dishonest, and since you didn't even bother to read the scientific article, I find your statements to be, again, incredible.
What you and others here aren't realizing is that "certain pictures" are just as likely to be targets as they are decoys. I don't know why this is so hard for you to understand.
Dancing David writes:
Excuse me, but you can get off your high horse. I have not been insulting to you; if you do not understand the point I am trying to make, I can try to repeat it.
Example:
The list of words generated by the receiver could be a product of 'free association' or it could be the product of actual 'psi talent'. However, various pictures are going to have a higher or lower chance of matching a list of 'free association'. So regardless of whether or not there is 'psi talent', the matching of the target picture to a list of 'free association' is something that should be controlled for.
It doesn't matter at all if the pictures are targets or decoys; what matters is the probability of any target picture matching the list of 'free association'.
I am sorry if you don't understand this point.
Say that the pictures chosen have a higher than 25% chance of matching the 'free association'. Then by default you will get a higher than 25% hit rate, regardless of any psi effect.
I think that as I said this is something that could be controlled for by pretesting.
"I don't know why this is so hard for you to understand." Take your own advice: ask me about what you don't understand.
The point I am making is that it doesn't matter at all if the picture is a target or a decoy. The chance for matching a 'free association' should be controlled for.
This is all complete nonsense. I think this ridiculous argument originated with Robert Todd Carroll of the Skepdic. Read this:
http://skepdic.com/comments/ganzfeldcom.html
amherst
--------------------------------------------------------------------------------
Mr/Ms Amhearst, I thought of this argument all by myself. "Ridiculous argument, complete nonsense" does not an argument make.
Why is it 'complete nonsense'? Can you explain a counter argument? Or will you just engage in name calling?
It does not matter to my argument at all that there are decoys, what matters is the potential for any target picture to match a 'free association'.
For example if the pictures in a trial have the following probabilities of matching a free association:
(05%)(10%)(15%)(05%)(10%)(15%)(05%)(10%)(15%)(05%)
then the free association match idea would give an aggregate chance of 9.5%, well below the 25% hit rate.
But if instead the pictures have these probabilities:
(25%)(30%)(35%)(25%)(30%)(35%)(25%)(30%)(35%)(25%)
then the chance match rate would be 29.5%, above the 25% baseline.
So actually choosing pictures with a low match rate would give an even better indication of the ganzfeld effect.
I am arguing that you need to control the level at which any picture will match a random 'free association' list.
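The aggregate-chance arithmetic above can be sketched in a few lines of Python; the two probability lists are the hypothetical per-picture match rates from the example, not data from any actual study:

```python
# Hypothetical per-picture probabilities of matching a free-association
# list, taken from the worked example above (not real ganzfeld data).
low_match = [0.05, 0.10, 0.15, 0.05, 0.10, 0.15, 0.05, 0.10, 0.15, 0.05]
high_match = [0.25, 0.30, 0.35, 0.25, 0.30, 0.35, 0.25, 0.30, 0.35, 0.25]

def aggregate_chance(probs):
    """Average probability that a picture matches a free-association list."""
    return sum(probs) / len(probs)

print(round(aggregate_chance(low_match), 3))   # prints 0.095 -> 9.5%, below the 25% baseline
print(round(aggregate_chance(high_match), 3))  # prints 0.295 -> 29.5%, above the 25% baseline
```

If picture sets like the second one were used, chance alone would push the hit rate above 25%, which is exactly the confound the pretesting suggestion is meant to control for.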
I await your counter argument, and I hope you do better than just name calling.
David, after the ganzfeld sending phase is finished, a receiver is presented with four randomly assembled pictures on a computer screen. The only way a receiver's spurious "free association" with certain pictures could potentially bias the hit rate would be if those pictures were targets more often than decoys. This possibility has been addressed and shown not to be the case by Bem in his Response to Hyman:
Content-Related Response Bias
"Because the adequacy of target randomization cannot be statistically assessed owing to the low expected frequencies, the possibility remains open that an unequal distribution of targets could interact with receivers' content preferences to produce artifactually high hit rates. As we reported in our article, Honorton and I encountered this problem in an autoganzfeld study that used a single judging set for all sessions (Study 302), a problem we dealt with in two ways. To respond to Hyman's concerns, I have now performed the same two analyses on the remainder of the database. Both treat the four-clip judging set as the unit of analysis and neither requires the assumption that the null baseline is fixed at 25% or at any other particular value.
In the first analysis, the actual target frequencies observed are used in conjunction with receivers' actual judgments to derive a new, empirical baseline for each judging set. In particular, I multiplied the proportion of times each clip in a set was the target by the proportion of times that a receiver rated it as the target. This product represents the probability that a receiver would score a hit on that target if there were no psi effect. The sum of these products across the four clips in the set thus constitutes the empirical null baseline for that set. Next, I computed Cohen's measure of effect size (h) on the difference between the overall hit rate observed within that set and this empirical baseline. For purposes of comparison, I then reconverted Cohen's h back to its equivalent hit rate for a uniformly distributed judging set, in which the null baseline would, in fact, be 25%.
Across the 40 sets, the mean unadjusted hit rate was 31.5%, significantly higher than 25%, one-sample t(39) = 2.44, p = .01, one-tailed. The new, bias-adjusted hit rate was virtually identical (30.7%), t(39) = 2.37, p = .01, tdiff (39) = 0.85, p = .40, indicating that unequal target frequencies were not significantly inflating the hit rate.
The second analysis treats each film clip as its own control by comparing the proportion of times it was rated as the target when it actually was the target and the proportion of times it was rated as the target when it was one of the decoys. This procedure automatically cancels out any content-related target preferences that receivers (or experimenters) might have. First, I calculated these two proportions for every clip and then averaged them across the four clips within each judging set. The results show that across the 40 judging sets, clips were rated as targets significantly more frequently when they were targets than when they were decoys: 29% vs. 22%, paired t(39) = 2.03, p = .025, one-tailed. Both of these analyses indicate that the observed psi effect cannot be attributed to the conjunction of unequal target distributions and content-related response biases."
http://comp9.psych.cornell.edu/dbem/response_to_hyman.html
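Bem's first analysis, the empirical null baseline per judging set, can be illustrated with a short sketch. The proportions below are invented for illustration only; the actual autoganzfeld figures are in the Response to Hyman quoted above:

```python
# Sketch of the empirical-baseline idea Bem describes: for each judging
# set, multiply the proportion of sessions in which each clip was the
# target by the proportion in which receivers rated it as the target,
# then sum across the four clips. All numbers here are hypothetical.
target_prop = [0.40, 0.20, 0.20, 0.20]  # how often each clip was the target
rated_prop  = [0.40, 0.25, 0.20, 0.15]  # how often receivers picked each clip

# Probability of a "hit" on each clip under the null (no psi), summed
# over the set; with uniform targets (0.25 each) this would be 0.25.
null_baseline = sum(t * r for t, r in zip(target_prop, rated_prop))
print(round(null_baseline, 3))  # prints 0.28
```

In this invented case a preferred clip is also the frequent target, so the null baseline rises above 25%; the observed hit rate would then be compared against 28%, not 25%, which is how the analysis cancels out content-related response bias.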
amherst