The Ganzfeld Experiments

amherst said:

Zep, you are way out of your depth. I've already explained why your suggestion is ridiculous, yet you ignore me and continue to post this nonsense.

[snip snip snip, la la la]

If you're not going to read the papers I've listed, why don't you do yourself a favor and at least read what Hyman and Blackmore have said about this matter.

amherst
My dear Amherst.

I HAVE read Hyman and Blackmore, and a lot of others besides. Plus most of PEAR's extant reports, as well as the reports you referenced (I read them again, just to make sure they were the ones I had seen before). Not to mention a lot of other reports on the ganzfeld experiments, stuff like Scoles, plus a lot of other crap, piss and corruption that attempts to masquerade as science.

Have you ever read stuff by one Victor Zammit? www.victorzammit.com He's a firm believer in whatever so-called positive evidence for psi that floats past him in his sewer too. And he doesn't question it either - thinks it's great stuff and world-bending information.

Let me correct just one thing for you:
A: If a receiver doesn't have any information as to what the correct target is, he has a 25% chance of guessing the actual target.
...but the judge DOES know in advance, otherwise he couldn't score the attempt. So the scoring remains subjective. You might like to chat to Schwartz as to how this is done to best effect.
 
Ed said:
Why not Zener cards and self-selection of the "correct" target? Why not a clean design with no subjectivity? Why baroque paradigms that guarantee discussions like this?
 
Interesting Ian said:


Look! If this experiment was not included in the meta-analysis then why the hell is it being discussed??? :mad:


"The" meta-analysis? You're very found of using the definite article. Are we referring to the "Running Head" m-a only? I wish someone had mentioned it! So why was Radin's m-a mentioned too? And the opening quote for Irwin, was that about the most recent m-a? If not, it shouldn't be in the thread.

Can we refer to Milton and Wiseman's, or is that not allowed?

Why are we discussing this experiment when

a) I have no idea what the experimental protocol involves

Why indeed.

"Temporal clues"??? I have no idea what this could possibly mean :rolleyes:

I feel sorry for you, Ian, I really do. You have nothing to add, yet you still cling onto the thread as if you have. I can't imagine why.

Oh, the bit in the abstract DOES say that the judges receive the notes from the agents. Read it again.
 
Zep said:
And you would indeed be correct - well done!

And what you have performed as a result of that selection process is a non-psi selection of a target image from a limited target pool using a very simple optimal guessing technique. Note that you did not have to refer to any of the key words I listed either - you used a completely different and completely rational and materialistic method of "improving" your chance of success.

Materialistic method?? I do not think you understand what materialism means. Science does not remotely imply the materialist metaphysic.

Anyway, any psi research which doesn't account for this is worthless. However, the ganzfeld experiments included in the meta-analysis do not allow for this possibility.

And this is just one of the problems with the ganzfeld methodologies being employed where photo images of scenery are used.

No this is not a problem. The target and the 3 decoys are randomly presented to the judges.

And I am cognizant of this type of cheating. Geller tried it a few years ago on this TV programme, utilising the psychological fact that people tend to select targets in the middle of a row of targets. I particularly remember this ploy because someone switched the targets so that the target was on one of the ends! :D

As I say, I wouldn't place any credence in experiments which allow such cheating. I'm not stupid. But from what I understand of these Autoganzfeld experiments, it is simply impossible for such cheating to happen.

This is why there is strong reason to push for the use of Zener cards or similar in testing psi - they are very simple, clearly distinct from one another, and form a delimited target pool with which to work.

I don't believe zener cards would work very well. There again that is maybe why you want them to use them.
 
Ersby said:
I feel sorry for you, Ian, I really do. You have nothing to add, yet you still cling onto the thread as if you have. I can't imagine why.

Oh, the bit in the abstract DOES say that the judges receive the notes from the agents. Read it again.

I certainly do not see any purpose in debating with you as I have made clear in my previous post to you. Not least because you yourself have admitted that you are getting your recollection of various experiments mixed up.

Just one thing. What research might you be referring to here? What notes? Where have I denied this?

I really wish you could learn to communicate more effectively. How the hell am I supposed to know what you're talking about? Hell, you yourself don't even know what you're talking about most of the time so how am I supposed to??
 
Hi Amherst

Hi again Amherst. I guess you won't respond to my previous post concerning what I consider to be errors in the protocol for the Ganzfeld studies, so I will reiterate the same points in reference to the specific articles you cited. I believe these are the ones you cited before?

At the bottom of this post I will present my rationale for questioning the assumption that choosing from pictures should produce the 25% rate discussed, because I feel that is the major flaw. There isn't enough detail on the autoganzfeld in the first article to address that.

(http://comp9.psych.cornell.edu/dbem/ganzfeld.html)
From the first article:

In 1985 and 1986, the Journal of Parapsychology devoted two entire issues to a critical examination of the ganzfeld studies, featuring a debate between Ray Hyman, a cognitive psychologist and a knowledgeable, skeptical critic of parapsychological research, and the late Charles Honorton, a prominent parapsychologist and major ganzfeld researcher. At that time, there had been 42 reported ganzfeld studies conducted by investigators in 10 laboratories.
Across these studies, receivers achieved an average hit rate of about 35 percent. This might seem like a small margin of success over the 25 percent hit rate expected by chance, but a person with this margin of advantage in a gambling casino would get rich very quickly. Statistically this result is highly significant: The odds against getting a 35 percent hit rate across that many studies by chance are greater than a billion to one. Additional analyses demonstrated that this overall result could not have resulted simply from the selective reporting of positive results and nonreporting of negative results.

It is not sufficient in any research paper to just say 'Across these studies, receivers achieved an average hit rate of about 35 percent.' That is sloppy reporting. It is crucial to any study like this that the number of targets tested per trial be discussed, along with the number of trials run and the other parameters behind this alleged 35% hit rate. Again, the larger the trial size and the larger the overall number of trial runs, the smaller the chance of purely random coincidence.

The odds against getting a 35 percent hit rate across that many studies by chance are greater than a billion to one.
As discussed repeatedly in many threads, this is bad statistics. If I flip a coin on different days, the chance of any single flip being heads is fifty percent. If I toss the coin several times in a row, the chance of each successive toss being heads is still 50%; it is only in considering the aggregate chance of the total set of events that the probabilities multiply, i.e. 5 heads in a row is (.5)^5. It is not good statistics to take five coin tosses from different days and say that those same five tosses, taken together, have an aggregate chance of (.5)^5. I forget why but I do remember that it is bad statistics; I think the problem is that they were not designated in advance as being in aggregate.
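For reference, the kind of aggregate calculation I am objecting to looks something like this (a rough sketch of mine, with a made-up trial count, not taken from any of the papers):

```python
# Rough illustration only: the kind of aggregate binomial calculation at issue.
# Assumes independent trials, each with a 25% chance of a hit by pure guessing.
from math import comb

def p_at_least(k, n, p=0.25):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more hits in n trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 700             # hypothetical total number of trials (made up for illustration)
k = round(0.35 * n) # what a 35% overall hit rate would mean in raw hits
print(f"P(at least {k} hits in {n} trials by chance) = {p_at_least(k, n):.1e}")
```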

on the Auto ganzfeld
The experiments confirmed the results of the earlier studies, obtaining virtually the same hit rate: about 35 percent. It was also found that hits were significantly more likely to occur on dynamic targets than on static targets.

Again, no data to look at to verify the statement that it is 35% or that it is meaningful.
Video is more successful than static.
on targets
Dynamic targets contain more information, involve both the visual and auditory senses, evoke richer internal imagery, are more lifelike, have a narrative structure, and are more emotionally evocative.



My main argument, and I have not fully explicated it, is that I am very uncomfortable with the procedure where a person (the receiver) gives a list of words and then a judge goes and rates which picture the list of words might match.

The chance is not 25% at all; it is very dependent on the ability of the target picture to match the words stated by the receiver. Despite the fact that the pictures are chosen randomly, there is very good reason to suspect that certain pictures will randomly match any random set of words that the receiver says.

That is why, in most psychological research, a very strict protocol is developed on what constitutes a match to a given object before the test is run. This way, any properly trained rater is going to rate the material the same way as any other rater.

The 'better' target picture might just be better at randomly matching the 'free association' passages generated by the receiver.

There is a very easy way to test for this effect. Without using a target or a sender, you would have the receiver undergo the procedure and record the lists of words they generate while in the ganzfeld state. Later you would have judges try to match those passages to randomly chosen pictures from the target and decoy pools. If you find that 'good' target pictures are more likely to be matched by the judges, then there is a potential for certain pictures to match any 'free association' passage.

This would be a possible source of error.

And just saying that the pictures are randomly chosen and randomly rated will not solve this problem.

Here is why:

The basic assumption is that any picture has only a 25% chance of being matched by the words that the receiver speaks.

If this were the case, then each trial in an experimental run would have a chance match rate as follows:
(25%)(25%)(25%)(25%)(25%)(25%)(25%)(25%)(25%)
and the aggregate average would be 25% over a large number of trials. This is the assumption that I say may be in error.

Say instead that different pictures have different chances of matching the receiver's 'free associative' lists, for example:
(10%)(30%)(40%)(60%)(20%)(10%)(30%)(40%)(60%)(20%)

This sample will give an aggregate of 32%, not an aggregate of 25%, because of the varying chance that a random 'free associative' list will match any given picture. The order of the pictures doesn't matter, because it is just a sampling issue of how well the target matches a randomly selected 'free association' list.

So for the procedure to be effective, the pictures would have to be pretested and selected for having only a 25% chance of matching a random 'free associative' list.
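To make the arithmetic concrete, here is a tiny sketch (my own illustration, using the made-up percentages above) showing that the aggregate chance hit rate is simply the average of the per-picture match probabilities:

```python
# Purely illustrative: the aggregate chance hit rate is just the mean of the
# assumed per-target match probabilities from the examples above.
def aggregate_hit_rate(match_probs):
    """Expected chance hit rate over trials whose targets have these match probabilities."""
    return sum(match_probs) / len(match_probs)

uniform = [0.25] * 10
uneven = [0.10, 0.30, 0.40, 0.60, 0.20, 0.10, 0.30, 0.40, 0.60, 0.20]

print(aggregate_hit_rate(uniform))  # 0.25
print(aggregate_hit_rate(uneven))   # 0.32
```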

Thanks.
 
My original quote on the subject was...

Ersby said:

This is a reference to an experiment ... whereby the complete transcript of the senders as they moved from target to target contained clues

With a few brief exchanges, you seem to have got the wrong idea into your head, since you said:

Interesting Ian said:
Whilst speaking down a mobile at the same time to a mobile glued to one of the judges ears?? :rolleyes:

Who said anything about talking to each other? Not me. You must have grossly misunderstood and then slipped into denial. Glad to put things straight.

And finally,

Interesting Ian said:
I certainly do not see any purpose in debating with you as I have made clear in my previous post to you. Not least because you yourself have admitted that you are getting your recollection of various experiments mixed up.

To admit a mistake is not a weakness. I find it interesting you think otherwise. You made a mistake re. my claim over a weakness of the Schlitz experiment. Will you admit it?
 
But back on subject, I think it's worth reminding everyone that the positive results of the most recent meta-analysis (the "Running Head" one) were due entirely to the presence of one experiment. Is this really a sign of a replicable effect?
 
Ersby said:
To admit a mistake is not a weakness. I find it interesting you think otherwise. You made a mistake re. my claim over a weakness of the Schlitz experiment. Will you admit it?

I will certainly admit to any mistake that I make. What mistake might this be??
 
Re: Hi Amherst

Dancing David said:
Hi again Amherst. I guess you won't respond to my previous post concerning what I consider to be errors in the protocol for the Ganzfeld studies, so I will reiterate the same points in reference to the specific articles you cited. I believe these are the ones you cited before?

At the bottom of this post I will present my rationale for questioning the assumption that choosing from pictures should produce the 25% rate discussed, because I feel that is the major flaw. There isn't enough detail on the autoganzfeld in the first article to address that.

(http://comp9.psych.cornell.edu/dbem/ganzfeld.html)
From the first article:



It is not sufficient in any research paper to just say 'Across these studies, receivers achieved an average hit rate of about 35 percent.' That is sloppy reporting. It is crucial to any study like this that the number of targets tested per trial be discussed, along with the number of trials run and the other parameters behind this alleged 35% hit rate. Again, the larger the trial size and the larger the overall number of trial runs, the smaller the chance of purely random coincidence.

The article you are referring to is not a research article but a nontechnical piece written for a skeptical encyclopedia. If you want a detailed discussion of the statistics read the Psychological Bulletin paper.
The odds against getting a 35 percent hit rate across that many studies by chance are greater than a billion to one.
As discussed repeatedly in many threads, this is bad statistics. If I flip a coin on different days, the chance of any single flip being heads is fifty percent. If I toss the coin several times in a row, the chance of each successive toss being heads is still 50%; it is only in considering the aggregate chance of the total set of events that the probabilities multiply, i.e. 5 heads in a row is (.5)^5. It is not good statistics to take five coin tosses from different days and say that those same five tosses, taken together, have an aggregate chance of (.5)^5. I forget why but I do remember that it is bad statistics; I think the problem is that they were not designated in advance as being in aggregate.
I think this deserves repeating:
"I forget why but I do remember that it is bad statistics" .

on the Auto ganzfeld


Again, no data to look at to verify the statement that it is 35% or that it is meaningful.
Video is more successful than static.
on targets
Read the Psychological Bulletin article since you seem to think Bem is being untruthful.




My main argument, and I have not fully explicated it, is that I am very uncomfortable with the procedure where a person (the receiver) gives a list of words and then a judge goes and rates which picture the list of words might match.

The chance is not 25% at all; it is very dependent on the ability of the target picture to match the words stated by the receiver. Despite the fact that the pictures are chosen randomly, there is very good reason to suspect that certain pictures will randomly match any random set of words that the receiver says.
What you and others here aren't realizing is that "certain pictures" are just as likely to be targets as they are decoys. I don't know why this is so hard for you to understand.

That is why, in most psychological research, a very strict protocol is developed on what constitutes a match to a given object before the test is run. This way, any properly trained rater is going to rate the material the same way as any other rater.

The 'better' target picture might just be better at randomly matching the 'free association' passages generated by the receiver.

There is a very easy way to test for this effect. Without using a target or a sender, you would have the receiver undergo the procedure and record the lists of words they generate while in the ganzfeld state. Later you would have judges try to match those passages to randomly chosen pictures from the target and decoy pools. If you find that 'good' target pictures are more likely to be matched by the judges, then there is a potential for certain pictures to match any 'free association' passage.

This would be a possible source of error.

And just saying that the pictures are randomly chosen and randomly rated will not solve this problem.

Here is why:

The basic assumption is that any picture has only a 25% chance of being matched by the words that the receiver speaks.

If this were the case, then each trial in an experimental run would have a chance match rate as follows:
(25%)(25%)(25%)(25%)(25%)(25%)(25%)(25%)(25%)
and the aggregate average would be 25% over a large number of trials. This is the assumption that I say may be in error.

Say instead that different pictures have different chances of matching the receiver's 'free associative' lists, for example:
(10%)(30%)(40%)(60%)(20%)(10%)(30%)(40%)(60%)(20%)

This sample will give an aggregate of 32%, not an aggregate of 25%, because of the varying chance that a random 'free associative' list will match any given picture. The order of the pictures doesn't matter, because it is just a sampling issue of how well the target matches a randomly selected 'free association' list.

So for the procedure to be effective, the pictures would have to be pretested and selected for having only a 25% chance of matching a random 'free associative' list.

Thanks.
This is all complete nonsense. I think this ridiculous argument originated from Robert Todd Carroll of the Skepdic. Read this:
http://skepdic.com/comments/ganzfeldcom.html

amherst
 
Interesting Ian said:


I don't believe zener cards would work very well. There again that is maybe why you want them to use them.

Yes, they wouldn't work very well to "validate" psi abilities. That's because it would get the testing closer to an objective test with less room to wiggle. For some reason, the more factors that are protected against in these types of experiments, the less often we see a "psi" effect.

Basically, if we make the test uncheatable, there is no "psi" effect. Why is that?
 
thaiboxerken said:


Yes, they wouldn't work very well to "validate" psi abilities. That's because it would get the testing closer to an objective test with less room to wiggle. For some reason, the more factors that are protected against in these types of experiments, the less often we see a "psi" effect.

Basically, if we make the test uncheatable, there is no "psi" effect. Why is that?
"By the 1960s, a number of parapsychologists had become dissatisfied with the familiar ESP testing methods pioneered by J. B. Rhine at Duke University in the 1930s. In particular, they believed that the repetitive forced-choice procedure in which a subject repeatedly attempts to select the correct "target" symbol from a set of fixed alternatives failed to capture the circumstances that characterize reported instances of psi in everyday life.

Historically, psi has often been associated with meditation, hypnosis, dreaming, and other naturally occurring or deliberately induced altered states of consciousness. For example, the view that psi phenomena can occur during meditation is expressed in most classical texts on meditative techniques; the belief that hypnosis is a psi-conducive state dates all the way back to the days of early mesmerism (Dingwall, 1968); and cross-cultural surveys indicate that most reported "real-life" psi experiences are mediated through dreams (Green, 1960; Prasad & Stevenson, 1968; L. E. Rhine, 1962; Sannwald, 1959)."
http://homepage.mac.com/dbem/does_psi_exist.html#ganzfeld procedure

The ganzfeld has no more "wiggle" room than Rhine's forced-choice card tests (which were also highly significant). You do realize that Ray Hyman agreed upon and helped design the Auto-Ganzfeld protocol, don't you?

amherst
 
Re: Re: Hi Amherst

amherst said:

The article you are referring to is not a research article but a nontechnical piece written for a skeptical encyclopedia. If you want a detailed discussion of the statistics read the Psychological Bulletin paper.

You presented it; I responded.


I think this deserves repeating:
"I forget why but I do remember that it is bad statistics" .


Read the Psychological Bulletin article since you seem to think Bem is being untruthful.

Did I say untruthful? Do you have a problem being polite, Amherst? I have been polite to you. I have not accused Bem of being untruthful; I am merely stating what is standard in making a claim: that you show the evidence.

What you and others here aren't realizing is that "certain pictures" are just as likely to be targets as they are decoys. I don't know why this is so hard for you to understand.

Excuse me, but you can get off your high horse. I have not been insulting to you; if you do not understand the point I am trying to make, I can try to repeat it.

Example:
The list of words generated by the receiver could be a product of 'free association', or it could be the product of actual 'psi talent'. However, various pictures are going to have a higher or lower chance of matching a 'free association' list. So regardless of whether or not there is 'psi talent', how well the target picture matches a 'free association' list is something that should be controlled for.

It doesn't matter at all whether the pictures are targets or decoys; what matters is the probability of any target picture matching the 'free association' list.

I am sorry if you don't understand this point.

Say that the pictures chosen have a higher than 25% chance of matching the 'free association'. Then by default you will get a higher than 25% hit rate, regardless of any psi effect.

As I said, I think this is something that could be controlled for by pretesting.

" I don't know why this is so hard for you to understand.", take your own advice, ask me what you don't undetsand.

The point I am making is that it doesn't matter at all whether the picture is a target or a decoy. The chance of matching a 'free association' should be controlled for.

This is all complete nonsense. I think this ridiculous argument originated from Robert Todd Carroll of the Skepdic. Read this:
http://skepdic.com/comments/ganzfeldcom.html

amherst

Mr/Ms Amherst, I thought of this argument all by myself. "Ridiculous argument, complete nonsense" does not an argument make.

Why is it 'complete nonsense'? Can you explain a counter-argument? Or will you just engage in name-calling?

It does not matter to my argument at all that there are decoys; what matters is the potential for any target picture to match a 'free association'.
For example, if the pictures in a series of trials have the following probabilities of matching a free association:

(05%)(10%)(15%)(05%)(10%)(15%)(05%)(10%)(15%)(05%)

then the free association match idea would give an aggregate chance rate of 9.5%, way below the 25% hit rate.

(25%)(30%)(35%)(25%)(30%)(35%)(25%)(30%)(35%)(25%)

the chance match rate would be 29.5%

So actually choosing pictures with a low match rate would give an even clearer test of the ganzfeld effect.

I am arguing that you need to control the level at which any picture will match a random 'free association' list.

I await your counter-argument, and I hope you do better than just name-calling.
 
Interesting Ian said:


Materialistic method?? I do not think you understand what materialism means. Science does not remotely imply the materialist metaphysic.
And you are the expert on this? And it's relevant?

Anyway, any psi research which doesn't account for this is worthless. However, the ganzfeld experiments included in the meta-analysis do not allow for this possibility.

No this is not a problem. The target and the 3 decoys are randomly presented to the judges.
You are still trying to equate the specific methodologies. I'm trying to demonstrate that both types of tests are scored subjectively, regardless of the methodologies.

And I am cognizant of this type of cheating. Geller tried it a few years ago on this TV programme, utilising the psychological fact that people tend to select targets in the middle of a row of targets. I particularly remember this ploy because someone switched the targets so that the target was on one of the ends! :D

As I say, I wouldn't place any credence in experiments which allow such cheating. I'm not stupid. But from what I understand of these Autoganzfeld experiments, it is simply impossible for such cheating to happen.

I don't believe zener cards would work very well. There again that is maybe why you want them to use them.
No-one is trying to manufacture failure here. We are trying to insist on objective analysis. If you can think up a more robust and objective methodology than the Zener-card style, then let's hear it!
Ian, I said nothing about cheating, and I don't believe the reputable research groups cheat at all. They are as honest as they can be. The less reputable bunch are simply self-deluding, and then we get down to the outright nutters like Zammit and co. The problem is that these groups think their methodology is sound, and on the surface it is, but deeper analysis shows that it actually isn't - there are issues big and small that could play a decisive part in skewing results. It then takes someone from outside their group to say that the emperor has no clothes.

Not that this problem is limited to psi research either - many previously decisive studies have since been found to be influenced by unrealised factors. The way of science is to be self-correcting, though.
 
Amherst,

"By the 1960s, a number of parapsychologists had become dissatisfied with the familiar ESP testing methods pioneered by J. B. Rhine at Duke University in the 1930s. In particular, they believed that the repetitive forced-choice procedure in which a subject repeatedly attempts to select the correct "target" symbol from a set of fixed alternatives failed to capture the circumstances that characterize reported instances of psi in everyday life.

Bwahahaha.. LOL.. In other words they were dissatisfied with OBVIOUS.. EASILY quantified tests because they showed CLEARLY there was NO psi effect.. and they wanted to show there WAS!

And now the “excuse”

Historically, psi has often been associated with meditation, hypnosis, dreaming, and other naturally occurring or deliberately induced altered states of consciousness. For example, the view that psi phenomena can occur during meditation is expressed in most classical texts on meditative techniques; the belief that hypnosis is a psi-conducive state dates all the way back to the days of early mesmerism (Dingwall, 1968); and cross-cultural surveys indicate that most reported "real-life" psi experiences are mediated through dreams (Green, 1960; Prasad & Stevenson, 1968; L. E. Rhine, 1962; Sannwald, 1959)."
http://homepage.mac.com/dbem/does_p...eld procedure

The ganzfeld has no more "wiggle" room than Rhine's forced-choice card tests (which were also highly significant). You do realize that Ray Hyman agreed upon and helped design the Auto-Ganzfeld protocol, don't you?

Pshaw… It has a crapload more “wriggle room” than cards would have, or we wouldn’t have just had 4 pages of argument about it.

If there was a “psi effect”, simple cards would reveal it IMMEDIATELY… Even a result slightly over 25% (consistently, with large numbers of examinations) would show some effect.. if psi is so piss-weakly dependent upon circumstance, timing and arty-farty complex approaches then it is plain nonsense…

You just keep bending over backwards in a desperate attempt to keep finding a non-existent effect !

Ian,

I don't believe zener cards would work very well. There again that is maybe why you want them to use them.

Of course they don’t work well for you.. they CLEARLY show NO psi effect exists and you so dearly want it to exist!

Sadly you offer NO reason why cards would not work.. or do you adhere to the “it has to be in this sort of dreamy, silly, vague, almost real.. etc” type of circumstances Amherst has alluded to?
 
Originally posted by amherst

The article you are referring to is not a research article but a nontechnical piece written for a skeptical encyclopedia. If you want a detailed discussion of the statistics read the Psychological Bulletin paper.

Dancing David writes:
"You presented it; I responded."
I presented four articles in my original post, one of which was (as I have explained many times) a nontechnical piece written for an encyclopedia. If you had paid attention to my posts, you would have easily known:
1. Why there wasn't a detailed discussion of the statistics in the article.
2. Where you could find a detailed discussion of the statistics.
Yet instead of reading the Psychological Bulletin article you incredibly write:
"It is not sufficient in any research paper to just say Across these studies, receivers achieved an average hit rate of about 35 percent. , that is sloppy reporting, it is crucial to any study like this that the number of targets tested per trial be discussed, the number of trials run and the different parameters for this alleged 35% hit rate".



I think this deserves repeating:
"I forget why but I do remember that it is bad statistics" .


Read the Psychological Bulletin article since you seem to think Bem is being untruthful.

Dancing David writes:
Did I say untruthful? Do you have a problem being polite, Amherst? I have been polite to you. I have not accused Bem of being untruthful; I am merely stating what is standard in making a claim: that you show the evidence.
In the encyclopedia article you were referring to, Bem writes:
"Altogether, 100 men and 140 women participated as receivers in 354 sessions across 11 separate experiments during Honorton's autoganzfeld research program. The experiments confirmed the results of the earlier studies, obtaining virtually the same hit rate: about 35 percent. It was also found that hits were significantly more likely to occur on dynamic targets than on static targets. These studies were published by Honorton and his colleagues in the Journal of Parapsychology in 1990, and the complete history of ganzfeld research was summarized by Bem and Honorton in the January 1994 issue of the Psychological Bulletin of the American Psychological Association (Bem & Honorton, 1994; Honorton et al., 1990)."

Yet you have the gall to say:
"Again, no data to look at to verify the statement that it is 35% or that it is meaningful.
Video is more successful than static.
on targets"
This implies that you think Bem doesn't have the evidence to justify what he is saying. It's basically accusing him of being dishonest, and since you didn't even bother to read the scientific article, I find your statements to be, again, incredible.

What you and others here aren't realizing is that "certain pictures" are just as likely to be targets as they are decoys. I don't know why this is so hard for you to understand.

Dancing David writes:
Excuse me, but you can get off your high horse. I have not been insulting to you; if you do not understand the point I am trying to make, I can try to repeat it.

Example:
The list of words generated by the receiver could be a product of 'free association', or it could be the product of actual 'psi talent'. However, various pictures are going to have a higher or lower chance of matching a 'free association' list. So regardless of whether or not there is 'psi talent', how well the target picture matches a 'free association' list is something that should be controlled for.

It doesn't matter at all whether the pictures are targets or decoys; what matters is the probability of any target picture matching the 'free association' list.

I am sorry if you don't understand this point.

Say that the pictures chosen have a higher than 25% chance of matching the 'free association'. Then by default you will get a higher than 25% hit rate, regardless of any psi effect.


As I said, I think this is something that could be controlled for by pretesting.

" I don't know why this is so hard for you to understand.", take your own advice, ask me what you don't undetsand.

The point I am making is that it doesn't matter at all whether the picture is a target or a decoy. The chance of matching a 'free association' should be controlled for.

This is all complete nonsense. I think this ridiculous argument originated from Robert Todd Carroll of the Skepdic. Read this:
http://skepdic.com/comments/ganzfeldcom.html

amherst



Mr/Ms Amherst, I thought of this argument all by myself. "Ridiculous argument, complete nonsense" does not an argument make.

Why is it 'complete nonsense'? Can you explain a counter-argument? Or will you just engage in name-calling?

It does not matter to my argument at all that there are decoys; what matters is the potential for any target picture to match a 'free association'.
For example, if the pictures in a series of trials have the following probabilities of matching a free association:

(05%)(10%)(15%)(05%)(10%)(15%)(05%)(10%)(15%)(05%)


then the free association match idea would give an aggregate chance rate of 9.5%, way below the 25% hit rate.

(25%)(30%)(35%)(25%)(30%)(35%)(25%)(30%)(35%)(25%)


the chance match rate would be 29.5%

So actually choosing pictures with a low match rate would give an even clearer test of the ganzfeld effect.

I am arguing that you need to control the level at which any picture will match a random 'free association' list.

I await your counter-argument, and I hope you do better than just name-calling.
David, after the ganzfeld sending phase is finished, a receiver is presented with four randomly assembled pictures on a computer screen. The only way a receiver's spurious "free association" of certain pictures could potentially bias the hit rate would be if those pictures were targets more often than decoys. This possibility has been addressed, and shown not to be the case, by Bem in his Response to Hyman:

Content-Related Response Bias

"Because the adequacy of target randomization cannot be statistically assessed owing to the low expected frequencies, the possibility remains open that an unequal distribution of targets could interact with receivers' content preferences to produce artifactually high hit rates. As we reported in our article, Honorton and I encountered this problem in an autoganzfeld study that used a single judging set for all sessions (Study 302), a problem we dealt with in two ways. To respond to Hyman's concerns, I have now performed the same two analyses on the remainder of the database. Both treat the four-clip judging set as the unit of analysis and neither requires the assumption that the null baseline is fixed at 25% or at any other particular value.

In the first analysis, the actual target frequencies observed are used in conjunction with receivers' actual judgments to derive a new, empirical baseline for each judging set. In particular, I multiplied the proportion of times each clip in a set was the target by the proportion of times that a receiver rated it as the target. This product represents the probability that a receiver would score a hit on that target if there were no psi effect. The sum of these products across the four clips in the set thus constitutes the empirical null baseline for that set. Next, I computed Cohen's measure of effect size (h) on the difference between the overall hit rate observed within that set and this empirical baseline. For purposes of comparison, I then reconverted Cohen's h back to its equivalent hit rate for a uniformly distributed judging set, in which the null baseline would, in fact, be 25%.

Across the 40 sets, the mean unadjusted hit rate was 31.5%, significantly higher than 25%, one-sample t(39) = 2.44, p = .01, one-tailed. The new, bias-adjusted hit rate was virtually identical (30.7%), t(39) = 2.37, p = .01, tdiff (39) = 0.85, p = .40, indicating that unequal target frequencies were not significantly inflating the hit rate.

The second analysis treats each film clip as its own control by comparing the proportion of times it was rated as the target when it actually was the target and the proportion of times it was rated as the target when it was one of the decoys. This procedure automatically cancels out any content-related target preferences that receivers (or experimenters) might have. First, I calculated these two proportions for every clip and then averaged them across the four clips within each judging set. The results show that across the 40 judging sets, clips were rated as targets significantly more frequently when they were targets than when they were decoys: 29% vs. 22%, paired t(39) = 2.03, p = .025, one-tailed. Both of these analyses indicate that the observed psi effect cannot be attributed to the conjunction of unequal target distributions and content-related response biases."
http://comp9.psych.cornell.edu/dbem/response_to_hyman.html
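To make Bem's first analysis concrete, here is a minimal sketch of the empirical-baseline idea (my own illustration with made-up numbers, not Bem's code or data): for each four-clip judging set, multiply the proportion of sessions in which each clip was the target by the proportion of sessions in which it was rated as the target, and sum across the four clips.

```python
# Illustrative sketch of the empirical null baseline Bem describes, for a single
# four-clip judging set. All numbers below are made up for demonstration.
def empirical_baseline(target_props, rating_props):
    """Sum over clips of P(clip was the target) * P(clip was rated as the target):
    the hit rate expected if ratings were unrelated to which clip was the target."""
    return sum(t * r for t, r in zip(target_props, rating_props))

# Hypothetical judging set where target frequencies are unequal AND receivers
# happen to prefer the same clips (the worst case Hyman was worried about):
targets = [0.40, 0.30, 0.20, 0.10]
ratings = [0.40, 0.30, 0.20, 0.10]

print(empirical_baseline(targets, ratings))        # 0.30 -- null baseline above 25%
print(empirical_baseline([0.25] * 4, [0.25] * 4))  # 0.25 -- uniform case
```

The observed hit rate for each set is then compared against this baseline (via Cohen's h in Bem's analysis) rather than against a flat 25%.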



amherst
 
Aussie Thinker said:
Amherst,



Bwahahaha.. LOL.. In other words they were dissatisfied with OBVIOUS.. EASILY quantified tests because they showed CLEARLY there was NO psi effect.. and they wanted to show there WAS!

And now the “excuse”



Pshaw… It has a crapload more “wriggle room” than cards would have, or we wouldn’t have just had 4 pages of argument about it.

If there was a “psi effect”, simple cards would reveal it IMMEDIATELY… Even a result slightly over 25% (consistently, with large numbers of examinations) would show some effect.. if psi is so piss-weakly dependent upon circumstance, timing and arty-farty complex approaches then it is plain nonsense…

You just keep bending over backwards in a desperate attempt to keep finding a non-existent effect !

Your ignorance is profound. As I've said before, the ganzfeld protocol was one approved and partially designed by parapsychology's most sophisticated skeptic, Ray Hyman.

"After the 1985 meta-analysis were published, Hyman and Honorton agreed to write a joint communique. in the the communique, which was published in 1986, they began by describing the points on which they agreed and disagreed:

'We agree that there is an overall significant effect in this data base that cannot reasonably be explained by selective reporting or multiple analysis. We continue to differ over the degree to which the effect constitutes evidence for psi, but we agree that the final verdict awaits the outcome of future experiments conducted by a broader range of investigators and according to more stringent standards.'


They then specified in detail the "more stringent standards" that future experiments would have to follow to provide evidence that satisfied the skeptics. Honorton was especially interested in getting Hyman to agree publicly to these criteria, as skeptics are notorious for changing the rules of the game after all previous objections have been met and new experiments continue to provide significant results.
The new standards, acceptable to both Honorton and Hyman, included such things as rigorous precautions against sensory leakage, extensive security procedures to prevent fraud, detailed descriptions of how the targets were selected, full documentation of all experimental procedures and equipment used, and complete specifications about what statistical tests were to be used to judge success. With a recipe agreed to by the leading ganzfeld psi researcher and the leading skeptic, the stage was set to see whether future ganzfeld studies would continue to show successful results. If they did, then the skeptics would be forced to admit that something interesting was going on." (Radin, 1997)


Further:
"The automated ganzfeld procedure was critically examined by several dozen parapsychologists and behavioral researchers from other fields, including well-known critics of parapsychology. In addition, two "mentalists," magicians who specialize in the simulation of psi, examined the experiment to ensure that it was not vulnerable to inadvertent sensory leakage or to deliberate cheating on the part of the participants."
http://comp9.psych.cornell.edu/dbem/ganzfeld.html

The success of the ganzfeld under such stringent controls forced Ray Hyman to comment:

"Honorton's experiments have produced intriguing results. If independent laboratories can produce similar results with the same relationships and with the same attention to rigorous methodology, then parapsychology may indeed have finally captured its elusive quarry"(1991, pg. 392)

amherst
 
Amherst

Your ignorance is profound. As I've said before, the ganzfeld protocol was one approved and partially designed by parapsychology's most sophisticated skeptic, Ray Hyman.

"After the 1985 meta-analysis were published, Hyman and Honorton agreed to write a joint communique. in the the communique, which was published in 1986, they began by describing the points on which they agreed and disagreed:

'We agree that there is an overall significant effect in this data base that cannot reasonably be explained by selective reporting or multiple analysis. We continue to differ over the degree to which the effect constitutes evidence for psi, but we agree that the final verdict awaits the outcome of future experiments conducted by a broader range of investigators and according to more stringent standards.'


They then specified in detail the "more stringent standards" that future experiments would have to follow to provide evidence that satisfied the skeptics. Honorton was especially interested in getting Hyman to agree publicly to these criteria, as skeptics are notorious for changing the rules of the game after all previous objections have been met and new experiments continue to provide significant results.
The new standards, acceptable to both Honorton and Hyman, included such things as rigorous precautions against sensory leakage, extensive security procedures to prevent fraud, detailed descriptions of how the targets were selected, full documentation of all experimental procedures and equipment used, and complete specifications about what statistical tests were to be used to judge success. With a recipe agreed to by the leading ganzfeld psi researcher and the leading skeptic, the stage was set to see whether future ganzfeld studies would continue to show successful results. If they did, then the skeptics would be forced to admit that something interesting was going on." (Radin, 1997)


Further:
"The automated ganzfeld procedure was critically examined by several dozen parapsychologists and behavioral researchers from other fields, including well-known critics of parapsychology. In addition, two "mentalists," magicians who specialize in the simulation of psi, examined the experiment to ensure that it was not vulnerable to inadvertent sensory leakage or to deliberate cheating on the part of the participants."
http://comp9.psych.cornell.edu/dbem/ganzfeld.html

The success of the ganzfeld under such stringent controls forced Ray Hyman to comment:

"Honorton's experiments have produced intriguing results. If independent laboratories can produce similar results with the same relationships and with the same attention to rigorous methodology, then parapsychology may indeed have finally captured its elusive quarry"(1991, pg. 392)

Oh yeah .. I am ignorant because I want an indisputable test ???

Your STUPID Ganzfeld tests just end up in disputes all over the place.. interpretation, subjectivity, etc. etc.

What is wrong with a simple card test???.. Make it as dreamy as you like. (You have produced no REAL reason for excluding them.)

Simple test

Give the sender 1 random card from 4, and the receiver gets to choose from all 4 cards..
Let's have a circle, square, triangle and cross!

What the hell is wrong with this INDISPUTABLY simple experiment ???

You can rant all you like but if this sort of test produces NO psi effect then there is NO psi effect.
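For what it's worth, here is a rough sketch of that four-card test, plus a back-of-the-envelope estimate (normal approximation, parameters made up: one-sided alpha = 0.05, power = 0.80) of how many trials it would take to separate a modest 30% hit rate from the 25% chance rate:

```python
# Rough sketch of the proposed four-card forced-choice test. Parameters are
# illustrative only (one-sided alpha = 0.05, power = 0.80, normal approximation).
import math
import random

def simulate_card_test(trials, true_hit_rate=0.25, seed=1):
    """Each trial is a hit with probability true_hit_rate
    (0.25 = pure guessing with one target among four cards)."""
    rng = random.Random(seed)
    hits = sum(rng.random() < true_hit_rate for _ in range(trials))
    return hits / trials

def trials_needed(p0=0.25, p1=0.30, z_alpha=1.645, z_beta=0.84):
    """Approximate number of trials needed to detect a true hit rate p1 against chance p0."""
    num = z_alpha * math.sqrt(p0 * (1 - p0)) + z_beta * math.sqrt(p1 * (1 - p1))
    return math.ceil((num / (p1 - p0)) ** 2)

print(simulate_card_test(1000))  # ~0.25 for a guessing-only receiver
print(trials_needed())           # a few hundred trials for 30% vs 25%
```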
 
Nice to see my prediction of it becoming a battle of quotes has come true.

About this whole 'best guessing techniques', I'm not convinced it's such a killer argument. It was shown that PEAR had a best guessing technique, and if you're comparing the sender's notes to just one target, then sure, it'd be fine.

Best guessing techniques, in the classic ganzfeld set-up, would only work if the receiver had a thorough knowledge of the target set, received feedback after every trial, and the target sets were chosen without replacement, much as a professional card shark knows which cards remain in the pack during a game. Even then, this wouldn't increase the hit rate, but it could increase the accuracy of any hits that are gained.

I agree with Amherst: I'm not at all sure that response bias can influence the hit rate. If a receiver knows that half of the target pool has a water feature, and he talks about water in his mentation, his hit rate remains at 25% (I think so, anyway).

True, he has a 50:50 chance of the target being a picture with a water feature. Plus, there's only a 1 in 8 chance of all three decoys having water too, so that situation (which would give a 25% hit rate by chance) is less frequent. In short, the water target would probably be judged against one water decoy (50% hit rate by chance) or two (33% hit rate by chance). The trouble is, this only happens half the time. There's still a 50:50 chance that the target has no water in it, so it wouldn't be picked as the hit by the judge in the first place.

Response bias only comes into play if the random selection of targets happens to coincide with the natural tendencies of people to imagine or avoid describing certain things. It's a post hoc explanation, largely. Study 302 in the Does Psi Exist paper by Bem and Honorton is a nice example of this. The study, which was unfinished, ran for 25 trials and got a hit rate of 64% (i.e., 16 hits instead of 7). Bem and Honorton saw that the targets had not been chosen in an even manner: the targets containing a human figure (Bugs Bunny, actually) and water were chosen more often than the snake or the sex scene, and this reflected what people are more likely to talk about when viewing. Adjusting for this gave a new baseline score of 34% (8 or 9 hits) against which the hit rate should be compared. It was still an impressive score (adjusted to 54% in Bem's paper), but it also works as an illustration of what response bias can do.
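As a quick sanity check (a simulation of my own, not anything from Bem or Honorton), you can model a judge who simply favours water clips: with the target drawn uniformly from the four clips in a judging set, the hit rate stays at about 25%, and it only climbs when the target selection itself happens to be skewed towards the clips people tend to describe:

```python
# Quick Monte Carlo sketch of the response-bias point above. Illustrative only.
import random

def hit_rate(trials, water_target_weight=1.0, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # A judging set of four clips; each is a "water" clip half the time.
        clips_have_water = [rng.random() < 0.5 for _ in range(4)]
        # Target selection: uniform if the weight is 1, skewed towards water clips otherwise.
        weights = [water_target_weight if w else 1.0 for w in clips_have_water]
        target = rng.choices(range(4), weights=weights)[0]
        # A judge with a pure content preference: pick a water clip whenever there is one.
        water_ids = [i for i, w in enumerate(clips_have_water) if w]
        guess = rng.choice(water_ids) if water_ids else rng.randrange(4)
        hits += (guess == target)
    return hits / trials

print(hit_rate(200_000))                           # ~0.25: the preference alone changes nothing
print(hit_rate(200_000, water_target_weight=3.0))  # well above 0.25: preference + skewed targets
```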
 
