
Ganzfeld effect

Yahweh said:

Think of a number between 1 and 4...

Highlight:
<div style="background:#000000">Were you thinking of the number 3?

You'll notice that a receiver will have a tendency to rate the third image as "most likely". Of course, if the image the sender selected is chosen at random, it shouldn't matter much what kind of unconscious "weighting" of likelihood the receiver applies.</div>

I chose 2, but anyway, do you have actual evidence that a receiver will have a tendency to rate the 3rd image as most likely? :)
 
T'ai Chi said:
I chose 2, but anyway, do you have actual evidence that a receiver will have a tendency to rate the 3rd image as most likely? :)

I already asked for that. In the post right before yours.
 
CFLarsen said:

Do you agree or disagree that the hit rate was lower in the autoganzfeld studies (in comparison to the ganzfeld studies), Claus?

AGREE or DISAGREE?
 
How are hits rated in the autoganzfeld studies, Tai?

BTW, did you want to respond to the issue of experimental error, or are you just going to box with CFL? I am a firm believer in the scientific method; if they tighten the criteria for what constitutes a hit, then we can discuss the statistical validity. I totally disagree with the aggregate lumping of results with such sloppy methodology, and then they say things like "the number of hits from these twenty-eight studies is like a random chance of one in a million."

If they tighten up the criteria and the methodology, then it would be significant. Social science, even when not related to psi research, is rife with poor methodology.
 
Tai Chi, here's four different pictures.

Suppose these had been the set of four "targets" used with the ganzfeld viewer. Only one of the four is selected. The viewer then responds with words like "yellow", "child", "flower" and "car". Which one would YOU choose?

What if the viewer described their vision as: "I see something round and something yellow." Now which one would YOU choose as the best match?

Now another question: which ones would you choose as NOT matching that criteria? Having chosen them, would you accept that the remaining pictures would be possible matches for the criteria? What percentage of all the targets is that?

zep

[Images: a yellow flower, a yellow car, a blue car, and a child bent over picking a flower]
 
Zep, thank you!
I don't think that those pictures were chosen to minimize confusion, but they do point out the problems with interpretation.

yellow: at least two pictures will be positive for yellow.
eye: an interpretation that the flower could be mistaken for an eye.
hat: at least two of the pictures contain hats.
happy: what if they say that the flower could be happy, as could the kids?
old: the guy with the car looks old.
young: the flower and all the children could get a hit on that.
round: the flower and the cars could be a hit with that.

ETC, etc, etc.

The less cluttered the picture the better, and the narrower the interpretations the better.

For example, words that I would consider a hit on the flower:
Flower, yellow flower, black-eyed Susan, sunflower; they are all related to the flower itself. I would (myself) exclude the answer "yellow" on its own!

If you use color, then the whole set of signals should be nothing but colors; that would reduce the ambiguity!

Where is Tai?
 
Actually, it was way harder than this when it came to PEAR's testing (they did much testing on the ganzfeld stuff). They had a person go to a place and try to "send" the scene back to the viewer. Have a look at a typical scene they MIGHT have viewed (Mt Rainier). What do YOU think are the central aspects of this scene? Now what are the "standout" bits - the bits that grab your attention?

[Image: a couple at Mt Rainier]
 
And this one needs some thinking about - what are the notable aspects of this picture FOR YOU?

[Image: Davies window view]
 
These are wonderful examples:
sky (Mt Rainier) (and gosh is it pretty there)
yellow (Arizona)
dark (room)

There are probably a good twenty words for each picture.
 
Zep said:
Tai Chi, here's four different pictures.


You present too simple a scenario, one which I'd think researchers would be well aware of. I think you'd have to show first that they don't choose pictures to be different from the others.

If the pictures were instead a book, a cat, a car, and Jupiter, their descriptions would be more exclusive.

I'm thinking the researchers took this into consideration and chose pictures based on exclusiveness, but I'm not 100% certain. We'd need to contact them and ask, I guess, if it's not made clear in some paper somewhere.
 
Dancing David said:
How are hits rated in the autoganzfeld studies, Tai?


The percipient or an independent judge(s) compare the obtained impressions to four different target pictures/video clips.


BTW, did you want to respond to the issue of experimental error, or are you just going to box with CFL?


Both. :)


I totally disagree with the aggregate lumping of results with such sloppy methodology, and then they say things like "the number of hits from these twenty-eight studies is like a random chance of one in a million."


I fail to see where it is sloppy.
 
T'ai Chi said:
You present too simple a scenario, one which I'd think researchers would be well aware of. I think you'd have to show first that they don't choose pictures to be different from the others.

If the pictures were instead a book, a cat, a car, and Jupiter, their descriptions would be more exclusive.

I'm thinking the researchers took this into consideration and chose pictures based on exclusiveness, but I'm not 100% certain. We'd need to contact them and ask, I guess, if it's not made clear in some paper somewhere.
Here is one of the original questions used to "rate" the scenes: Is any significant part of the scene hectic, chaotic, congested, or cluttered? You have to answer YES/NO. And here again is a scene to consider.

[Image: a couple at Mt Rainier]


Oh, and I made a spelling boo-boo too. This is Mt Rainier, not Mt Ranier.
 
In Ganzfeld, the person judging the hit doesn't know what the target is and just picks the closest one; there is nothing subjective about a hit in that context. It doesn't matter if there were two similar pictures and the description could have applied to both but the person guessed wrong - it's still a miss. And if the description had nothing to do with any of the pictures at all but the guess is correct, it still counts as a hit. Autoganzfeld addressed the issue of potential bias due to image content by having the images selected randomly from a large pool. It looks like a very well designed study to me; in fact, Randi's challenge to Sylvia Browne actually incorporates some similar principles.
 
T'ai Chi said:

The percipient or an independent judge(s) compare the obtained impressions to four different target pictures/video clips.

Fair 'nuff. As I said, as someone who has studied the social sciences, I feel it is important to narrow the criteria as much as possible; this is solely to reduce the experimenter-based error. It is simply a matter of importance in a lot of research, and believe me, I do not single out psi research. It happens a lot in social science in general. One study with unclear methodology on twenty non-random individuals will get hauled out repeatedly by pundits for whatever cause. Take the "just say no to sex" form of sex education; it is an area where certain poorly designed studies are hauled out frequently. There has never been replication of results, but they get hauled out all the same.


Both. :)



I fail to see where it is sloppy.

It is not the meta-analysis that I am calling sloppy, although some people abuse the statistics of meta-analysis as well. If there are twenty-eight studies and, let us say, roughly half have poor methodology, then that sloppy methodology gets incorporated into the meta-analysis. Which makes that analysis garbage.
Then there is the combining of results to give some sort of random statistical power, where they go "in each study the chance of this event happening randomly is one in twenty; we had one hundred studies and therefore the random chance of this happening is one in 20,000". This is a total abuse of the nature of random events. And the whole idea that you can say "random chance is X, we got X+10 and therefore there is an effect" is a matter of some debate. The question in statistics is how much random variation there normally is in the sample to begin with, and then to decide if the result rises above the level of the random noise/variation in the sample.
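
To make that last point concrete, here is a minimal sketch of the kind of pooled check being described, with made-up counts rather than figures from any actual ganzfeld study: count the total hits across all trials and ask how often a 1-in-4 chance process would produce at least that many, instead of multiplying per-study odds together.

```python
from math import comb

# A rough sketch of the pooled check described above. The counts here are
# hypothetical, chosen only for illustration; they are not figures from the
# ganzfeld literature.
def binomial_tail(hits, trials, p=0.25):
    """P(X >= hits) when X ~ Binomial(trials, p)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

if __name__ == "__main__":
    trials, hits = 400, 122            # hypothetical pooled counts
    print(f"expected by chance: {0.25 * trials:.0f} hits out of {trials}")
    print(f"observed (made up): {hits} hits")
    print(f"probability of at least {hits} hits by chance alone: "
          f"{binomial_tail(hits, trials):.4f}")
```

Of course, whether such a tail probability means anything still depends on the quality of the underlying studies, which is the point being argued above.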
 
curious said:
In Ganzfeld, the person judging the hit doesn't know what the target is and just picks the closest one; there is nothing subjective about a hit in that context. It doesn't matter if there were two similar pictures and the description could have applied to both but the person guessed wrong - it's still a miss. And if the description had nothing to do with any of the pictures at all but the guess is correct, it still counts as a hit. Autoganzfeld addressed the issue of potential bias due to image content by having the images selected randomly from a large pool. It looks like a very well designed study to me; in fact, Randi's challenge to Sylvia Browne actually incorporates some similar principles.

Hi Curious, welcome to the forum!

I think I don't understand this sentence; sorry, I worked a double shift yesterday:
In Ganzfeld, the person judging the hit doesn't know what the target is and just picks the closest one; there is nothing subjective about a hit in that context.
If I understand what you wrote, this would just make the situation worse. But my brain is very fuzzy right now, please clarify.


And if the description had nothing to do with any of the pictures at all but the guess is correct it still counts as a hit.

This confuses me even more.

In the sites I looked at prior, the sender was looking at pictures and the receiver was making verbal statements. At some point the experimenter was judging if the words matched the pictures. Are you discussing a different methodology?
 
Hi, and TY for the welcome. :)

If I understand what you wrote, this would just make the situation worse. But my brain is very fuzzy right now, please clarify.
My brain's fuzzy atm also; I hope I make better sense this time. The receiver's descriptions of the image (or whatever media is used) are given to a judge (who actually could be the receiver), who is then shown the target image along with 3 other pictures, not knowing which was the target. At this point the judge is basically just deciding which image they think is closest to the description the receiver gave. No matter how close, or how ridiculously off the descriptions may be, one image is always chosen, and if it's correct it counts as a hit. So a hit rate of 25% should be expected, and if the experiment is done properly, anything much different from 25% is odd.
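
As a rough illustration of that 1-in-4 baseline, here is a minimal simulation sketch (the session count and seeds are made up, not taken from any actual study): with a forced choice among four pictures and no information passing at all, the hit rate hovers around 25%, wobbling by a few percent from run to run.

```python
import random

# A minimal sketch of the chance baseline described above: the judge is forced
# to pick exactly one of four pictures, so with no information transfer each
# session is a hit with probability 1/4. The session count is hypothetical.
def simulate_sessions(n_sessions, n_choices=4, seed=None):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sessions):
        target = rng.randrange(n_choices)  # which picture was actually "sent"
        judged = rng.randrange(n_choices)  # the judge's forced choice
        if judged == target:
            hits += 1
    return hits

if __name__ == "__main__":
    n = 300  # hypothetical number of sessions
    for run in range(5):
        hits = simulate_sessions(n, seed=run)
        print(f"run {run}: {hits}/{n} hits = {hits / n:.1%}")
```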

In the sites I looked at prior, the sender was looking at pictures and the receiver was making verbal statements. At some point the experimenter was judging if the words matched the pictures. Are you discussing a different methodology?
I'm not sure. If the experimenter was unaware of what the target was and the comparisons were part of the decision process for picking just one picture, then it could have been Ganzfeld methodology.
 
posted by Curious
At this point the judge is basically just deciding which image they think is closest to the description the receiver gave.

Now that is a really interesting methodology, so there are two stages to the experiment: in phase one the sender looks at a picture and the receiver says a list of words.
In phase two, another person then reads the list of words and judges which picture matches the list of words. Or, as you say, which picture is the closest to the list of words. Hmm...

I will have to think about this; on the surface this would be a good example of a place to introduce experimental error, but I will have to consider and get back to you.
 
