The Ganzfeld Experiments

Loki said:
But in the end, if there is a finite pool of photos, and each photo is chosen as the target the same number of times, then it doesn't matter what the 'bias' might be (provided it's constant!). For example, if we had a pool of just 2 photos, and one has a 'bias' of 80% to the other's 20%, then as long as (a) we select each photo as a target 50% of the time and (b) the receiver selects according to the bias, we should get a "by chance" result.
Indeed so, as long as no partial credit is awarded by the judging process.
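To spell out the arithmetic: if each of n photos is the target equally often and the receiver picks photo i with a fixed probability p_i, the hit rate is (1/n)(p_1 + ... + p_n) = 1/n, whatever the bias. A minimal simulation of the two-photo, 80/20 example in Python (the numbers are just the ones given above):

import random

TRIALS = 100_000
BIAS = [0.8, 0.2]  # receiver's fixed preference for photo 0 over photo 1

hits = 0
for i in range(TRIALS):
    target = i % 2  # each photo serves as the target equally often
    guess = random.choices([0, 1], weights=BIAS)[0]  # guess follows the constant bias
    hits += (guess == target)

print(hits / TRIALS)  # comes out around 0.5: the chance rate, despite the bias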

Of course, this assumes that 'selection bias' is a constant. Perhaps people start off a session with a preference for "water" scenes, but subconsciously tire of this, and eventually start to choose against water scenes in later trials?
Interesting thought.

Sorry, I keep thinking we've switched to talking about psychics, so my remarks about probabilities were directed at psychic abilities.

~~ Paul
 
There is another assumption here, too: that the decoys are selected at random from the same pool of photos as the targets.

~~ Paul
 
We're going round in circles. First, get down from your high horse: I'm not saying Radin is lying, I'm saying he's mistaken. Since this is an oft-repeated mistake, I hardly think it counts as a black mark against his character! So I have no evidence that he's lying, because I don't think he's lying. I think he IS mistaken, and I have evidence that he's mistaken. Even in his own words, he includes in his analysis the first meta-analysis (by itself containing only 28 of the first 40 ganzfeld experiments), the PRL experiments and then post-PRL experiments until '97. He himself doesn't mention the work from '85-'91. Why do you assume that he included them?

Quite apart from the fact that his figures are too low to encompass the entire ganzfeld database.

And I still don't see how he could have those kinds of figures (over 280 for Edinburgh, 500 plus for Durham) unless he had the results from those large scale experiments published in 1997. If you could answer that, that'd be nice.

As for the Bierman paper, I previously said:

Well, in 1993 Bierman, in his paper “Anomalous information access in the Ganzfeld: Utrecht - Novice series I and II” did a brief overview of these and found that the effect size post-1985 had dropped considerably. He writes:

“However, the point remains that the 17 Ganzfeld experiments reported since the first metaanalysis in 1985 and for which we could infer the effect size that we were able to locate, do conflict with the outcomes reported in that 1985-analysis which incorporated 28 studies. In fact the effect-sizes do regress to chance expectation as can be seen from the linear regression analysis.”


amherst said:

-The only Bierman paper I remember us discussing was the one he co-authored on response bias. Since all of his articles are available on his website http://a1162.fmg.uva.nl/~djb/publications/ can you please provide a link to the one you are referring to so I can read it?

-Since the abstracts you've listed for the 85-91 ganzfeld studies don't provide much detail, I'd like to read what Bierman says about them before I give any real comment. I will just reiterate that Radin claims he included every published ganzfeld replication attempt as of early 97 in his meta-analysis. And he certainly knew about the studies you listed in your report.

-Radin pg. 88:
"From 1974 to 1997, some 2,549 ganzfeld sessions were reported in at least forty publications by researchers around the world."

Radin is claiming he obtained all his data from published material. Unless you have evidence that he is lying (which you don't), you should retract your claim that he added pre-published studies.

amherst
 
Loki said:

There is a 25% chance of picking the 'target' if a purely random selection process is used. If the process is not random (and we both think that human thought processes are NOT purely random?) then I'm not sure exactly how we calculate the 'probability per photo'.


I've been thinking of having 4 photos like normal, but then have one that is entirely red, one that is entirely blue, one that is entirely black, and one that is entirely white, for example. Let the ganzfeld participants know what the colors are, but not the picture, and inform them they have to 'see' one of these colors for the reading to count.

Hmm, just playing around with that idea, not sure if that would solve anything. And not sure if we could dictate what they have to see in a session. :P

Of course, if they claimed to be able to view just colors, we could only use the colored paper and not the pictures. :) I think that would be 25% probability because there is nothing to debate, nothing to judge.

That is why I think the pictures need to be as simple as possible with not much 'going on' in the pictures. Maybe they are, I don't know, I haven't seen many of them, has anyone else?
 
Ersby said:

Nope, nothing else. Just the effect sizes. Oh well, not to worry. Does something simple like the mean effect size have any worth? Or is it just a meaningless (ha ha) number?

Well, it would be like taking the mean of a lot of r's (correlation coefficients), for example. It might say something useful for description.
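If the effect sizes are r-type correlations, the conventional way to average them is via Fisher's z transform rather than a raw mean. A small sketch in Python (the r values are made up, not figures from any study discussed here):

import math

rs = [0.32, 0.10, -0.05, 0.21, 0.08]  # hypothetical effect sizes (r)

raw_mean = sum(rs) / len(rs)

# Fisher z: average atanh(r) in z space, then back-transform with tanh
fisher_mean = math.tanh(sum(math.atanh(r) for r in rs) / len(rs))

print(round(raw_mean, 3), round(fisher_mean, 3))

For small effect sizes the two hardly differ, so a simple mean is a defensible descriptive number.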
 
Ersby said:
We're going round in circles. First, get down from your high horse: I'm not saying Radin is lying, I'm saying he's mistaken. Since this is an oft-repeated mistake, I hardly think it counts as a black mark against his character! So I have no evidence that he's lying, because I don't think he's lying. I think he IS mistaken, and I have evidence that he's mistaken.
You're getting mixed up. There are two things you claim Radin is wrong about. The first being that he wasn't all-inclusive in his analysis pre-early '97. That's a matter I will deal with below. The second thing you claim Radin is wrong about, and the one I was obviously referring to when I told you that you should retract your claim, is his statement that all of the experiments he listed in his meta-analysis were taken from published material.

You don't really think that it's an "oft-repeated" mistake for researchers to unknowingly include studies which haven't been published yet in meta-analyses of published material, do you? Radin explicitly states that he got all of his data from published sources. How could he be mistaken about a published study versus an unpublished one? It is clear that if you are right about this, Radin is not mistaken but lying.
Even in his own words, he includes in his analysis the first meta-analysis (by itself containing only 28 of the first 40 ganzfeld experiments), the PRL experiments and then post-PRL experiments until '97. He himself doesn't mention the work from '85-'91. Why do you assume that he included them?
Since he says the overall hit rate of his meta-analysis "...is the combined estimate based on all available ganzfeld sessions, consisting of a total of 2,549 sessions.", and since you, a layman, know about the studies conducted from '85-'91, certainly Radin, a professional with access to all the studies, knew about them too and included them.

I should note here that Radin writes: "Figure 5.4 includes all studies where the chance hit rate was 25 percent."
Quite apart from the fact that his figures are too low to encompass the entire ganzfeld database.
He reports 2,549 total sessions reported as of early 1997. How many do you think there had been up to that time? Unless you can come up with a different number, backed by studies, you can't say that his figures are "too low".
And I still don't see how he could have those kinds of figures (over 280 for Edinburgh, 500 plus for Durham) unless he had the results from those large scale experiments published in 1997. If you could answer that, that'd be nice.
You seem to think it would be impossible to conduct 289 sessions in three years. Why? And I already gave you a detailed response as to how, since we don't know the exact date the 1997 Durham work was published, if it was published in early 1997 and included in Radin's analysis, everything fits. But if it wasn't published before Radin's analysis and not included, I still don't see how everything doesn't fit.
As for the Bierman paper, I previously said:

Well, in 1993 Bierman, in his paper “Anomalous information access in the Ganzfeld: Utrecht - Novice series I and II” did a brief overview of these and found that the effect size post-1985 had dropped considerably. He writes:

“However, the point remains that the 17 Ganzfeld experiments reported since the first metaanalysis in 1985 and for which we could infer the effect size that we were able to locate, do conflict with the outcomes reported in that 1985-analysis which incorporated 28 studies. In fact the effect-sizes do regress to chance expectation as can be seen from the linear regression analysis.”

In that same paper he also writes:
"If we take a global look at the present series there is no clear sign of any paranormal effect in the data. As argued in the results-section the direct scoring rates however do not invalidate previous meta-analysis. Actually, if we compare the present results with novice series from other laboratories the global chance results are to be expected. So it seems too early to draw negative conclusions from this chance result."

It is not too early to draw conclusions any longer. We now know based on the recent meta-analyses that "...replications yield significant effect sizes comparable with those obtained in the past." http://comp9.psych.cornell.edu/dbem/updating_the_ganzfeld_data.htm

amherst
 
amherst said:
You're getting mixed up. There are two things you claim Radin is wrong about. The first being that he wasn't all-inclusive in his analysis pre-early '97. That's a matter I will deal with below. The second thing you claim Radin is wrong about, and the one I was obviously referring to when I told you that you should retract your claim, is his statement that all of the experiments he listed in his meta-analysis were taken from published material.

You don't really think that it's an "oft-repeated" mistake for researchers to unknowingly include studies which haven't been published yet in meta-analyses of published material, do you? Radin explicitly states that he got all of his data from published sources. How could he be mistaken about a published study versus an unpublished one? It is clear that if you are right about this, Radin is not mistaken but lying.

You know, your insistence on trying to bring emotive issues into the argument is pretty tiresome. Let's see: Radin writes to the six institutes asking for results, and gets results which, although unpublished, are due to be published in a short amount of time. So, in his book he says they are published. This is more artistic licence than lying, if you want my opinion.

Since he says the overall hit rate of his meta-analysis "...is the combined estimate based on all available ganzfeld sessions, consisting of a total of 2,549 sessions.", and since you, a layman, know about the studies conducted from '85-'91, certainly Radin, a professional with access to all the studies, knew about them too and included them.

Nope, I don't buy this at all. You're making the assumption that a professional instantly knows more than an amateur. This is not necessarily the case. Does Radin make any reference to experiments from that time? Munson "FRNM: a ganzfeld replication", for example?

Besides, take a look at that word "available". Does that mean available to him at the time? I think Radin himself is not making the claim that you say he is.

He reports 2,549 total sessions reported as of early 1997. How many do you think there had been up to that time? Unless you can come up with a different number, backed by studies, you can't say that his figures are "too low".

I already showed how his figures are wrong. Your opinion seems to be that Radin is right because he says so.

You seem to think it would be impossible to conduct 289 sessions in three years. Why? And I already gave you a detailed response as to how, since we don't know the exact date the 1997 Durham work was published, if it was published in early 1997 and included in Radin's analysis, everything fits. But if it wasn't published before Radin's analysis and not included, I still don't see how everything doesn't fit.

It's not impossible, but it is unlikely, especially if you consider that this large-scale experiment seems to have vanished without trace. And that this large-scale experiment that has vanished JUST SO HAPPENS to have similar results to one published in 1997.
 
Ersby, this is the current state of our debate:
In previous posts, your major argument for Radin's analysis not being all-inclusive was to say that the 1,661 sessions reported in the Milton and Wiseman analysis did not add up with Radin's supposedly all-inclusive one of 2,549, since, if you add the 1,661 together with the 355 from PRL and the original 762, you get 2,778.

So, I pointed out to you that Radin's analysis only included everything pre-early '97, and then showed you that only 22 of the 40 studies in Milton and Wiseman's analysis were published in '97 or earlier. These studies accounted for only 770 sessions and therefore fit with Radin's all-inclusive 2,549-session analysis.
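To put the numbers side by side (a quick check in Python, using only the figures already quoted in this thread):

honorton_ma = 762   # sessions in Honorton's 1985 meta-analysis
prl         = 355   # PRL autoganzfeld sessions
mw_total    = 1661  # all sessions in the Milton/Wiseman analysis
mw_pre97    = 770   # the 22 M/W studies published by early '97

print(honorton_ma + prl + mw_total)  # 2778: overshoots Radin's 2,549
print(honorton_ma + prl + mw_pre97)  # 1887: fits within Radin's 2,549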

For the sake of thoroughness and to avoid possible confusions later, I also pointed out that the 1997 Edinburgh study of 128 sessions, which was reported in the (updated) Milton and Wiseman analysis, was not included in Radin's. I noted that we could know this because Radin wrote that "The Edinburgh experiments conducted from 1993 to 1996 (and still ongoing), consisted of five published reports and 289 sessions ..." So according to Radin, his analysis only included Edinburgh studies published from 1993 to 1996, not the 97 study used in the (updated) Milton and Wiseman analysis.

Yet, for some reason I don't quite understand, you have been trying to show that this study was in fact included in Radin's analysis even though what he's written indicates that it was not. The only reason I can think of for you wanting to do this is that, if you showed Radin used a pre-published study, even though what he wrote indicates that he didn't, then you could also claim he used other studies listed in the Milton/Wiseman analysis which hadn't been published pre-early '97. You could then say he picked and chose which unpublished studies he used in his analysis, including successful ones and excluding those which were unsuccessful. When I asked you if this was the case in a previous post, you gave me a vague answer. But this is the only (and also quite an absurd) reason I can see for your placing such a great importance on that Edinburgh study, which, even if it had been included in Radin's analysis, changes nothing.


amherst
 
Now that I think about it, maybe the 128-session Dalton study was actually published in early 1997, and Radin did include it in his analysis. Since the quote only gives the period in which the Edinburgh studies were conducted and not the period of publication, it is quite possible that the Dalton study was published in early '97 and therefore included. If this is so, or if it is not so, like I said before, it doesn't change anything. Everything still indicates that Radin included every published pre-early-'97 study in his analysis.

amherst
 
I'm not sure why so much emphasis is being placed on Radin's meta-analysis... Storm/Ertel (2001) published a reply to the Milton/Wiseman meta-analysis in Psychological Bulletin in which they did an even more comprehensive meta-analysis than Radin (they found 11 unpublished studies from the period 1982-86 and also included experiments post-1997). However, the question is whether this effect is replicable. Certainly some experiments get results which are not likely due to chance, although most are within the realm of chance. In addition, the problem is clouded, mainly in the most recent data, by a trend toward "psi missing", i.e. negative z scores and effect sizes.

The 42 studies (later reduced to 28 to account only for direct hits) analyzed by Hyman and Honorton had clear problems. Honorton and Hyman debated how these problems may or may not have contributed to the results, but both agreed that the answer could only be resolved by tests which eliminated these problems... hence the autoganzfeld. But the autoganzfeld database is plagued with possibilities for sensory leakage that arose from faulty wiring. The fact that this was present once again means that these experiments cannot be taken as strong evidence for a paranormal process.

It is likely that at least some of the experiments performed using the Ganzfeld technique are not due to chance, even in the latest database. However, the reasons why these experiments have succeeded while the majority fail are not clear, and so the effect is not reliably replicable right now. I would focus on the significant experiments to see whether they contain certain flaws which can account for the results.
 
dharlow said:
I'm not sure why so much emphasis is being placed on Radin's meta-analysis... Storm/Ertel (2001) published a reply to the Milton/Wiseman meta-analysis in Psychological Bulletin in which they did an even more comprehensive meta-analysis than Radin (they found 11 unpublished studies from the period 1982-86 and also included experiments post-1997).

I've only been able to find the abstract.

But the autoganzfeld database is plagued with possibilities for sensory leakage that arose from faulty wiring. The fact that this was present once again means that these experiments cannot be taken as strong evidence for a paranormal process.
It seems like you may actually be thinking of the faulty solder joint which was discovered in a lab during the earlier experiments. Bem:

"This leakage is discussed in detail in the original Honorton et al. (1990) report in the Journal of Parapsychology. In the actual experiment, the white noise is very loud (about 68db, so that the receiver cannot hear his/her own voice). When the solder joint was discovered--before the studies had been concluded--the potential problem was assessed by turning off the white noise completely and seeing if any soundtrack could be heard. A number of people put on the headphones and were instructed to listen as carefully as possible for ANY noise. None of them could hear anything. Next an amplifier was placed between the soundtrack circuit and the headphones and turned to full gain; again the white noise was turned off completely. Now a very dim background noise could be heard. It was concluded that if nothing could be heard with the white noise turned off completely, it was unlikely that anything could have been detected in 68db of noise. A conclusion with which I concur.

But there is more. The leakage problem was fixed and the studies continued. The results were then analyzed to see if performance declined after the fix. Answer: No. In fact, performance improved. Moreover, performance throughout was uncorrelated with the noise level used."

If you were referring to something else, then please give more details and explain where you got the information.

amherst
 
amherst said:


If you were referring to something else, then please give more details and explain where you got the information.

amherst

I've seen what they wrote...however, I've talked with George Hansen, who was a part of the experimental team and often acted as sender in the trials. I talked with him some time ago about these experiments, so I do not want to distort his position...however, he led me to believe that this problem was more serious than the 1990 paper indicated. If you're interested in following up on this, you can contact him through his website...www.tricksterbook.com. In any event, I came away from the conversation with more doubt about the veracity of these experiments than before.
 
dharlow said:


I've seen what they wrote...however, I've talked with George Hansen, who was a part of the experimental team and often acted as sender in the trials. I talked with him some time ago about these experiments, so I do not want to distort his position...however, he led me to believe that this problem was more serious than the 1990 paper indicated. If you're interested in following up on this, you can contact him through his website...www.tricksterbook.com. In any event, I came away from the conversation with considerably more doubt about the veracity of these experiments than before.

You can't give me any details as to what he said which led you to believe the problem was more serious than the paper indicated?

amherst
 
amherst said:


You can't give me any details as to what he said which led you to believe the problem was more serious than the paper indicated?

amherst

No, because 1.) I don't have my notes on my conversation with him before me right now and 2.) I don't want to misquote anything he told me about a research project, out of respect for him and his colleagues. I will say that George has told me that he doesn't consider the Ganzfeld experiments to be the strong replicable evidence for psi that many parapsychologists do.

George is very accessible and I'm sure will answer any questions you have about these experiments.
 
amherst said:
Now that I think about it, maybe the 128-session Dalton study was actually published in early 1997, and Radin did include it in his analysis.

amherst

You know what's funny? (And this surprised me when I found it!) I was looking through my files on the subject and found a list of abstracts from the SPR conference for 2002. There Dalton's 1997 work is listed as an unpublished thesis. In 2002!

Anyway, I took some time to set out my argument in more detail, which I'll post in separate posts for clarity's sake.
 
Okay, just in case the misunderstanding is my fault, I’ve decided to go back to square one. I’ve looked over my files again, if nothing else to reassure myself I’m not going mad! I’m going to set out my arguments as clearly as possible. Although I get the feeling I’m pushing against an open door here, since I’m not at all convinced that Radin himself is making the claim that Amherst is making in his stead.

I’m typing this up in Word, with all my files to hand, so there’s no reliance on my memory, and I’m not typing frantically in an internet café before my credit runs out. Hopefully this will help.

First I’ll take the question of which experiments have been omitted from meta-analyses. I’m going to detail my arguments, which might get quite boring, so I’ve highlighted the missing experiments themselves in bold.

Radin used Honorton’s m-a. This was published in 1985 and covered 28 experiments from 1974-1981. Namely:

Ashton “A 4-subject study in the GF”
Braud “Free response GESP performance during an experimental hypnagogic state induced by visual and acoustic GF techniques: A replication and extension”
Child “Psi missing in free response settings”
Honorton “Length of isolation and degree of arousal as probable factors influencing information retrieval in the GF”
Honorton “Psi-mediated imagery and ideation in an experimental procedure for regulating perceptual input”
Palmer “The influence of psychological set on ESP and OBE”
Palmer “Scoring Patterns in an ESP GF experiment”
Palmer “An ESP GF experiment with transcendental meditators”
Raburn “Expectation and transmission factors in psychic functioning”
Raburn as above
Rogo “ESP in the GF: an exploration of parameters”
Rogo as above
Rogo “The use of short duration GF to facilitate psi-mediated imagery”
Sargent “Exploring psi in the ganzfeld”
Sargent as above
Sargent as above
Sargent as above
Sargent as above
Sargent “Response structure and temporal incline in GF free response GESP testing”
Sargent “GF psi optimization in relation to session duration”
Sargent “GF ESP Performance with variable duration testing”
Schmitt “Free Response ESP during GF stimulation: The possible influence of menstrual cycle phase”
Sondow “Effect of associations and feedback on psi in the GF: is there more than meets the judge’s eye?”
Sondow “Target qualities and affect measures in an exploratory psi gf”
Terry “Psi information retrieval in the GF: 2 confirmatory studies”
Terry as above
Wood “Free response GESP performance following GF stimulation...”
York “The DMT as indicator of psychic performance as measured by….”

From this era we know that there were 40 experiments, from which these 28 were picked because they used the same method of scoring, i.e. direct hitting. However, this seems to be the dark ages of the ganzfeld process, and no details of those experiments can be found on the internet. These are the only ones I have information for that are not listed:

Braud, Wood “The influence of immediate feedback on free-response GESP performance during ganzfeld stimulation”
Karnes, Ballou, Susman, Swaroff “Remote viewing: Failures to replicate with control comparisons.”
Karnes, Susman “Remote viewing: A response bias interpretation.”
Stanford “The influence of auditory ganzfeld characteristics upon free-response ESP performance”
Schlitz, Gruber “Transcontinental remote viewing.”
Blackmore “Extrasensory Perception as a cognitive process”



Then Bierman mentions that between ’81 and ’93 there’d been a drop in the cumulative effect sizes. This is the dataset that I believe is lacking from Radin’s figures, simply because he doesn’t mention it. I’ve listed it here without the Honorton data which, of course, was later included in the PRL figures.

Milton “A possible directive role of the agent in the gf”
Houtkooper “Why the GF is conducive to ESP: a study of OT and the percipient order effect”
Murrey “A GF psi experiment with control condition”
Haraldsson “Perceptual defensiveness, GF and the percipient order effect”
Sargent “Response structure and temporal incline in GF free response GESP testing”
Palmer “A GF experiment with subliminal sending”
Kanthamani “An experiment in GF & dreams: confirmatory study”
Milton “The effect of agent strategies on the percipients experience in the ganzfeld”
Stanford “Cognition and mood during GF: the effects of extraversion and noise vs silence”
Stanford “Psychological response to the GF-esp setting: the role of noise vs silence ...”
Dalen “A prototypical GF psi experiment with a control condition”
Stanford “Session based verbal predictors of free-response ESP performance in the GF”
Kanthamani “An experiment in gf and dreams with a clairvoyant technique”
Munson “FRNM Ganzfeld: an attempted replication”
Broughton “Assessing the PRL success model on an independent database”


This overlaps with the Milton/Wiseman (’91 to ’97) and Bem et al. (’91-’99) meta-analyses. The experiments in this time (’81-’99) that are not covered (not by Bierman, nor M/W, nor Bem et al.) are:

Stanford, Roig “Toward understanding the cognitive consequences of the auditory stimulation used for ganzfeld: Two studies.”
Sargent “A ganzfeld GESP experiment with visiting subjects”
Stanford, Angelini “Effects of noise and the trait of absorption on ganzfeld ESP performance.”
Hill “Applied psi: Remote internal viewing: Methodology and preliminary results”
Schlitz, Haight “Remote viewing revisited: An intrasubject replication”
Targ, Targ, Lichtarge “Realtime clairvoyance: A study of remote viewing without feedback”
Sudhakar et al “Belief and personality factors of participants: A study in an ESP/ganzfeld setting”
Hearne, “A forced-choice remote-viewing experiment”
Stanford, Frank, Kass, Skoll “Ganzfeld as an ESP-favorable setting: I”
Stanford, Frank, Kass, Skoll “Ganzfeld as an ESP-favorable setting: II”
Bierman “Process Oriented Ganzfeld Research, series 4b”


And these (except for Bierman’s) are all from the very beginning of the ’90s. Post-PRL, it seems that all ganzfeld work is pretty much accounted for in meta-analyses. Unless I’m missing something, which is always a possibility. So these are the experiments I suspect are missing from Radin’s m-a, since from what I can gather he’s relied on the Honorton m-a and the PRL work to represent the first twenty years of ganzfeld research. But I don’t think he was lying. He himself said he analysed those experiments available to him, and it could well be that many of these experiments simply weren’t available.
 
Now about those figures. Radin says he looked at data from 2,549 sessions, broken down like this:

Honorton’s m-a: 762 sessions
PRL work: 355 sessions

So these are already done and dusted. I’ve no real problem with this. But then he talks about post-PRL experiments (the “replication attempts”). For these he gives:

Edinburgh/Koestler 289
Amsterdam 164
Cornell 25
Rhine/ Durham 590
Gothenburg 90
Utrecht 232

These add up to 2,507 (which leaves me wondering where the other 42 came from, but that’s a minor detail).
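For anyone who wants to check my sums, here they are in Python (only the session counts listed above):

sessions = {
    "Honorton m-a": 762, "PRL": 355,
    "Edinburgh/Koestler": 289, "Amsterdam": 164, "Cornell": 25,
    "Rhine/Durham": 590, "Gothenburg": 90, "Utrecht": 232,
}
total = sum(sessions.values())
print(total)              # 2507
print(2549 - total)       # the missing 42
print(total - 762 - 355)  # 1390 post-PRL sessions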

So, according to Radin, the post-PRL work adds up to 1,390 sessions. In the Bem m-a the total number of sessions was 1,661. From this we can assume that the two cover largely the same ground. But amherst claims that Radin’s m-a has no data from ’97 (or, at least, very little). This would leave the ’91 to ’96 dataset standing at 957, which would appear to give Radin (or rather amherst) enough wriggle room to maintain that Radin’s m-a covers all ganzfeld experiments from year dot.

But this leaves further problems, since all of a sudden those figures for individual institutes don’t make sense. If those 90 sessions from Gothenburg don’t come from Parker’s work in 1997, where do they come from? The hit rate seems the same. Is there another 90 session ganzfeld experiment from Gothenburg with similar results that I’ve not heard of? If so: details, please!

The same thing occurs with Durham. Take out Broughton's work and there's a lot of sessions to make up.

Simply put, the figures don’t make sense. I’m reminded of the image of someone trying to cover a floor with a carpet that’s simply too small. When Amherst insists that such-and-such figures cover the post-PRL work, the carpet slips to reveal the floor in one corner. When he says they cover all ganzfeld work except ’97, the carpet slips again and a different corner is laid bare. There’s simply not enough material. The figures don’t cover the entire ganzfeld database.

And I don’t think Radin is making that claim. He talks about “available” experiments, and makes it clear he’s talking about a meta-analysis, the PRL work and post-PRL work. He makes no explicit mention of the experiments from Bierman’s paper, nor of those left off Honorton’s m-a, so there’s really no reason to believe he included them.
 
Ersby said:
Post-PRL, it seems that all ganzfeld work is pretty much accounted for in meta-analyses. Unless I’m missing something, which is always a possibility.

Forgetting for a moment the pre-PRL studies, let me ask you a simple question: Why do you think the PRL work hasn't been replicated? Based on the analyses from Radin, and Bem/Palmer/Broughton, it is clear that, like the PRL work (which you admit not having an explanation for), a significant number of the (standard) replications have been getting results which are anomalous. I'll go back to my original question, what more do you need? Even more studies?

amherst
 
amherst said:
You could then say he picked and chose which unpublished studies he used in his analysis, including successful ones and excluding those which were unsuccessful. When I asked you if this was the case in a previous post, you gave me a vague answer.

amherst

I saw this while reading offline. I apologise for not giving a proper reply earlier: I must have misunderstood the question. I can happily state now that I do not believe that Radin "cherry-picked" the best of the nearly-published '97 work for inclusion in his analysis.
 
amherst said:


Forgetting for a moment the pre-PRL studies, let me ask you a simple question: Why do you think the PRL work hasn't been replicated? Based on the analyses from Radin, and Bem/Palmer/Broughton, it is clear that, like the PRL work (which you admit not having an explanation for), a significant number of the (standard) replications have been getting results which are anomalous. I'll go back to my original question, what more do you need? Even more studies?

amherst

Alas, I'm back in the internet café, with only my dodgy memory and my steadily decreasing credit for company.

If we are to draw a line in the sand and say that the autoganzfeld work of PRL is the new year zero, then the data looks good, I admit. But not great. The strictest replication of the PRL work came out with chance results. Creativity is posited as a key to good results, but the most recent (Koestler) investigation into creativity came out with no psi effect. I understand "replication" to mean doing the same experiment and getting the same results. This appears not to be the case with psi. It may take a while for me to adjust to this new paradigm, or it may be that, if you include all the non-meta-analysed data, there is no effect to speak of.

But as for the point about not being able to explain a certain result: this is true, but only in the same way that I am not able to explain the behaviour of set 20 in the PRL studies. Both are events far beyond chance, yet only one demands an explanation. Why? The behaviour of set 20 was MUCH more unexpected than the results of the PRL experiments. Doesn't it demand an explanation too?

Lastly, I'm simply not convinced that standard replications get anomalous results. Since the meta-analysis in '99, there have been few good results to shout about. I think Parker's work (Gothenburg) is the most promising, since it is the most consistent, and I hope it'll produce something worthwhile. But I'm not holding my breath.
 
