Has this topic been discussed on the board yet?
http://www.lfr.org/LFR/csl/media/videoclips/DiscoveryCA/discoveryCA.html
Which is also what happened to the guy in the video in the opening post.

The experimental result from the acoustic study gives the response rate prior to acoustic stimuli as P1 = 0.091 and prior to controls as P0 = 0.053, so a response was a rather rare event. That is, on average only one or two stimuli in each session might contain an anticipatory response.
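A quick back-of-envelope check of those rates. The two probabilities are from the paper; the session length of 20 stimulus trials is my assumption for illustration, not a figure from the study:

```python
# Rough expected count of anticipatory responses per session.
# P1 and P0 are the per-trial rates quoted above; the number of
# trials per session (20) is an assumed value, not from the paper.
p_stimulus = 0.091  # response rate prior to acoustic stimuli (P1)
p_control = 0.053   # response rate prior to controls (P0)
trials_per_session = 20  # assumption

expected_stim = p_stimulus * trials_per_session
expected_ctrl = p_control * trials_per_session
print(f"expected responses before stimuli:  {expected_stim:.2f}")
print(f"expected responses before controls: {expected_ctrl:.2f}")
```

With a session anywhere in that ballpark, you indeed expect only one or two responses per session, which is what makes single-session results so noisy.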
Read the paper carefully, specifically the bottom of page 116 and top of 117. Note that they start with 190 people, then "screen" them to select 100 people for the formal analysis of the data. I.e., as far as I can tell, they discard the data that doesn't fit.
Yawn.
This is a fascinating psi topic. The best paper I've read on this is:
http://www.scientificexploration.org/jse/articles/pdf/18.2_radin.pdf
It's Dean Radin's latest set of experiments investigating the presentiment effect - a pre-stimulus response of the autonomic nervous system to future emotional stimuli, where the occurrence of the future emotional stimulus cannot be predicted by the nervous system in any known way.
Can anyone find a problem with this study?
I don't understand his argument on pages 270-271.
I don't understand why he selects 13 people out of the total number, based on their strong performance, and analyses only their data. The idea he is testing is that perhaps the pattern of calm/exciting trials preceding the current trial influences the current excitement level. But then he examines only data from people who happened to do really well. By chance some people are going to do really well. That says nothing about whether the pool of people, as a whole, is influenced by previous trials.
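The chance expectation here is easy to quantify. A rough sketch, assuming each of 100 participants independently has a 5% chance of reaching "significance" by luck alone - the actual per-person criterion used in the paper may well differ, so this is only illustrative:

```python
# How many of N pure-chance participants would we expect to look
# "significant", and how surprising would 13 be? N = 100 matches
# the study; alpha = 0.05 is an assumed per-person threshold.
from math import comb

n, alpha = 100, 0.05
expected = n * alpha
print(f"expected significant by chance: {expected:.1f}")

# Binomial tail: probability that 13 or more of 100 clear the
# threshold by luck alone.
p_ge_13 = sum(comb(n, k) * alpha**k * (1 - alpha)**(n - k)
              for k in range(13, n + 1))
print(f"P(13 or more by chance) = {p_ge_13:.4f}")
```

So whether 13 "strong performers" is remarkable depends entirely on what threshold and selection rule were actually used - which is exactly why the post hoc selection matters.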
If I were designing the experiment, I would want a long interval between trials - say, at least 10 minutes. That would hopefully stop past trials from influencing the current trial.
It's odd that Radin should choose a measurement that previously had found no significant effect, and in fact showed a marginal effect in the opposite direction (with more change in pre-calm trials). His earlier paper is here:
http://www.boundaryinstitute.org/articles/presentiment99.pdf
Maybe he didn't have a choice with the equipment he had.
Plus, on page 266 he does a Stouffer Z calculation on the four experiments, which range in size from 240 to 2059 trials. That bothers me a little.
Here's another paper from Bierman that describes their findings as "marginally significant", but apart from that I can't make head nor tail of it. Anyone know about brain scans?
Why does it bother you?

The problem is that the Stouffer Z doesn't take into account the size of each experiment. If you had a number of small-scale experiments with high z-scores and one large-scale experiment with a small z-score, a Stouffer Z would treat them all equally. Of course, here there are only four experiments, they're not too dissimilar in size, and the second-largest experiment is the most successful, which is why it only bothers me a little.
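To make the objection concrete, here's a sketch contrasting an unweighted Stouffer Z with a sample-size-weighted one. The z-scores and sample sizes below are invented for illustration - three small studies with strong results and one large null study - not the numbers from Radin's paper:

```python
# Unweighted Stouffer Z: sum(z_i) / sqrt(k), blind to sample size.
# Weighted version: weights w_i = sqrt(n_i), so large studies count
# more. All z-scores and n's here are made up for illustration.
from math import sqrt

z = [2.5, 2.5, 2.5, 0.5]   # three small studies, one large
n = [50, 50, 50, 2000]

stouffer_unweighted = sum(z) / sqrt(len(z))

w = [sqrt(ni) for ni in n]
stouffer_weighted = (sum(wi * zi for wi, zi in zip(w, z))
                     / sqrt(sum(wi**2 for wi in w)))

print(f"unweighted Stouffer Z: {stouffer_unweighted:.2f}")
print(f"weighted Stouffer Z:   {stouffer_weighted:.2f}")
```

The unweighted combination looks highly significant while the weighted one doesn't - the large null study barely registers in the first calculation. With four similarly sized experiments the two versions stay close, which matches the "only bothers me a little" above.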
But you can't do that in science: gathering your data and then deciding how you are going to analyze it is called data mining. Suppose this hadn't worked out for him. Then he could have chosen a different way to analyze the data, and found a correlation.

I think he does this so that there are a sufficient number of sequential calm targets before an emotional one, in order to test the anticipatory-strategy hypothesis. He says that he forms an arbitrary division of targets into calm and emotional ones based on where the targets lie in the order of emotionality ratings. Based on this target selection he then finds that 13 individuals achieve significance. Why look for sequential effects in the other 90 when they don't even show a presentiment effect?
We are not talking about anticipatory strategies (if you mean that in a psi sense). If I jump out of a closet and scream at you, you'd be quite startled. If I did it an hour later, you'd be startled again. But if I did it 3 seconds later, you wouldn't be startled at all. It takes time for your system to settle down after excitement.

Stop them from influencing it in what way? This will not eliminate anticipatory strategies, and the hypothesis will certainly have to be tested even if you space out the trials this way.
But it's worse than that in this case. Here, he specifically chose 13 people where they show a lot of hits. Pretend for a second that this isn't real data from people doing the trial, but just simulated data created by a computer - so we know there is no effect to observe. Most of the random data is going to show no correlation, but a few data sets, maybe, oh, 13 or so, will show some strong hits.
So you go and analyze those, and guess what? They probably aren't going to show that anticipatory pattern over several calm trials, simply because the strong performance means that the data shows calm before calm targets and excitement before exciting targets. I.e., he preselected data that probably wasn't going to exhibit the pattern he was hoping not to find.
It also presupposes the conclusion - that psi is responsible for this.
That's my hypothesis. I'd actually like to see that experiment run. Generate random data for the human trials, and see if you can find similar correlations. If so, the experimental results are bunk.
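A minimal sketch of that simulation. All the parameters (100 participants, 40 trials each, top 13 selected) are assumptions chosen to echo the numbers in this thread, and "hit" is just a coin flip standing in for calm-before-calm / excitement-before-emotional agreement:

```python
# Generate pure-noise "participants", pick the best performers post
# hoc, and look at their apparent hit rate. The true hit rate is 0.5
# by construction, so anything above that in the selected group is
# pure selection bias.
import random

random.seed(1)
n_participants, n_trials, n_selected = 100, 40, 13

# Each trial is a fair coin flip: 1 = "hit" (response matched target).
hit_rates = [sum(random.random() < 0.5 for _ in range(n_trials)) / n_trials
             for _ in range(n_participants)]

top = sorted(hit_rates, reverse=True)[:n_selected]
print("true hit rate:              0.500")
print(f"mean over all participants: {sum(hit_rates) / len(hit_rates):.3f}")
print(f"mean over top {n_selected} selected:   {sum(top) / len(top):.3f}")
```

The whole pool hovers around chance, but the post hoc top 13 look impressively above it - exactly the pattern described above, with no effect present in the data at all.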
How does he do this? He performs post hoc analyses on a data subset that is likely to support the psi hypothesis. Bad science.
He did this, in a more blatant way, in an earlier paper...
...I can't find a link to that paper, but Paul started a thread on it a few years ago, and the paper was pretty much trashed. I've searched the archives and can't find it.
He asserts that but, to my understanding, does not prove it. Could you explain why this would be true?

He selected data that had the highest chance of showing this effect, if it were present!
Untrue. Uniform random data will show exactly that in some subsets. Other subsets will show nothing, and yet other subsets will show a negative correlation.

Back to our computer-generated data. Within our 13 significant samples, our data isn't going to show an anticipatory strategy, because we know one is not present - it's just random data.
I see what you are saying. My test rules out some kinds of anticipatory strategies, but not all of them. Agreed.

Returning to the other issue: if you increase the interval between trials as you suggested, there will still be the possibility that anticipatory strategies influence the results. The participant can still remember what the last trial was, regardless of how long it's been. The hypothesis will still have to be tested.