
Prestimulus Response Experiments?

I don't think so.

The paper is on this page (it's the first one):

http://www.lfr.org/LFR/csl/academic/library.html

Personally, I've never sat down and read the papers that cover this phenomenon, but it seems pretty interesting.

ETA: Bah, that's just an abstract and a bit of the results. I'll have a look to see if I can find the paper somewhere.
 
Okay, I found May's first paper on the subject.

www.parapsych.org/papers/10.pdf

It's pretty dense in technical jargon, so I'd need to sit down and read it properly. This sentence caught my eye:
The experimental result from the acoustic study gives the rate prior to acoustic stimuli as P1 = 0.091 and prior to controls as P0 = 0.053; therefore a response was a rather rare event. That is, on the average only one or two stimuli in each session might contain an anticipatory response.
Which is also what happened to the guy in the video in the opening post.
 
edited:

I originally wrote that they were discarding data. Now, I'm not so sure.

I originally wrote:
Read the paper carefully, specifically the bottom of page 116 and top of 117. Note that they start with 190 people, then "screen" them to select 100 people for the formal analysis of the data. I.e., as far as I can tell, they discard the data that doesn't fit.

Yawn.

But on page 117 they do use data from all 190 people. I guess the original paper discarded data, but this one is using data from all 190 to do the DAT analysis? I can't tell.


Are they using the subset of 100 people to conclude that "something" is happening, and then using the data from the full 190 people to analyze the DAT hypothesis?
 
This is a fascinating psi topic. The best paper I've read on this is :

http://www.scientificexploration.org/jse/articles/pdf/18.2_radin.pdf

It's Dean Radin's latest set of experiments investigating the presentiment effect - a pre-stimulus response of the autonomic nervous system to future emotional stimuli, where the occurrence of the future emotional stimulus cannot be predicted by the nervous system in any known way.

Can anyone find a problem with this study?
 
This is a fascinating psi topic. The best paper I've read on this is :

http://www.scientificexploration.org/jse/articles/pdf/18.2_radin.pdf

It's Dean Radin's latest set of experiments investigating the presentiment effect - a pre-stimulus response of the autonomic nervous system to future emotional stimuli, where the occurrence of the future emotional stimulus cannot be predicted by the nervous system in any known way.

Can anyone find a problem with this study?
I don't understand his argument on pages 270-271.

I don't understand why he selects 13 people out of the total number, based on their strong performance, and analyses only their data. The idea that he is testing is that perhaps the pattern of calm/exciting trials prior to the current trial influences the current excitement level. But then he examines only data from people who happened to do really well. By chance some people are going to do really well. That says nothing about whether the pool of people, as a whole, is influenced by previous trials.

Also, the independent reviews are meaningless, since once again Radin hides his data, and refuses to share either the reviewers' names or the reports that they wrote.

If I were to design the experiment, I would want a long time period between each trial; say, at least 10 minutes. This would hopefully stop past trials from influencing the current trial.
 
This is a fascinating psi topic. The best paper I've read on this is :

http://www.scientificexploration.org/jse/articles/pdf/18.2_radin.pdf

It's Dean Radin's latest set of experiments investigating the presentiment effect - a pre-stimulus response of the autonomic nervous system to future emotional stimuli, where the occurrence of the future emotional stimulus cannot be predicted by the nervous system in any known way.

Can anyone find a problem with this study?

Don't have time to read the study just now (at work), but I find a problem with the phrase "where the occurrence of the future emotional stimulus cannot be predicted by the nervous system in any known way."
There are only so many types of emotional stimulus. Unless there is one I have not, ever, experienced before, I suspect I could anticipate any of them. As with any psycho/neurological experiment, the observer effect is complicated by the fact that the subject is also an observer.
 
It's odd that Radin should choose a measurement that had previously shown no significant effect, and in fact showed a marginal effect in the opposite direction (with more change in pre-calm trials). His earlier paper is here:

http://www.boundaryinstitute.org/articles/presentiment99.pdf

Maybe he didn't have a choice with the equipment he had.

Plus, on page 266 he does a Stouffer Z calculation on the four experiments, which range in size from 240 to 2059 trials. That bothers me a little.

Here's another paper, from Bierman, that describes its findings as "marginally significant", but apart from that I can't make head nor tail of it. Anyone know about brain scans?

http://www.quantumconsciousness.org/pdfs/presentiment.pdf
 
Don't know a lot about brain scans, but I used to date a college professor whose livelihood was doing and analysing fMRI studies. Doing studies in the time domain is problematic for reasons that I no longer recall. But basically the BOLD technique relies on analysing blood oxygen levels in the brain, and this response is very slow (on the order of several seconds) compared to neuronal activity. All in all, the technique is highly debated, and she often found herself defending her science due to these concerns.

But the kicker is on page 8, paragraph 4.1. Isn't he basically saying that he didn't get a statistical correlation, but that if we imagine performing a data filtering that he didn't actually perform, the data would look better, and thus we can pretend (imagine) that the data supports the hypothesis?

That's a neat trick. My data doesn't support my hypothesis, but let's imagine that it would, if I were to perform an analysis that I can't be bothered to do.
 
Hmm. Selecting the ones that "responded"...

Let's see, if we have random results, and we select the ones who guess right...

I'm not sure that's what happened here, but somehow selecting data after the experiment doesn't sound good.
 
"about once every 45 seconds, varying by about 20 seconds or so" (from memory, I won't guarantee absolute word-for-word), sessions lasting half an hour.

His subjective report is that he could not tell when the sound would be coming; this is not surprising. There are some fairly predictable conditioning effects that do not reach "conscious awareness". (For example, in a list of words, one word is chosen for pairing with electric shock; subjects typically will learn the association enough to have a sympathetic nervous system response, but only about 50% will be able to name the target word.)

Two possibilities--if they are cherry-picking, like they are in the video, they can choose hits to keep and misses to toss. I can't believe that they would be that careless, although it would not be the first time such a thing has happened.

The other possibility is more intriguing, and has the advantage of being quite testable. Simple temporal conditioning would be enough to show an effect if A) there were sufficient trials to sum to see an effect, and B) the randomization was inadequate. Any random number generator will generate sequences that vary in their predictability in the short term. (Long vs. short term control sequences for random number generators are part of what messed with J. B. Rhine's results. One must compare strings of equal length--simple probability statistics!)

If only some of the subjects have been able to demonstrate the precognitive effects, perhaps the subjects' random sequences were different. A search for short-term periodicity in the random sequences could show sufficient pattern to have given some subjects enough information for temporal conditioning (it has been X long since the last startle stimulus, it must be coming again) even if it was not something they were consciously aware of. (A rough sketch of such a check is at the end of this post.)

The experts in Radin's experiment did not include behaviorists (who might have been more apt to look for this artifact), and his discussion of "anticipatory strategies" did not sufficiently explore this possibility (IMO), although it was the closest of the potential artifacts. I remember Radin's experiment; a fellow behaviorist who read it when it was first out also flagged temporal conditioning as a potential artifact.

Fortunately, it is not only testable, but easily guarded against in future studies.

And I wonder whether the 3.5 second precognitive spike is an artifact of the randomization schedule. This also could be tested; the VT (variable time) schedule could be lengthened or shortened from its 45 seconds, and/or the variance could be altered from its +/- 20 seconds. If such changes altered the 3.5 second spike (or eliminated it), it would seem more clearly a schedule-induced artifact.
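
For anyone who wants to try this on actual session logs, here is a rough sketch of the kind of periodicity check I have in mind. It's Python, the schedules below are invented for illustration, and the function name is my own; treat it as a sketch, not a prescription.

Code:
# Check whether successive inter-stimulus intervals are correlated.
# A well-randomized VT schedule should give autocorrelations near zero;
# consistently nonzero values mean the time since the last stimulus
# carries information about when the next one is due, and that is all
# temporal conditioning needs.
import numpy as np

def lagged_autocorrelation(intervals, max_lag=5):
    x = np.asarray(intervals, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return [np.dot(x[:-lag], x[lag:]) / denom for lag in range(1, max_lag + 1)]

rng = np.random.default_rng(0)

# A properly randomized VT 45 +/- 20 s schedule over a ~30-minute session:
good_schedule = rng.uniform(25, 65, size=40)
print(lagged_autocorrelation(good_schedule))   # all near zero

# A weakly randomized schedule that alternates short and long intervals:
bad_schedule = np.tile([30.0, 60.0], 20) + rng.normal(0, 2, size=40)
print(lagged_autocorrelation(bad_schedule))    # strong negative value at lag 1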
 
I don't understand his argument on pages 270-271.

I don't understand why he selects 13 people out of the total number, based on their strong performance, and analyses only their data. The idea that he is testing is that perhaps the pattern of calm/exciting trials prior to the current trial influences the current excitement level. But then he examines only data from people who happened to do really well. By chance some people are going to do really well. That says nothing about whether the pool of people, as a whole, is influenced by previous trials.

I think he does this so that there are a sufficient number of sequential calm targets before an emotional one, in order to test the anticipatory strategy hypothesis. He says that he forms an arbitrary division of targets into calm and emotional ones based on where the targets lie in the order of emotionality ratings. Based on this target selection he then finds out that 13 individuals achieve significance. Why look for sequential effects in the other 90 when they don't even show a presentiment effect?

If I were to design the experiment, I would want a long time period between each trial; say, at least 10 minutes. This would hopefully stop past trials from influencing the current trial.

Stop them from influencing in what way? This will not eliminate anticipatory strategies and the hypothesis will certainly have to be tested even if you space out the trials in this way.
 
It's odd that Radin should choose a measurement that had previously shown no significant effect, and in fact showed a marginal effect in the opposite direction (with more change in pre-calm trials). His earlier paper is here:

http://www.boundaryinstitute.org/articles/presentiment99.pdf

Maybe he didn't have a choice with the equipment he had.

Maybe. He also may simply be trying to assess whether SCR is a good measure of this effect or not, since experiments earlier than the one you linked gave positive results, as did experiments by other researchers using SCR. I think the experiment that did not give a significant effect using SCR is the odd one out, so to speak.

Plus, on page 266 he does a Stouffer Z calculation on the four experiments, which range in size from 240 to 2059 trials. That bothers me a little.

Why does it bother you?

Here's another paper, from Bierman, that describes its findings as "marginally significant", but apart from that I can't make head nor tail of it. Anyone know about brain scans?



The BOLD technique is very prone to artifacts if you don't know what you're doing. The effect would have to be quite large to rule that out, I think. Interesting that the same effect seems to show up, though. Ideally, an MEG experiment should be done. Massively better in terms of temporal and spatial resolution. However, I think there are only a handful of MEG machines in the world. Not promising...
 
Why does it bother you?
The problem is that the Stouffer Z doesn't take into account the size of the experiment. If you had a number of small-scale experiments with high z-scores and one large-scale experiment with a small z-score, a Stouffer Z would treat them all equally. Of course, here there are only four experiments, they're not too dissimilar in size, and the second largest experiment is the most successful, which is why it only bothers me a little.
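
To make that concrete with invented numbers (only the trial counts are roughly in the range of the four experiments above), here's a quick Python sketch comparing the plain Stouffer Z with a version weighted by sample size:

Code:
import math

def stouffer(zs):
    # Plain Stouffer Z: every experiment counts equally, whatever its size.
    return sum(zs) / math.sqrt(len(zs))

def weighted_stouffer(zs, ns):
    # Weight each experiment's z-score by the square root of its trial count.
    w = [math.sqrt(n) for n in ns]
    return sum(wi * zi for wi, zi in zip(w, zs)) / math.sqrt(sum(wi ** 2 for wi in w))

zs = [2.5, 2.5, 2.5, -0.5]      # three small hits, one large near-null (invented)
ns = [240, 300, 400, 2059]      # trial counts, roughly the quoted range

print(stouffer(zs))               # ~3.5: the big null study counts no more than the small ones
print(weighted_stouffer(zs, ns))  # ~2.0: weighting by size pulls the combined Z down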
 
I think he does this so that there are a sufficient number of sequential calm targets before an emotional one, in order to test the anticipatory strategy hypothesis. He says that he forms an arbitrary division of targets into calm and emotional ones based on where the targets lie in the order of emotionality ratings. Based on this target selection he then finds out that 13 individuals achieve significance. Why look for sequential effects in the other 90 when they don't even show a presentiment effect?
But you can't do that in science. You can't gather your data and then decide how you are going to analyze it; that's called data mining. Suppose this hadn't worked out for him. Then he could have chosen a different way to analyze the data, and found a correlation.

But it's worse than that in this case. Here, he specifically chose the 13 people who showed a lot of hits. Pretend for a second that this isn't real data from people doing the trial, but just simulated data created by a computer - so we know there is no effect to observe. Most of the random data is going to show no correlation, but a few data sets, maybe, oh, 13 or so, will show some strong hits. So you go and analyze those, and guess what? They probably aren't going to show that anticipatory build-up over several calm trials, merely because the strong performance means that the data shows calm before calm targets, and excitement before exciting targets. I.e., he preselected data that probably wasn't going to exhibit the patterns he was hoping not to find.

It also presupposes the conclusion - that psi is responsible for this.

That's my hypothesis. I'd actually like to see that experiment run. Generate random data for the human trials, and see if you can find similar correlations. If so, the experimental results are bunk.



Stop them from influencing in what way? This will not eliminate anticipatory strategies and the hypothesis will certainly have to be tested even if you space out the trials in this way.
We are not talking about anticipatory strategies (if you mean that in a psi sense). If I jump out of a closet and scream at you, you'd be quite startled. If I did it an hour later you'd be startled again. But if I did it 3 seconds later you wouldn't be startled at all. It takes time for your system to settle down after excitement.

Now, assume for a second there is no psi effect. None. Take an experiment where there are twice as many calm trials as excited trials, as Radin has done. This means that exciting trials are relatively few and far between. It is likely that an exciting trial will be followed by a calm trial or two because of that uneven distribution. So, you are presented an exciting picture, you show a response, and slowly start calming down. You are presented a calm picture and you remain calm. But then you start getting excited, because consciously or unconsciously you know something exciting is coming up. Sooner or later that exciting picture is shown, and lo and behold, prior to that picture you were showing excitement.
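
Here is a toy simulation of that mechanism, assuming (my assumption, purely for illustration) that each session draws its targets without replacement from a fixed pool of two-thirds calm and one-third exciting pictures, and that "arousal" simply ramps up with each consecutive calm trial:

Code:
import random

def run_session(n_calm=20, n_exciting=10):
    # Draw the session's targets without replacement from a fixed pool.
    pool = ["C"] * n_calm + ["E"] * n_exciting
    random.shuffle(pool)
    pre_arousal = {"C": [], "E": []}
    run = 0                                # consecutive calm trials so far
    for target in pool:
        pre_arousal[target].append(run)    # "arousal" just before this trial
        run = 0 if target == "E" else run + 1
    return pre_arousal

random.seed(1)
totals = {"C": [], "E": []}
for _ in range(20000):
    session = run_session()
    totals["C"] += session["C"]
    totals["E"] += session["E"]

for kind in ("C", "E"):
    print(kind, sum(totals[kind]) / len(totals[kind]))
# Mean pre-trial "arousal" comes out higher before the exciting pictures,
# with no psi anywhere in the model.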

Hey, they admit that this happens. They've done tests and it does happen that way. So the burden on the experimenters is to show that the results are due to psi, and not this well-known, verified behavior.

How does he do this? He performs post hoc analyses on a data subset that is likely to support the psi hypothesis. Bad science.

He did this, in a more blatant way, in an earlier paper. He gathered his data, found no effect, but found that some people performed negatively - i.e., worse than chance. What does he do? He presupposes his conclusion, decides that these people are exhibiting negative psi, and discards their data from the data sets. Now his statistical analysis shows a positive correlation, and he decides he has proven his hypothesis.

The problem with that is obvious. In uniformly distributed random data, some will show a negative correlation, some positive, some neutral. If you discard the negative, the rest of the data will show a slight positive correlation.
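
If you want to see how strong that selection effect is, here's a quick sketch: pure coin-flip data with no effect built in, and the kept subjects still average positive.

Code:
import random
import statistics

random.seed(2)

def subject_score(n_trials=100):
    # Each trial is a fair coin flip against chance; the score is the
    # subject's mean deviation from zero.
    return statistics.mean(random.choice([-1, 1]) for _ in range(n_trials))

scores = [subject_score() for _ in range(200)]
kept = [s for s in scores if s >= 0]   # "discard the negative-psi people"

print(statistics.mean(scores))   # close to 0: no effect, as built in
print(statistics.mean(kept))     # clearly positive: a pure selection artifact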

I can't find a link to that paper, but Paul started a thread on it a few years ago, and the paper was pretty much trashed. I've searched the archives and can't find it.
 
But it's worse than that in this case. Here, he specifically chose the 13 people who showed a lot of hits. Pretend for a second that this isn't real data from people doing the trial, but just simulated data created by a computer - so we know there is no effect to observe. Most of the random data is going to show no correlation, but a few data sets, maybe, oh, 13 or so, will show some strong hits.

So we have computer-generated data simulating SCR, drawn from a random distribution of SCR values around a baseline. Is that what you're essentially suggesting here? If so, yes, a small sample of trials will by chance show a significant difference between calm and emotional trials. With you so far.
So you go and analyze those, and guess what? They probably aren't going to show that anticipatory build-up over several calm trials, merely because the strong performance means that the data shows calm before calm targets, and excitement before exciting targets. I.e., he preselected data that probably wasn't going to exhibit the patterns he was hoping not to find.

He selected data that had the highest chance of showing this effect, if it were present! Back to our computer-generated data. Within our 13 significant samples, our data isn't going to show an anticipatory strategy because we know one is not present - it's just random data. However, someone who did not know that the whole dataset was generated from a random distribution would also find that our small sample does not show an anticipatory strategy. And that's all they could conclude. That's what is being tested, after all - whether or not an anticipatory strategy can account for our selected data.

Now, considering that we need a sufficient run of calm trials before an emotional one in order to test the anticipatory hypothesis, and considering that the best case for such a strategy being present is within those trials that show a positive result, I see no problem with Radin's approach. If there is no anticipatory strategy observed in those 13 high scorers, then what chance is there of finding one in the rest of the data! Indeed, if you're sceptical, it seems a bit pointless trying to find anticipatory strategies in data that does not show a significant effect! What if he found such strategies in the 90 other non-significant individuals? Wouldn't that actually increase his confidence that such strategies are not responsible for the overall results, because it would demonstrate that such strategies are not sufficient to get positive results?

It also presupposes the conclusion - that psi is responsible for this.

Maybe. But it seems not to be anticipatory strategies!

That's my hypothesis. I'd actually like to see that experiment run. Generate random data for the human trials, and see if you can find similar correlations. If so, the experimental results are bunk.

No, not at all. You would also have to produce an overall significant result from such random data. If you plan just to run your random-data program and show that there is a significantly positive subset that shows no anticipatory strategy, then that's all you can conclude. That's all that Radin is trying to show too - that anticipatory strategies are not present in his best data. Taken in isolation his selected subset could still be just a fluke (granted, he doesn't acknowledge this), but when it is put back in the overall data, the results remain significant.



How does he do this? He performs post hoc analyses on a data subset that is likely to support the psi hypothesis. Bad science.

What? He performs the test on the data that is most likely to show the anticipatory effect you talked about! The test is designed to determine whether such a strategy is responsible for the positive results.

Returning to the other issue, if you increase the interval between trials as you suggested, there will still be the possibility that anticipatory strategies can influence the results. The participant can still remember what the last trial was, regardless of how long it's been. The hypothesis will still have to be tested.

He did this, in a more blantent way, in an earlier paper...

...I can't find a link to that paper, but Paul started a thread on it a few years ago, and the paper was pretty much trashed. I've searched the archives and can't find it.

That's a pity, because I'd like to check out this claim for myself. Paul, if you're reading, can you help?
 
He selected data that had the highest chance of showing this effect, if it were present!
He asserts that, but to my understanding does not demonstrate it. Could you explain why this would be true?

I offered a counterexample that suggests that it would be less likely to show this effect.

Back to our computer-generated data. Within our 13 significant samples, our data isn't going to show an anticipatory strategy because we know one is not present - it's just random data.
Untrue. Uniform random data will show exactly that in some subsets. Other subsets will show nothing, and yet other subsets will show a negative correlation.

Returning to the other issue, if you increase the interval between trials as you suggested, there will still be the possibility that anticipatory strategies can influence the results. The participant can still remember what the last trial was, regardless of how long it's been. The hypothesis will still have to be tested.
I see what you are saying. My test rules out some kinds of anticipatory strategies, but not all of them. Agreed.


ETA:

Assume 4 trials, with random data - 4 so I can write out all the possibilities; obviously a real experiment would require more trials. I simplify by rating each trial as calm or excited, rather than the continuum that is actually measured.

With random data we generate (C = calm, E = excited):

CCCC
CCCE
CCEC
CCEE
...
EEEE

When matched against a specific picture series, some of those show no correlation, some positive correlation, and some negative. I'm sure we agree on that point. But, as far as the non-psi anticipatory strategy goes, some of those will show the anticipatory effect, and some won't. However, the better the match, the less of that effect it will show, I argue.

For example, suppose the picture series was CCCE. Take the data set CCCE. That will test low for anticipatory practices, because there is no build-up from calm to excited, yet it is the highest performance possible for that picture series. Radin, by selecting the 13 highest performers, is quite probably selecting against the anticipatory strategy, not for it. The point being that the anticipatory strategy will only yield a modest positive correlation; random data can (and will) yield higher correlations by chance, given enough data sets.
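
Here's a brute-force check of that claim - my own toy formalization, not anything from the paper. For the picture series CCCE it enumerates every possible 4-trial response pattern, scores the match, and flags a crude anticipation pattern (excited on the calm trial just before the E):

Code:
from itertools import product

pictures = "CCCE"
rows = []
for resp in product("CE", repeat=4):
    matches = sum(r == p for r, p in zip(resp, pictures))
    anticipates = resp[2] == "E"       # excited just before the E picture
    rows.append((matches, anticipates))

for flag in (True, False):
    ms = [m for m, a in rows if a == flag]
    label = "anticipating" if flag else "not anticipating"
    print(label, sum(ms) / len(ms))
# Anticipating patterns average 1.5 matches; non-anticipating average 2.5,
# and a perfect match (4/4) is only possible without the anticipation flag.
# Selecting the top scorers thus selects against the anticipatory pattern.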
 
