Dancing David said:
Amhearst, we have all discussed with you why we don't find the ganzfeld studies to be proof of anything.
Neither does anyone else, but as evidence, some do.
You are completely clueless Ersby. Why on earth are you talking about 4 as if the authors had claimed it was the average rating? 4 is the MIDPOINT between 1 and 7, MIDPOINT! The only average rating which was mentioned was 5.33.
Ersby said:
Let me try again. Imagine you have seven communities and you want to know the average population. The usual thing to do is to add up all the populations and divide by seven. But if you numbered the communities 1 to 7 according to size, would it be correct to take the population of community 4 and say that is the average population?
You see what I'm getting at? The numbers for standardness are just place-holders in a sequence, like the numbers in the top ten of the charts. They don't mean anything. They have no value of their own. So saying 4 is the average may look sensible, but it doesn't mean anything.
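To make the place-holder point concrete, here is a minimal sketch with made-up population figures (none of these numbers come from the thread or the paper): the community ranked 4 of 7 is just the median entry, and its population need not be anywhere near the actual mean.

```python
# Made-up populations for the seven-communities analogy; purely illustrative.
populations = [500, 800, 1200, 2000, 9000, 15000, 40000]

# "Community 4" in a 1-7 size ranking is just the middle entry of the
# sorted list, i.e. the median, not the mean.
rank_4_population = sorted(populations)[3]

# The actual average: add up all the populations and divide by seven.
true_mean = sum(populations) / len(populations)

print(rank_4_population)  # 2000
print(true_mean)          # 9785.71...
```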
They wouldn't have known 5.33 would be the mean, no. But they could have decided beforehand how to judge where the average fell. It's interesting to note that the only time "hypothesized" appears is in the paragraph discussing 5.33.
Amherst said:
Ersby, I really do want to commend you for actually reading and trying to understand articles which supply evidence that contradicts your belief system. It is much more than most so-called skeptics do. But when you misunderstand the material this severely, and it pains me to say this, I think maybe it would have been best if you had never even tried in the first place.
My point is that neither is better than the other. In fact, you could just as justifiably take the top half of the standardness scale and take the average from those. There are any number of "averages" to use. I happen to think the one they've focused on is (a) meaningless and (b) just so happens to give the best results!
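As a footnote to the "any number of averages" point, here is a small sketch with invented standardness ratings (not the paper's data) showing three defensible "averages" that need not agree: the overall mean, the mean of the top half of the ratings, and the scale midpoint.

```python
# Invented standardness ratings on the paper's 1-7 scale; not the actual data.
ratings = [3, 4, 5, 5, 6, 6, 7, 7]

overall_mean = sum(ratings) / len(ratings)      # 5.375
top_half = sorted(ratings)[len(ratings) // 2:]  # one reading of "the top half"
top_half_mean = sum(top_half) / len(top_half)   # 6.5
scale_midpoint = (1 + 7) / 2                    # 4.0

print(overall_mean, top_half_mean, scale_midpoint)
```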
I don't know why it astounds you that the paper doesn't list the specific nonstandard aspects. As you know, the students were instructed to blindly rate each experiment according to how well its procedure mirrored the standard PRL one. Now unless you're suggesting that the three different raters, all blind to the experimental outcomes, without interacting with each other, and basing their decisions solely on the information they were given by the authors (references to which are given in the article), all somehow falsely rated the standardness of the experiments, you really don't need to know the specifics. All you need to know is that experiments rated as nonstandard had procedures which went against the information the raters used (which again, you can look at) detailing standardness.
Paul C. Anagnostopoulos said:
But what astounds me is that the paper stops right there. Where is the list of nonstandard protocol aspects? Where is the analysis of the ten most common nonstandard aspects, to see which ones contribute to the failure of the studies? Why did they ignore the standardness of the statistical analysis and the artifact-producing aspects of the protocol? It's enough to make me crazy.
I have emailed Bem to see if he will give me the raters' scoring sheets.
~~ Paul
T'ai Chi said:
Neither does anyone else, but as evidence, some do.
If your previous rantings and posturings had not revealed you as a poseur, this statement surely would.
posted by Amhearst
I don't know why it astounds you that the paper doesn't list the specific nonstandard aspects.
I don't need to know them if all I care about is the simple conclusion of the paper. But since the specifics were recorded, it seems almost silly not to go into them in some detail, no?
Amherst said:
I don't know why it astounds you that the paper doesn't list the specific nonstandard aspects. As you know, the students were instructed to blindly rate each experiment according to how well its procedure mirrored the standard PRL one. Now unless you're suggesting that the three different raters, all blind to the experimental outcomes, without interacting with each other, and basing their decisions solely on the information they were given by the authors (references to which are given in the article), all somehow falsely rated the standardness of the experiments, you really don't need to know the specifics.
Aren't you interested to take this further, in order to find out which specific alterations of the protocol are the ones that ruin the results? That has the potential to uncover what is really going on in these experiments.
All you need to know is that experiments rated as nonstandard had procedures which went against the information the raters used (which again, you can look at) detailing standardness.
But the average is most likely meaningless. Why did the authors even mention the average standardness?
"Decided beforehand how to judge where the average fell"? What decision would need to take place? You find out what the average is by dividing the combined ratings by the number of experiments.
Dancing David said:
Evidence of how much flawed methodology can change results?
amherst said:
You are completely clueless Ersby. Why on earth are you talking about 4 as if the authors had claimed it was the average rating? 4 is the MIDPOINT between 1 and 7, MIDPOINT! The only average rating which was mentioned was 5.33.
Ersby, I really do want to commend you for actually reading and trying to understand articles which supply evidence that contradicts your belief system. It is much more than most so-called skeptics do. But when you misunderstand the material this severely, and it pains me to say this, I think maybe it would have been best if you had never even tried in the first place.
Paul C. Anagnostopoulos said:
Otherwise, they only talk about the midpoint of the scale and define standard and nonstandard accordingly. I think that's okay, since it's just an arbitrary definition. It would be interesting to place the boundary at other points and see what happens.
~~ Paul
Now that's interesting. If I raise it to 5, does that push enough standard studies into the nonstandard category to even out the two categories? Likewise, if I lower it to 3, does that drag enough nonstandard studies into the standard category to even them out?
Ersby said:
As far as I can tell, put it anywhere else and the effect diminishes.
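For anyone who wants to try the boundary experiment Paul describes, here is a rough sketch with invented (standardness, hit rate) pairs rather than the real ganzfeld database; it only shows the mechanics of moving the standard/nonstandard cutoff and re-averaging each side.

```python
# Invented (standardness rating, hit rate) pairs; purely illustrative.
studies = [(2, 0.22), (3, 0.24), (4, 0.30), (5, 0.33),
           (6, 0.35), (6, 0.38), (7, 0.36)]

def mean(xs):
    return sum(xs) / len(xs) if xs else float("nan")

# Slide the standard/nonstandard boundary and compare mean hit rates.
for cutoff in (3, 4, 5):
    standard = [hit for rating, hit in studies if rating >= cutoff]
    nonstandard = [hit for rating, hit in studies if rating < cutoff]
    print(f"cutoff {cutoff}: standard {mean(standard):.3f}, "
          f"nonstandard {mean(nonstandard):.3f}")
```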
Ersby said:
Paul got me thinking about the rating procedure and the advice given to people judging the standardness of the database.
And since the result of this process doesn't produce a simple graph that directly maps 'high hit rates' to 'high standardness' and 'low hit rates' to 'low standardness', then the decisions on what constitutes 'standardness' tell us what?
Amherst said:
I don't know why it astounds you that the paper doesn't list the specific nonstandard aspects. ... you really don't need to know the specifics. All you need to know is that experiments rated as nonstandard had procedures which went against the information the raters used (which again, you can look at) detailing standardness.