Alright. Since you asked, I'll comment on your primer on scientific testing.
This is actually the second time I've listened to this episode. I admit that initially I had treated it as "wheatgrass part 2."
I particularly like how you explain the difference between a double-blind test and a triple-blind test, pointing out that you get the best results when the subjects are blind to placebo vs. the real stuff, the distributors are blind to which is which, and the statistician is blind to which was used by whom.
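Just to make the mechanics concrete for myself, here's a rough Python sketch of how opaque vial codes can keep all three parties blind until the analysis is locked (the function name, vial codes, and other details are purely my own illustration, nothing from the episode):

    import random

    def assign_blinded_codes(subject_ids, seed=None):
        # Balanced randomization: as close to half placebo / half real stuff as possible.
        rng = random.Random(seed)
        arms = (["placebo", "treatment"] * ((len(subject_ids) // 2) + 1))[:len(subject_ids)]
        rng.shuffle(arms)

        key = {}  # opaque vial code -> true arm; this is the sealed "key"
        for subject, arm in zip(subject_ids, arms):
            code = f"V{rng.randrange(10**6):06d}"  # label printed on the vial
            key[code] = arm
            # Subjects and distributors only ever see the code on the vial;
            # the statistician only sees codes next to the outcome numbers.
            print(f"{subject} receives vial {code}")
        return key  # held by an independent party until the analysis is locked

    sealed_key = assign_blinded_codes(["alice", "bob", "carol", "dave"], seed=42)

The point of the sketch is just that nobody who interacts with the subjects or the numbers ever sees anything but the codes; only whoever holds the sealed key can unblind the study, and only after the statistics are done.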
I think even people who are somewhat aware of what a double-blind test is are not prone to thinking about who does the number crunching. Before listening to this a second time, as per your request in your listener feedback episode, even I had fallen back into assuming that it must be the administrators of the experiment (the ones who distribute the placebo and the real test substance) who do the number crunching. It's easy to assume that this is how it's always done and that it's fine, because it's what every quack out there tells us they do, and that numbs us to the value of blind testing. We most often see a group of pseudoscientists doing the numbers for their own studies and skewing them, and it takes a debunking by a more jargon-fluent expert to explain to us what was done wrong.
It's easy to forget the idea of hiring an objective third-party statistician with no political or personal ties to either the subjects or the administrators, to ensure that the statistics are not skewed and to reveal any inappropriate biases inherent in the study. It's also easy to dismiss, because pseudoscientists often hire third-party statisticians who DO have a bias and an agenda to push (see Intelligent Design "research"), and non-scientists become jaded very quickly.
Also beyond most people's thinking is that there are two ways to set up a double-blind study. The first is one in which the subjects and the statistician are blind but the administrators are not. The other is one in which the subjects and the administrators are blind, but the statistician is not. Both are equally slimy when objectivity is not rigorously maintained by the non-blind party, and especially so when an irresponsibly done study is taken to the media before it reaches peer-reviewed scientific journals. I doubt even most journalists know the difference.
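The way I keep the variations straight is something like this little Python sketch (again, my own shorthand and made-up labels, nothing standard beyond the double/triple-blind terms themselves):

    # Who is blind in each design; True means that party cannot see the assignments.
    DESIGNS = {
        "double-blind (A)": {"subjects": True, "administrators": False, "statistician": True},
        "double-blind (B)": {"subjects": True, "administrators": True,  "statistician": False},
        "triple-blind":     {"subjects": True, "administrators": True,  "statistician": True},
    }

    def non_blind_parties(design):
        # The parties who can see the assignments, i.e. where objectivity
        # has to be taken on trust rather than enforced by the blinding.
        return [party for party, blind in DESIGNS[design].items() if not blind]

    for name in DESIGNS:
        print(name, "-> non-blind party:", non_blind_parties(name) or "none")

Either double-blind variant leaves exactly one party trusted rather than blinded, which is where the sliminess creeps in.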
All in all, for a short podcast, it was very good.