'Expensive' placebos work better than 'cheap' ones, study finds

Gord_in_Toronto
It's common knowledge that brighter-colored and more invasive placebos work better. Acupuncture works better than a pill, and massage works better than a pill too. Dummy surgery is probably the most potent so far; it has actually been studied on knee ligaments.

I think I'll volunteer for the Sex Therapy Study - orgasm must work better than all of them put together.
 
Old marketing trick which is still used and danced around all the time in retail. People perceive more $ = better, so sometimes charging more for the same product at the same quality can get more sales (or at least more revenue, given the higher price).
 
:) I have experienced this in one of my hobbies. I doubled the price of one of my products, and demand almost doubled along with it over the couple of months that followed.

Several years ago, before I moved to project management, I was a finance guy and worked in software pricing. This was the sort of thing we grappled with all the time - perceptions created by price.
 
Yep. Gauging (or is it gouging) people's stupidity can be surprisingly tricky. :)
 

My uncle tried to sell his boat for $1,200. Got a lot of calls, but no buyer. Hell of a deal.
So he jacked the price up to $2,500. It sold the day the ad came out.
Costs more, must be better.
 

Tell most people that a cheap bottle of wine really cost $100, and they'll rave about how good it is.

Steve S
 
lol - don't even get me started on the pretentious, moronic wine snobs (who seem to be breeding like cockroaches)! But yes, booze in general is an area where this kind of thing is rampant as well (mostly wine and hard liquor).
 
I suppose you could try reading the newspaper article or the abstract of the paper from the journal in which the study is reported: http://www.neurology.org/content/early/2015/01/28/WNL.0000000000001282.short
A mere 12 subjects, all given both placebos, and no control group?

Results: Although both placebos improved motor function, benefit was greater when patients were randomized first to expensive placebo
Yeah right. Let's look at the actual figures:
Code:
                                                       Mean difference (95% CI), p-value
“Cheap” versus “expensive” placebo in first period     0.10 (-0.10, 0.31), 0.28
“Cheap” versus “expensive” placebo in second period    0.03 (-0.31, 0.36), 0.88
Hardly convincing...
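
For anyone who wants to check the arithmetic, the reported p-values can be roughly back-computed from the confidence intervals alone. A minimal Python sketch, assuming two-sided 95% t-intervals with 11 degrees of freedom (12 subjects, crossover) - the abstract doesn't spell out the method, so treat this as a sanity check rather than a reanalysis:
Code:
from scipy import stats

def p_from_ci(mean_diff, lo, hi, df=11):
    # Recover the standard error from a two-sided 95% t-interval,
    # then convert the mean difference into a t statistic and p-value.
    t_crit = stats.t.ppf(0.975, df)         # ~2.201 for df = 11
    se = (hi - lo) / (2 * t_crit)           # interval width / (2 * critical value)
    t_stat = mean_diff / se
    return 2 * stats.t.sf(abs(t_stat), df)

print(p_from_ci(0.10, -0.10, 0.31))  # ~0.31, close to the reported 0.28
print(p_from_ci(0.03, -0.31, 0.36))  # ~0.85, close to the reported 0.88
Either way, both intervals straddle zero - that's the giveaway before any arithmetic.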
 

But it does "work" within the parameters of the study. Like you, I am always suspicious of studies with very small sample sizes. I presented this as interesting rather than earth-shattering.
 

Actually, no it doesn't. Look at those p values. Random chance is by far a better explanation.

And taking it up a level... from a design perspective: there were no controls to isolate placebo arms from non-placebo arms, so we don't know if there was any placebo effect at all.

Specifically, there was no non-treatment control. The results could be the random fluctuations associated with nontreatment. We don't know. The experiment is not designed to detect a placebo effect.

Also from a design perspective: what is the standard deviation of the run-to-run variability in 4-hour sessions? Why 4 hours?

This experimental protocol, for a condition this variable, is so vulnerable to Simpson's paradox that it is ill-advised for this type of inquiry.
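
To make the Simpson's paradox worry concrete, here is a toy Python illustration with invented numbers (nothing to do with this study's actual data): treatment A beats B in every subgroup, yet pooling the subgroups flips the comparison.
Code:
# Toy Simpson's paradox: per-subgroup trends reverse when the data are pooled.
# All counts are invented for illustration: (improved, total) per arm.
groups = {
    "mild":   {"A": (8, 10), "B": (18, 25)},
    "severe": {"A": (7, 25), "B": (2, 10)},
}

for name, g in groups.items():
    rate_a = g["A"][0] / g["A"][1]
    rate_b = g["B"][0] / g["B"][1]
    print(f"{name}: A = {rate_a:.0%}, B = {rate_b:.0%}")  # A wins both subgroups

a_imp = sum(g["A"][0] for g in groups.values())  # 15 improved
a_tot = sum(g["A"][1] for g in groups.values())  # of 35
b_imp = sum(g["B"][0] for g in groups.values())  # 20 improved
b_tot = sum(g["B"][1] for g in groups.values())  # of 35
print(f"pooled: A = {a_imp/a_tot:.0%}, B = {b_imp/b_tot:.0%}")  # now B wins
Aggregate across uneven subgroups - or, in a crossover, across periods with different baselines - and the pooled numbers can say the opposite of every stratum.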




It's not just the small sample sizes - it's exacerbated by the high variability from minute to minute of the effects being measured. The patients are a random noise generator.
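
As a sketch of how much "effect" twelve noisy patients can generate on their own: simulate a world with no treatment difference at all and count how often pure noise produces a mean difference as large as the study's 0.10. The per-patient noise scale below is invented, chosen only so the standard error matches the width of the reported CI.
Code:
import numpy as np

rng = np.random.default_rng(0)
n, trials = 12, 100_000
sigma = 0.32  # invented noise scale; SE = sigma/sqrt(12) ~ 0.09, matching the CI width

# Null world: zero true difference between "cheap" and "expensive" placebo.
null_diffs = rng.normal(0.0, sigma, size=(trials, n)).mean(axis=1)
print((np.abs(null_diffs) >= 0.10).mean())  # ~0.28 of pure-noise runs clear the bar
Roughly a quarter of no-effect worlds produce a "difference" at least that big - which is just the reported p = 0.28 seen from the other direction.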

Every experiment is interesting, but some are interesting only as examples of how pretty much anything can get published these days; "follow the data" has become more of a recipe for "get misinformed by publication bias" over the last 20 years. The design does not lead to the conclusions - the conclusions do not follow from the experiment, they are an unsupported claim by the authors. We see that all the time.

The interesting question is 'how did this pass peer review in Neurology?'

I anticipate we'll see analysis on NeuroLogica shortly.

Regarding the 4-hour runs... I suspect the effect disappears over longer timeframes, as the power of randomization declines and patients' presentation reverts to the mean.
 

Now I'm sorry I mentioned it. :o
 

No, don't be - sorry if I seemed negative.

It really is valuable to discuss studies that are weak, as skeptics need to know how to look for the signs.

The other disappointing lesson is that weak studies actually constitute the majority, so eventually it dawns on skeptics that the data doesn't speak the way we'd like it to. There's a study to "prove" anything and everything, and the data is often deliberately misrepresented in the interest of press-release power.
 
