
Cognitive dissonance studies' mistake

FCP: Free Choice Paradigm
CD: Cognitive Dissonance
M. Keith Chen said:
In one of the simplest forms of a FCP, the object of study is the shifts in a subject’s choices. In a recent FCP of this type (Egan, Santos & Bloom 2007), the experiment begins with subjects rating a number of objects on a five-point scale. Then, three objects that are rated equally (say rated 4) are chosen for use in a second stage of the experiment. Note, importantly, that the discreteness of the scale leaves open the possibility that these items might not be perfectly equivalent; for example, a subject may truly rate one of the items 4.1, one 4.26, and one 4.3.
In a second stage then, a subject is asked to choose between a randomly chosen two of these items, say A and B. Calling the object which the subject chooses A, the subject is then asked to choose between B (the initially rejected item), and C (the third item that was rated 4). If subjects are more likely to choose C than B in this choice, they are said to suffer from CD.
I argue that this was to be expected in subjects with no CD. In fact, subjects should be expected to choose good C 66% of the time.
The draft paper can be found at http://www.som.yale.edu/faculty/keith.chen/papers/CogDisPaper.pdf. Chen argues that there is a widespread methodological failure in experimental psychology using the FCP.
 
I predict that this will cause cognitive dissonance in the minds of many experimental psychologists, who will resolve the dissonance by questioning the paper's validity.
 
I don't think that would be a surprise; there are a number of artifacts to scaling. But if the variation is not measured on a subscale, how can you know whether the dissonant effect is from a .1 or a .9?
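For instance (a toy sketch in Python with made-up latent utilities, along the lines of the 4.1/4.26/4.3 example in Chen's abstract):

```python
# Hypothetical latent preferences that a coarse integer scale collapses.
latent = {"A": 4.30, "B": 4.10, "C": 4.26}  # invented "true" utilities
ratings = {item: round(u) for item, u in latent.items()}
print(ratings)  # {'A': 4, 'B': 4, 'C': 4} -- all "equally liked" on the scale
# But pairwise choices still follow the hidden order A > C > B,
# so A beats B and then C beats B, without any dissonance being involved.
```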

There are other influences I would be much more worried about: the tendency of subjects not to report what they really think in the first place, and the way people always seem to choose the middle value when the scale has an odd number of points. There are many different strategies to cope: expanding the scale (to, say, twenty points), or having people mark their position on a line rather than assign a number.

You could even try a spider-web style of decision making, where people try to actually map the variables that affect the choice.

Self-answered surveys are a wonder and a mess at the same time. Take the Beck Depression Inventory: even given a binary (yes/no) choice, people will put down an answer that does not reflect what they say in a later interview. It is wonderful, weird, and fraught with peril.

So it would not surprise me in the least if scaling hides a huge source of error.
 
Chen argues that there is a widespread methodological failure in experimental psychology using the FCP.

I didn't read the paper, but from the abstract you quoted this is nothing but a form of the Monty Hall problem.

Suppose your true preference order is 1 > 2 > 3. You're presented with either {1,2}, {1,3}, or {2,3} with equal probability. In the first case you will choose B (the rejected item, 2, still beats the held-out item, 3), but in the other two you will choose C (just like switching in the MH problem). The correct odds are 2/3, not 66% -- but presumably that's what he means.
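A quick brute-force check of the three cases (my own sketch, with item labels standing for preference ranks, lower = better; this is just my reading of the setup, not code from the paper):

```python
from itertools import combinations

# Enumerate the three equally likely first pairs. True preference
# order is 1 > 2 > 3 (labels are ranks; a lower number is a better item).
second_pick = []
for pair in combinations([1, 2, 3], 2):
    b = max(pair)                      # B: the rejected (worse) item of the pair
    c = ({1, 2, 3} - set(pair)).pop()  # C: the item held out of the first choice
    second_pick.append("C" if c < b else "B")

print(second_pick)                 # ['B', 'C', 'C']
print(second_pick.count("C") / 3)  # 0.666... = 2/3, with no dissonance at all
```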

If people really wrote papers overlooking such an obvious fact, I'm shocked.
 
I didn't read the paper, but from the abstract you quoted this is nothing but a form of the Monty Hall problem.

Suppose your true preference order is 1 > 2 > 3. You're presented with either {1,2}, {1,3}, or {2,3} with equal probability. In the first case you will choose B (the rejected item, 2, still beats the held-out item, 3), but in the other two you will choose C (just like switching in the MH problem). The correct odds are 2/3, not 66% -- but presumably that's what he means.

If people really wrote papers overlooking such an obvious fact, I'm shocked.

I'd be shocked too. So I read some of the referenced papers. Here's the most blatant failure.

The Origins of Cognitive Dissonance: Evidence From Children and Monkeys
Louisa C. Egan, Laurie R. Santos, and Paul Bloom

This paper shocks me. Here's the children's procedure:
Setup:
The experimenter assessed children’s preferences for different stickers using a smiley-face rating scale that included six faces ... Each child included in the sample rated stickers until the experimenter was able to identify at least two triads of stickers for which the child had equal liking (i.e., stickers the child had matched to the same face on the scale).

Procedure:
Once a child had rated the stickers, the experimenter randomly labeled the stickers in each triad as A, B, and C. The child was then given choices involving each triad of stickers. Each child participated in one of two conditions, either the choice condition or the no-choice condition. In the choice condition, the child was given one choice between A and B ... Next, the child was given a similar choice between the unchosen alternative (i.e., either A or B, depending on which option the child had chosen) and C (i.e., the novel yet equally preferred alternative) ...
In the no-choice condition, each child received either A or B ... randomly ... After receiving this sticker, the child was given a choice between the unreceived alternative (again, either A or B, depending on which one the experimenter had just given the child) and the equally preferred alternative, C.

Result:
Children in the choice condition were more likely to prefer option C (mean percentage choice of C = 63.0%) than were children in the no-choice condition (mean percentage choice of C = 47.2%). Average choice of C in the choice condition differed reliably from chance, according to a one-sample t test with a hypothesized mean of 50%, t(14) = 2.28, p = .04, two-tailed. This was not true for the no-choice condition, t(14) = 0.53, p = .60, two-tailed.
They also did a very similar experiment with capuchins.

That one is clearly a knock-down bad test. But most FCP experiments use a different procedure. The paper argues against that more subtle procedure as well, and to my eyes, convincingly.
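To see how far the selection artifact alone goes, here's a quick simulation of both conditions under a no-dissonance model (my own sketch, assuming hidden utilities drawn uniformly at random for three equally rated stickers; none of this is from the paper):

```python
import random

def picks_c(condition):
    """One no-dissonance trial: three equally rated items with random
    hidden utilities; returns True if the subject picks C over B."""
    u = [random.random() for _ in range(3)]
    pair = random.sample(range(3), 2)
    if condition == "choice":
        b = min(pair, key=lambda i: u[i])  # B lost a comparison: it's the worse item
    else:
        b = random.choice(pair)            # B is just the item not handed over: a coin flip
    c = ({0, 1, 2} - set(pair)).pop()      # C never competed in the first stage
    return u[c] > u[b]

n = 100_000
for cond in ("choice", "no-choice"):
    rate = sum(picks_c(cond) for _ in range(n)) / n
    print(f"{cond:9s}: P(choose C) ≈ {rate:.3f}")
# choice   : ≈ 0.667  -- not far from the observed 63.0%
# no-choice: ≈ 0.500  -- not far from the observed 47.2%
```

So the reported gap between conditions is roughly what pure selection predicts, with no dissonance needed.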
 
It does seem like an odd way to measure CD -- note, the problem then lies with the researchers and not necessarily the construct.

Also, unless I'm missing something, the paper hasn't been published. It's just a draft?
 
It does seem like an odd way to measure CD -- note, the problem then lies with the researchers and not necessarily the construct.

Also, unless I'm missing something, the paper hasn't been published. It's just a draft?

The draft paper can be found at http://www.som.yale.edu/faculty/keith.chen/papers/CogDisPaper.pdf. Chen argues that there is a widespread methodological failure in experimental psychology using the FCP.
I realize now that linking to and discussing a draft paper might be socially unacceptable. I don't actually know the norms here, so someone please take me to task if I shouldn't have.

Yes, true, you should be able to take the same data and analyze it correctly. But it may mean we need to reexamine quite a lot of CD research.
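E.g., one simple way to reanalyze (the counts below are invented for illustration -- the paper only reports per-child mean percentages, and it used a one-sample t test rather than a binomial test) is to test against the 2/3 null instead of 50%:

```python
# Hypothetical counts for illustration only -- not the paper's data.
from scipy.stats import binomtest

c_chosen, total = 38, 60  # invented: C picked ~63% of the time

print(binomtest(c_chosen, total, p=0.5).pvalue)  # the paper's null: chance = 50%
print(binomtest(c_chosen, total, p=2/3).pvalue)  # Chen's null: selection alone gives 2/3
```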
 
I dunno that it's unacceptable to discuss it, but until and unless it gets published in a good journal, it would be premature to slam experimental psychology as a whole (theoretically, if it's that bad -- I didn't read it -- it won't get into a good journal, and the field will then be saved!).
 
I do think it's odd (if I skimmed it right) to bank the key manipulation on a test's imprecision (that a 4 might really be a 4.3 or a 4.1).

I'd perhaps be more impressed if they used something rated a 3 earlier that subjects now picked over a 4, because a prior choice between two 4-rated items had created dissonance. Then we wouldn't need to get into all this probability crap. That would be fairly striking and less complicated -- jmo.

ETA: I see the guy's a management professor. We do state-of-the-art experimental psych research in management, but it's circa 1972 (that said, I bet his salary's $150,000+!).
 
