
IQ Tests

OK Bpesta, I wasn't trying to 'ding' you. I was just curious what you thought.

I can tell you, I remember thinking how stupid the tests were. I knew exactly what they were asking for, but felt strangely belligerent. I put the beads on in the exact reverse order they asked for. I came up with morbid or silly pattern associations, then justified them. I'd complete sequences incorrectly (based on what the test wanted to see), then would explain how my answer was also valid. It wasn't until high school that I was apathetic enough to not bother... and they disguised this IQ test as a 'Pre-SAT cognitive awareness analysis'... or something like that. I think I was vaguely aware of what they were doing, but that was close to 1990, and wasn't there something of an uprising against IQ tests at the end of the '80s?


Cool, but it was a good ding, nonetheless, so appreciated.

In your scenario, I'd argue it's not the test's fault. You can't force someone to try on an IQ test. If you did try and the results were inconsistent, that would be another matter.

I know I open a can of worms here-- motivational explanations for differences in scores on IQ tests. I'll just leave it open for now if anyone wants to argue that angle.

But, one of my favorite quotes in the area:

Why is Yale (or Harvard, etc) such a good school?

Because it makes people smart?

or

Because smart people go there?
 
Ok, so explanatory precision isn't proof of causality, I'd agree.

But it's important nonetheless, I think.

Oh, of course. Studying unreal things is stupid... and so is studying real things that don't have an effect.

I think your challenge to me is to prove that g exists independent of factor analysis?

Here we get into terminological quagmires. The usual definitions of 'g' that I've seen define it as the underlying common factor behind variance on different types of IQ tests, essentially defining it as the statistical finding, without regard to its reality. I don't want to pose you a challenge that is impossible by definition (since anything that isn't the result of factor analysis, by definition, isn't 'g').

Are you arguing there is not some common "mental process" that drives most of the variance on different types of IQ tests? That g is just a "statistical artifact," and if only psychologists (those who invented factor analysis) would do it right, they'd see they're like the archeologist studying his shovel?

I can live with this, more or less. I accept that the statistical common factor exists, I accept that it has (possibly spurious) explanatory capacity, I have yet to see evidence of its underlying mental reality.

The criterion validity of IQ for things like educational achievement or job performance is among the most replicated effects in all of psychology. And, when one partials g out of the IQ test, the validities always crash.

Similarly, if one partialed out how mental speed moderates the relationship between paper and pencil IQ tests and job performance, the validities crash.
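The partialing claim above can be sketched with simulated data. This is a minimal illustration, not any real study: the variables `g`, `iq`, and `perf` and all numbers are invented. The point is just that when a common factor drives two measures, their raw correlation is substantial, but the partial correlation with the factor held constant collapses toward zero.

```python
import math
import random

random.seed(0)

def pearson(x, y):
    """Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, holding z constant."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

n = 5000
g = [random.gauss(0, 1) for _ in range(n)]        # hypothetical common factor
iq = [gi + random.gauss(0, 0.5) for gi in g]      # test score = factor + noise
perf = [gi + random.gauss(0, 0.5) for gi in g]    # job outcome = factor + noise

r_raw = pearson(iq, perf)                # substantial, since g drives both
r_partialed = partial_corr(iq, perf, g)  # near zero once g is held constant
```

The `partial_corr` helper uses the standard first-order formula r_xy.z = (r_xy - r_xz r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2)); "the validities crash" corresponds to `r_partialed` being near zero while `r_raw` is large.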

Isn't that a reasonable test of reality?

Not at all. It's the same causation vs. correlation argument again.

When we control for the price of rum in Havana, we find that ministers' salaries in London are random.


Incidentally, the same factor analysis techniques have been used to show conclusively that there are only 5 basic personality traits (5 g's so to speak).

Are these traits statistical artifacts too?

I don't know. You tell me. If the argument used to "show conclusively" is another fallacious inference of causation from correlation, then, yes, they might well be.

How come the personality people can't find a general factor of personality, yet the IQ people can't avoid one-- when all of them use factor analysis? If the technique were bogus for IQ, wouldn't it produce the same types of results for personality?

Different data set. If I compare ministers' salaries with rum prices with average rainfall in Peru, I'm likely to find two major factors, not just one.
 
Not at all. It's the same causation vs. correlation argument again.

When we control for the price of rum in Havana, we find that ministers' salaries in London are random.

I don't agree here. In Psychology, we have the "problem" that we are dealing with an immensely more complicated object than most other scientists: people!

There is no way to test the "reality" of complex constructs like "intelligence" or "personality" in human beings in an experiment in the strict sense.

BUT: we are not just gathering data (like rum prices in Havana and ministers' salaries in London) and hoping to find some correlation. We start with a hypothesis ("g as measured with instrument XY shows a positive correlation with job success, e.g. salary or position"), and repeated findings consistent with the hypothesis can be interpreted as confirmatory.

Fact is: different tests measuring different facets of "intelligence" show evidence of a general factor, and that general factor can be validated by external criteria (consistent with different hypotheses).

Don't confuse "IQ" with a certain test result. A test result can never be more than an estimate of a true value. Also don't confuse g with intelligence. g (in the g-factor model - there are others, but the g-factor model is the most widely accepted) is an underlying factor, but intelligence is multi-faceted. The model is able to make predictions that are confirmed in reality to a certain point (a lot better than rolling dice or asking a psychic ;o) ). If you come up with a model that makes better predictions, you are the next star in psychology - just like in every other scientific field.

There is no one test that measures "intelligence"; they always estimate certain facets of it, including g (also as an estimation).

There's an old saying among psychologists: "What is intelligence?" "It's what the test measures."
 
It seems like you're arguing factor analysis will produce different results in different data sets-- in other words, if one "factor" explains most of the item variance in a battery of tests, factor analysis will confirm that, as it has with IQ.

On the other hand, if 5 separate, uncorrelated factors are needed to explain the variance in a battery of tests, factor analysis will confirm that too, as it has with personality.

If this is your argument, I'd submit that this is the whole purpose of factor analysis: To figure out, based on how items and tests intercorrelate, the underlying psychological constructs that are causing (yes, causing!) people to score the way they do.

So, assume we give a battery of cognitive tests to 1000s of people, with items ranging from vocabulary to memory span to abstract reasoning, to block design.

What does arranging blocks real fast to form a picture have to do with one's vocabulary level?

Nonetheless, people who are fast at arranging blocks-- on average-- also have bigger vocabularies. This is so much so that the difference between Tom's score on vocab and Tom's score on block design is smaller than the difference between Tom's score on vocab and Mary's score on vocab.

In other words, despite apparent vast differences in the type of cognitive test, how well a person does on one test predicts how well that person does on the other.

This is the positive manifold, and for almost 100 years no one has been able to NOT find it whenever testing people on a battery of (seemingly different) cognitive abilities tests.

It's hard to argue there are 7 different types of important intelligences (Gardner, for example) when the scores on each "independent" measure of IQ are correlated with each other. There might be 7 correlated subfactors of IQ, but the data always show the general factor explains more of the item variance than the other factors.

If that's a problem inherent in FA, then FA should produce the same results with different data sets (like with personality). It doesn't.

Give 1000s of people a battery of test items measuring all aspects of personality (ranging from asking how much you like crossword puzzles, to how much you worry about things, to how often you show up for appointments on time).

Despite 100s of different adjectives we can use to describe someone's personality, the factor analyses show that there are only 5 independent personality factors (5, though, not 1, as with g).

Why should such a flawed technique as FA produce such consistent (but different) results for personality and for IQ?
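The contrast being argued here -- a positive manifold in ability batteries, but separate clusters in personality items -- can be illustrated with a toy simulation. Everything below is invented for illustration: the factor structure is built in by construction, and only the pairwise correlations are inspected.

```python
import math
import random

random.seed(1)

def pearson(x, y):
    """Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy / math.sqrt(sum((a - mx) ** 2 for a in x)
                           * sum((b - my) ** 2 for b in y))

n = 3000

# "IQ-like" battery: one common factor drives all six simulated tests.
g = [random.gauss(0, 1) for _ in range(n)]
iq_battery = [[gi + random.gauss(0, 1) for gi in g] for _ in range(6)]

# "Personality-like" battery: three independent factors, two tests each.
factors = [[random.gauss(0, 1) for _ in range(n)] for _ in range(3)]
pers_battery = [[f[i] + random.gauss(0, 1) for i in range(n)]
                for f in factors for _ in range(2)]

# Positive manifold: every pair in the IQ-like battery correlates positively.
iq_corrs = [pearson(iq_battery[i], iq_battery[j])
            for i in range(6) for j in range(i + 1, 6)]

# In the personality-like battery, only same-factor pairs correlate;
# cross-factor pairs hover near zero.
within = pearson(pers_battery[0], pers_battery[1])   # same factor
across = pearson(pers_battery[0], pers_battery[2])   # different factors
```

Factor analysis would extract one dominant factor from the first correlation matrix and three from the second -- the technique reports whatever structure the correlations contain, which is the point being made above.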

As far as the correlations being spurious, the convergent and divergent validity of g is well established (as mentioned above). The criterion validity of g (whether you want to call g innate intelligence, cultural bias or statistical artifact) has to be about the most replicated effect in psychology.
 
I can live with this, more or less. I accept that the statistical common factor exists, I accept that it has (possibly spurious) explanatory capacity, I have yet to see evidence of its underlying mental reality.

It has not only "explanatory" capacity, but actual predictive capacity.

Why would you like to see its underlying mental reality? How should that be done? Have you ever seen gravitation? The g-factor model is just that: a model. It works to a certain point, and it's able to predict reality (e.g. someone's success in a certain job position) without crystal balls or cards. Of course the prediction is not 100%, but it's the best we have at the moment, and it is validated according to scientific standards.
 
It seems like you're arguing factor analysis will produce different results in different data sets-- in other words, if one "factor" explains most of the item variance in a battery of tests, factor analysis will confirm that, as it has with IQ.

On the other hand, if 5 separate, uncorrelated factors are needed to explain the variance in a battery of tests, factor analysis will confirm that too, as it has with personality.

If this is your argument, I'd submit that this is the whole purpose of factor analysis: To figure out, based on how items and tests intercorrelate, the underlying psychological constructs that are causing (yes, causing!) people to score the way they do.

Well, if this is really the case, then the whole process of "factor analysis" is an extended fallacy.

Fortunately, this is not usually how factor analysis is applied.

It's hard to argue there are 7 different types of important intelligences (gardiner, for example) when the scores on each "independent" measure of IQ are correlated with each other.

Ho-hum. Now show me some evidence that does not assume that correlation implies causation.

There might be 7 correlated subfactors of IQ,

Or IQ might not exist in the first place!

As far as the correlations being spurious, the convergent and divergent validity of g is well established (as mentioned above). The criterion validity of g (whether you want to call g innate intelligence, cultural bias or statistical artifact) has to be about the most replicated effect in psychology.

Which does not make 'g' real!

Let me phrase it a bit more directly, then. Please complete the following sentence:

"If 'g' were a real psychological phenomenon instead of a reification of a statistical artifact, then we would observe ...."

I submit that there is no correlation-based criterion that can complete that sentence, because, as you have almost certainly been taught, correlation does not imply causation. "Convergent and divergent validity" are simply correlations between different aspects of behavior -- but provide no causal or ontological evidence about the underlying "factors" inferred.

Note I'm not even asking, at this point, for a citation of what has been observed. Present me with a thought experiment if you like. Explain to me an experiment that you could set up that would demonstrate the reality of 'g.' Alternatively, explain an experiment that you could set up that would demonstrate the unreality of 'g.' I'll be glad to help you run either experiment if necessary.
 
It has not only "explanatory" capacity, but actually predicting capacity.

Big deal.

I can predict that if rum prices are higher in 2010 than they are today, ministers' salaries will also be higher in 2010.

Will this prove the reality of the "factor" correlating the two?

Predictive models with the wrong underlying ontology are a standard aspect of (failed) science. Remember phlogiston?
 
Ho-hum. Now show me some evidence that does not assume that correlation implies causation.

Predictive quality.


Or IQ might not exist in the first place!
What is IQ for you? Do you deny that there are differences between people regarding their cognitive capabilities? Then I suggest you take a trip to the fstdt forum... ;)



Which does not make 'g' real!

It doesn't have to be. It's a model that works and that has been confirmed by data.
Let me phrase it a bit more directly, then. Please complete the following sentence:


"If 'g' were a real psychological phenomenon instead of a reification of a statistical artifact, then we would observe ...."
Exactly what we do observe: that a connection between measured g and real-life data exists, according to the model. It predicts what it is intended to predict.


I submit that there is no correlation-based criterion that can complete that sentence, because, as you have almost certainly been taught, correlation does not imply causation. "Convergent and divergent validity" are simply correlations between different aspects of behavior -- but provide no causal or ontological evidence about the underlying "factors" inferred.
We are talking about predictive validity here.
 
We are talking about predictive validity here.

Phlogiston was "predictively valid," too. And yet it doesn't exist; it was an artifact of the experimental framework of the time.
 
Note I'm not even asking, at this point, for a citation what has been observed. Present me with a thought experiment if you like. Explain to me an experiment that you could set up that would demonstrate the reality of 'g.' Alternatively, explain an experiment that you could set up that would demonstrate the unreality of 'g.' I'll be glad to help you run either experiment if necessary.
I don't understand what you're trying to say. Are you saying that 'g' might conceivably be real, but that you believe that it happens not to be real? Or are you saying that it's not real by definition?

We can say "correlation doesn't imply causation", but it would be nice to understand why the correlation exists. Do you have any sort of explanation for it in mind?

Maybe this is getting too philosophical, but what is "causation" anyway, if not "correlation whenever we check"?
 
Phlogiston was "predictively valid," too. And yet it doesn't exist; it was a construct artifact of the experimental framework of the time.
It predicts some things incorrectly. Otherwise, we'd still believe in it. No?
 
"If 'g' were a real psychological phenomenon instead of a reification of a statistical artifact, then we would observe ...."

That this statistical artifact predicts:

How fast one can touch the light bulb that turns on among a set of other light bulbs.

The speed with which a single neuron in the brain fires.

How fast one can judge which of two lines is longer.

Job performance for any job, with better accuracy than any other single measure of job performance.

Training ability for any job (even more strongly than it predicts job performance).

GPA to an r of .50

Years of education to an r of .55

Socio economic status to an r of .33

Teenage pregnancy rates (r = -.19)

Juvenile delinquency rates (r = -.19)

Of all traits studied, IQ is the single best predictor of leadership.

How best to pair up 3-person tank teams in the Israeli army (three people all high in this statistical artifact will perform tons better than three lows, and even much better than 2 highs and one low).

One's IQ at 6 years of age, given one's IQ at 3 months!

One's IQ in adulthood given one's IQ as a kid.

The IQ of your biological offspring, even if you didn't raise them.

But not the IQ of your foster children...
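For scale, the r values quoted in the list above translate into shared variance via r squared. A quick sketch of that arithmetic, using only the figures already given:

```python
# Correlations quoted in the list above, converted to shared variance (r^2).
quoted_r = {
    "GPA": 0.50,
    "years of education": 0.55,
    "socio-economic status": 0.33,
    "teenage pregnancy": -0.19,
}

shared_variance = {name: round(r * r, 4) for name, r in quoted_r.items()}
# An r of .50 with GPA, for instance, means IQ accounts for 25% of the
# variance in GPA -- substantial for one predictor, far from the whole story.
```

Note that the sign of r drops out when squaring, so the negative correlations (like teenage pregnancy) contribute shared variance just the same.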

I appreciate correlation does not imply causation (though causation implies correlation). Given that the inventor of factor analysis also invented the rank order correlation, and given that experimental psychology drives statistical discovery, I think it's a fair bet that these researchers realize this.

So, the only way to get at cause and effect would be to randomly assign people to different levels of g (obviously impossible) and then see how they do on various measures of life success.

Without random assignment, one can't get at cause and effect.

If you're gonna stick with this criticism, then you should throw out all / most of psychology, as we also can't get at cause (this way) whenever we study the effects of age, gender, race, political preference, marital status, personality, anything where it's impossible to randomly assign people to its levels.

That's throwing the baby out.

I'd be willing to agree at this point that g measures the white male's desire to capitalize on an evil, spurious, statistical conspiracy to keep women, minorities, children and those in wheelchairs from advancing in life.

Whatever the case, boy does it predict important life outcomes!
 
From my lay perspective, as a parent of school-aged children, this is precisely the path we're heading down. Education 'experts' are becoming more and more afraid of buckling down and calling idiots idiots,

Of course, they're not "idiots", they're "special". :rolleyes:

ETA: However, as Gump said, stupid is as stupid does. In the end, you are what you DO, not how well you think.
 
I appreciate correlation does not imply causation (though causation implies correlation). Given that the inventor of factor analysis also invented the rank order correlation, and given that experimental psychology drives statistical discovery, I think it's a fair bet that these researchers realize this.

I believe it's a fair bet that Spearman realized that, yes.

I have yet to see any evidence that Burt, Eysenck, or Herrnstein did.
 
...snip...
Whatever the case, boy does it predict important life outcomes!

It's interesting that life expectancy has much the same properties as this g.

However, from what I've read, this is attributed to the fact that doing well in these areas increases your social standing, which in turn increases your life expectancy.
 
I don't understand what you're trying to say. Are you saying that 'g' might conceivably be real, but that you believe that it happens not to be real? Or are you saying that it's not real by definition?

I'm saying I've seen no evidence that it's real. Personally, I believe that it's not real because I've seen no evidence that it's real, and it ends up getting filed in the same drawer as dragons, unicorns, and the Easter Bunny. But I'm willing to believe in the reality of the Easter Bunny under appropriate epistemological circumstances that haven't yet happened.

Factor analysis can certainly find real things. But it's also well-known as being able to "find" things that don't really exist.


We can say "correlation doesn't imply causation", but it would be nice to understand why the correlation exists.

It would indeed. And for nearly a century of research, no one has even investigated the question.

Maybe this is getting too philosophical, but what is "causation" anyway, if not "correlation whenever we check"?

Hume answered this one long ago -- "correlation whenever we check" doesn't work. The cock's crowing does not cause the sun to rise no matter how many times we check. There are a lot of formal philosophical definitions of "causation," none of which are entirely satisfactory.... but they're a lot better than naive "correlation."

[Phlogiston] predicts some things incorrectly. Otherwise, we'd still believe in it. No?

Yes, but only because someone started to actually think about the implications of reifying phlogiston as a substance. Since "substances" have positive weight, an unburned piece of wood should weigh more than the products of combustion. Doing the experiment was rather tricky, but when they finally found out how to do it, they found that "phlogiston" had negative weight, and a better description of the actual substance involved would be "lack of oxygen."

Another nail in the coffin was when people realized that the substance phlogiston would be exhaustible, but found that "heat" could be generated (via friction) more or less without limit, without exhausting the "phlogiston."

As a result, we now know of two separate aspects of combustion -- the chemical process of oxidation, and the revision of "heat" as a kind of energy, not a substance, that is produced by many processes including oxidation. From one factor ("phlogiston") comes two, because we actually thought about what we were doing.

There's an obvious, if somewhat facetious, analogy to the g-model of "intelligence." Perhaps what we measure as IQ is in fact a failure of mental process; the actual human mind/brain is ideally capable of infinite performance, but various aspects limit it. Perhaps we should be measuring a "stupidity quotient" and searching for ways, not to enhance intelligence, but to limit (aspects of) stupidity.

Similarly, perhaps the neurologists have been wrong (and the esoteric idealists right) all along, and "intelligence" is a dualistic process tapping like a radio receiver into the "mind of God," where IQ is simply a measure of how little interference is present.

Either of these cases (which I admit I find improbable) would suggest that the g-factor model is accurate, but ontologically incorrect.

Similarly, there may be some process ("oxidation") that produces behavior ("heat"). This is one of the central questions behind much theoretical IQ work -- does the g-factor model measure the process capacity for intelligence, or does it measure the behavioral results of the process? But of course, if the g-factor itself does not exist, then the question itself reveals a fundamental misapprehension caused by the reification of 'g.'

But to cut directly to the chase, what test could you perform to falsify the first hypothesis above in favor of a 'real' g-factor model?
 
Maybe this is getting too philosophical, but what is "causation" anyway, if not "correlation whenever we check"?
It is correlation for which we don't see any other possible explanatory factor.
 
