• Quick note: the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you still see problems, let me know.

Does cognitive testing and training work?

Here's what Sternberg says:

Of course, there is good evidence for the validity of so-called g-based measures for predicting many different criteria. It is unclear why people continue to conduct studies on the external validity of g-based predictors, as their validity has already been conclusively shown and there are no more serious skeptics to convince. Although there is no harm in endlessly repeating arguments already made or in conceptually replicating studies done over the past 100 years or so, it is not clear that there is much benefit either, as the arguments already are established as correct. A better use of intellectual, financial, and time resources is to seek psychologically to understand g through internal-validation studies, as many investigators are doing, or to explore diverse classes of expanded measures—outside the range of g-based measures typically used on conventional tests—that might add to the external validity of g-based measures. Our explorations of such expanded measures suggest that they can successfully augment the prediction provided by g-based measures.


http://www.isironline.org/resources/pdf/abstracts2001.pdf
 
Dann

I've used IQ score and g here interchangeably. It's shorthand for this:

We can't see intelligence, nor can we tell how smart someone is by looking at him. Instead, we test for it (as we do for any other thing we can't see: knowledge of a college course and the midterm exam the professor gives, personality traits and the Big Five test, and so on).

The test score I can observe, and I use it to make inferences about the thing I cannot observe (in this case, intelligence).

This gets us into very basic concepts in psychometrics like reliability and validity.
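(For anyone not familiar with those terms, the classical test theory sketch is that an observed score is modeled as a true score plus measurement error,

X = T + E

and reliability is the share of observed-score variance that is true-score variance, Var(T) / Var(X). Validity is the separate question of whether the score actually relates to what it is supposed to measure or predict. That's the generic textbook framing, nothing specific to any one IQ test.)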

The shared variance among test items on an IQ test is g. That, to me, is intelligence. The non-shared variance in test items is caused by error or uniqueness. Though it contributes to one's IQ score, it is not g, nor is it intelligence (it might reflect some narrower, more specific mental ability, though).
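To make that split concrete, the textbook single-common-factor model writes each standardized subtest score as a loading on g plus a unique part (this is the generic formulation, not the scoring rule of any particular battery):

x_j = a_j * g + u_j
Var(x_j) = a_j^2 + Var(u_j) = 1

The a_j^2 piece is the shared, g-related variance; Var(u_j) lumps together the subtest's specific ability and measurement error.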

Fortunately, one can separate the IQ test score from g. It has been done for decades with factor analysis, and it is now done very elegantly with a technique called structural equation modeling (SEM). SEM is taking over social science as a research tool; it's used everywhere, not just in IQ research. It's clear and non-controversial that one can use IQ test scores to estimate g, and that the g estimates are pure (the g factor that emerges does so only if it explains the shared variance among the indicators, after partialling out error and uniqueness).
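As a toy illustration of what "extracting the shared variance" means in practice, here is a minimal sketch in Python using made-up numbers and scikit-learn's FactorAnalysis; it is not how any real IQ battery or SEM package is actually scored, just a demonstration that a single common factor can be recovered from correlated indicators:

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_people, n_subtests = 2000, 8

# Made-up latent ability and made-up loadings, purely for the demo.
g_true = rng.normal(size=n_people)
loadings = rng.uniform(0.5, 0.9, size=n_subtests)

# Each observed subtest = shared part (loading * g) + unique part (noise).
unique_sd = np.sqrt(1.0 - loadings**2)  # keeps each subtest's variance near 1
scores = g_true[:, None] * loadings + rng.normal(size=(n_people, n_subtests)) * unique_sd

# Fit a one-factor model and estimate each person's standing on the factor.
fa = FactorAnalysis(n_components=1, random_state=0).fit(scores)
g_hat = fa.transform(scores).ravel()

print("estimated loadings:", np.round(np.abs(fa.components_.ravel()), 2))
print("correlation of estimated g with true g:",
      round(abs(float(np.corrcoef(g_hat, g_true)[0, 1])), 3))

With loadings in that range the estimated factor typically correlates above .9 with the simulated g, which is the whole point of the demo: the shared variance can be pulled out of the subtest correlations, separately from the unique parts.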

It's hard to take anything you say about IQ seriously when you refuse to accept even the basic empirical facts.
 
Very interesting! I've been using "predicative" all my life. About two weeks ago, someone called me on it (in a context not related to anything re IQ). I spent a small amount of time googling it. It seems "predictive" is more popular than "predicative", but enough people use "predicative" for me to think I'm not making a typo.

It does make me wonder now, though, whether I and the others who use it are wrong and it should be "predictive".

If anyone has info on the origins of "predicative" in the context of testing, please share.

What does this say about your IQ? ;)

The reason I don't find the distribution-of-IQ question particularly interesting (to me; just my opinion) is that the tests obviously "work". By that I mean that even though they only rank people, the ranks correlate significantly with many important outcome variables.

But if you don't care about the distribution of IQ, why use it at all? The result of an IQ test is a number, easily comparable across people. That's the idea of having an IQ factor: to rank people.
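To make the "rank" point concrete (a throwaway sketch; the only assumption is the usual deviation-IQ convention of mean 100 and SD 15 with an approximately normal distribution):

from scipy.stats import norm

# Convert a few deviation-IQ scores (mean 100, SD 15) to percentile ranks.
for iq in (85, 100, 115, 130):
    print(f"{iq} -> {100 * norm.cdf(iq, loc=100, scale=15):.1f}th percentile")

A score of 115 just says "higher than roughly 84% of the norming sample"; it's the distributional convention that turns the raw number into a rank.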

And, again, g often emerges as the single best predictor of the outcome.

What, exactly, does g predict?

I stumbled across a conference abstract by R. Sternberg (a very big name in the field, and very anti-g). He wasn't questioning the validity of g; rather, he was calling for people to stop doing research on it, because it is so well established that g is reliable and has "predictive" validity that further studies are a waste of time. He's getting sick of people replicating and re-replicating the same findings, and he argues that the point is so well established that further replications are a waste of resources.

This to me is a compelling appeal to authority. Read anything by Sternberg and it's clear he's as anti-g as Gould. Yet he admits the points I've been trying to make here (re g being measurable, ranking people accurately, and correlating with many outcome variables).

That really ought to stop the debate at this level. That IQ/g can be measured and predicts things is an empirical fact.

Debates about what g is, and about whether it is largely genetic and the explanation for group differences on it, are fine. Really, though, the other stuff is settled and non-controversial even among experts who despise the concept (true experts, not hacks like Gould).

Do we still debate whether psychics can talk to dead people?

I've used IQ score and g here interchangeably.

You can't do that if IQ can change but g is constant.

Any chance of you addressing post #88?
 
