
IQ Tests

But you are just dancing around the inevitable: given a high positive correlation, are you looking at cause and effect, or at other variables in the mix that affect both? If A>C, and A>B, and you observe B & C with high correlation, what now?

And in the case of IQ tests, what variables do you select to explain "reasonable" correlations of test scores to success in specific endeavors? Do you suggest IQ test results are all meaningless?
A problem is that correlation doesn't show A>C or A>B. It could be C>A and B>A. Pearson's r doesn't distinguish. Maybe I don't understand what you mean. However, nothing is stopping a D from causing A, B and C.
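To see how easily that happens, here's a minimal numpy sketch (the letters are just the A, B, C, D above; every number is invented for illustration): one hidden D drives A, B and C, and all three end up solidly correlated with each other even though none of them causes any of the others.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# D is the hidden common cause; A, B and C each depend on D plus noise.
D = rng.normal(size=n)
A = D + rng.normal(size=n)
B = D + rng.normal(size=n)
C = D + rng.normal(size=n)

# All three pairwise correlations come out near 0.5, yet there is
# no causal arrow among A, B and C themselves.
print(np.corrcoef(A, B)[0, 1])
print(np.corrcoef(A, C)[0, 1])
print(np.corrcoef(B, C)[0, 1])
```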
 
A problem is that correlation doesn't show A>C or A>B. It could be C>A and B>A. Pearson's r doesn't distinguish. Maybe I don't understand what you mean. However, nothing is stopping a D from causing A, B and C.

If I'm understanding correctly, the point he's getting at is that something is causing the correlation. Correlation alone doesn't tell us if A causes B or B causes A or some other factor, C, causes both A and B.

However, it's reasonable to assume that there is something causing the correlation. If it isn't someone's innate intelligence (code name g) then what might it be that is causing the consistent correlations between performance on intelligence tests and all those other things?
 
If I'm understanding correctly, the point he's getting at is that something is causing the correlation. Correlation alone doesn't tell us if A causes B or B causes A or some other factor, C, causes both A and B.

However, it's reasonable to assume that there is something causing the correlation. If it isn't someone's innate intelligence (code name g) then what might it be that is causing the consistent correlations between performance on intelligence tests and all those other things?


Indeed, this is the key problem I have with Kitty's argument against g. The convergent (predicts things it should) and divergent (doesn't predict things it shouldn't) validity of g is amazing.

The validity exists for nearly every important life outcome: g predicts it (often better than any other single measure).

Even if it's a third variable (not g or intelligence as measured by IQ tests) that's causing the correlations, at a practical level: so what?

wonderlic.com lets you measure g online in about 11 minutes, for $1.72 per test (if bought in bulk).

11 minutes and under $2 gives you the single best measure ever discovered for predicting important life outcomes. Whatever the third variable might be, surely we couldn't measure it and get more utility than we get with g.

At a scientific level, a third variable can't be ruled out-- perhaps the relationship between g and success is mediated by something else.

But, the point I was claiming is that basic mental processes are the causal link between g and success prediction.

Control for the efficiency of these processes, and the g/success correlations go away.

Given how basic these mental processes are-- although these methods cannot rule out a 4th variable-- no one seems to be able to offer anything viable as a candidate (for what besides g the IQ test is measuring which makes it predict things so well).
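To make the mediation claim concrete, here's a toy simulation (every effect size is invented, purely for illustration): processing speed drives both the IQ score and the life outcome, so the raw IQ/outcome correlation is sizeable, but it collapses once speed is partialled out.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical setup: basic processing speed drives both score and outcome.
speed = rng.normal(size=n)
iq = 0.8 * speed + rng.normal(scale=0.6, size=n)
success = 0.7 * speed + rng.normal(scale=0.7, size=n)

def partial_r(x, y, z):
    """Correlate the residuals of x and y after regressing each on z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

print(np.corrcoef(iq, success)[0, 1])  # sizeable raw correlation (~0.55)
print(partial_r(iq, success, speed))   # near zero once speed is controlled
```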
 
If I'm understanding correctly, the point he's getting at is that something is causing the correlation.

Which is an unproved assumption.

You may find the assumption reasonable. I don't. And the fact that no one has been able to find evidence to support the assumption in 100 years of looking makes me even more suspicious.
 
But, the point I was claiming is that basic mental processes are the causal link between g and success prediction.

Control for the efficiency of these processes, and the g/success correlations go away.

And here, again, is the basic flaw in the argument.

"Control for" is a fundamentally statistical process; it involves calculating the correlation between two entities after subtracting off, from each, the linear contribution that can be attributed to the controlled variable.

You then present -- explicitly, in the above -- this correlation as proof of a "causal link."

It doesn't work that way.
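For concreteness, here is the textbook version of that operation (first-order partial correlation), sketched in a few lines. Note that every quantity on the right-hand side is itself just a correlation among observed variables, so nothing in the arithmetic can speak to causal direction.

```python
import numpy as np

def partial_corr(x, y, z):
    """First-order partial correlation r_xy.z:
    (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))
    """
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
```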

no one seems to be able to offer anything viable as a candidate (for what besides g the IQ test is measuring which makes it predict things so well).

I'm perfectly willing to offer statistical incompetence (as well as a demonstrated history of outright cheating -- see Burt's work) on the part of IQ theorists as an explanatory candidate.
 
And here, again, is the basic flaw in the argument.

"Control for" is a fundamentally statistical process; it involves calculating the correlation between two entities after subtracting off, from each, the linear contribution that can be attributed to the controlled variable.

You then present -- explicitly, in the above -- this correlation as proof of a "causal link."

It doesn't work that way.


I'm perfectly willing to offer statistical incompetence (as well as a demonstrated history of outright cheating -- see Burt's work) on the part of IQ theorists as an explanatory candidate.

Statistical incompetence?!-- these guys invented half the statistics used in other sciences. Spearman alone has factor analysis, rank-order correlation and the classical model of reliability to his credit. These three things-- without even mentioning g-- have done more to advance humanity than anything Gould's done (for example).

But, surely, even though he invented the stats, Spearman didn't understand how to apply them appropriately in his area of expertise. For that we need other scientists in other areas who remember their intro research-methods mantras, like "correlations don't imply causality."

And, let's discredit Jensen because 100 years ago Burt *may* have faked some data.

Should we throw out Gould's punctuated equilibrium because of the Piltdown hoax some 50 years ago?

Should we discredit modern medicine because doctors back in the day treated diseases with leeches?

Are you gonna tell us next that smoking doesn't cause lung cancer, since (afaik) no one's ever randomly assigned people to smoking groups to truly get at cause and effect? Sure, perhaps someone's done it with rats, but then let's drag out external validity as another buzzword and discredit that research.

I'm not sure that I'm right re my world view as it relates to g. I'm positive you're wrong in treating experimental psychology as soft.

Back to the original issue: The correlations between g and the things referenced above exist. I'm fairly certain you don't dispute that.

Your claim, I think, is that g is an artifact; an invalid indicator of intelligence (whatever that is). So, does this mean you think the correlations are either spurious (but why then have they been replicated 100s of times) or that the correlations just don't show that mental processes are the cause of g, and that g is the (partial) cause of success in many life outcomes?

Does this argument seem valid:

A real correlation (real, meaning not spurious, but replicated every time someone bothers to collect the data) between x and y implies that something is causing the relationship. It could be x causing y, or y causing x, or some third variable (or combination of third variables) that mediates the relationship between x and y.

But, it has to be one of those three possibilities, or you will not ever observe a "real" correlation.

So, there is a "real" correlation between g and the speed with which a single neuron in one's brain fires.

What's the cause?

Is it score on an IQ test causing speed of processing? (seems backward).

Is it speed of processing causing scores on the IQ test? (my world view).

Or is it some third variable causing the relationship?

If so, give me an example of a plausible third variable that might mediate this correlation.

If the third variable is the true cause, then when one controls for that third variable, the correlation between IQ and neural speed should go away. This wouldn't positively prove that the third variable was indeed the cause (that would be the fallacy of affirming the consequent), but it would cast doubt on the notion that speed causes intelligence.

Ok, so what's the third variable?
 
Experimental psychology involves scientists doing experiments. They manipulate independent variables and reliably measure any resulting effect on operationally defined dependent variables, while attempting to control for any possible confounding variables.
Correlational research has its place, but it is not doing experiments.
 
Robust correlations often imply some sort of causal story, whether common cause or something more complicated. Hans Reichenbach suggested the Principle of the Common Cause, which asserts basically that robust correlations have causal explanations, and if there is no causal path from A to B (or vice versa), then there must be a common cause, though possibly a remote one.

Reichenbach's principle is closely tied to the Causal Markov condition used in Bayesian networks. The theory underlying Bayesian networks sets out conditions under which you can infer causal structure, when you have not only correlations, but also partial correlations. In that case, certain nice things happen. For example, once you consider the temperature, the correlation between ice-cream sales and crime rates vanishes, which is consistent with a common-cause (but not diagnostic of that alone).
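A quick simulation of that example (all numbers invented): temperature drives both series, and holding temperature (nearly) fixed makes the ice-cream/crime correlation evaporate.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

temp = rng.normal(size=n)               # the common cause
ice_cream = temp + rng.normal(size=n)   # sales rise with temperature
crime = temp + rng.normal(size=n)       # so do crime rates

print(np.corrcoef(ice_cream, crime)[0, 1])   # ~0.5 marginally

# "Once you consider the temperature": within a narrow temperature
# band, the correlation vanishes.
band = np.abs(temp) < 0.1
print(np.corrcoef(ice_cream[band], crime[band])[0, 1])  # ~0
```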
 
Statistical incompetence?!-- these guys invented half the statistics used in other sciences. Spearman alone has factor analysis, rank-order correlation and the classical model of reliability to his credit. These three things-- without even mentioning g-- have done more to advance humanity than anything Gould's done (for example).

And to the best of my knowledge, Spearman never claimed -- as you do -- that 'g' must represent a real thing.



Back to the original issue: The correlations between g and the things referenced above exist. I'm fairly certain you don't dispute that.

I do not.


Your claim, I think, is that g is an artifact; an invalid indicator of intelligence (whatever that is).

It is.

So, does this mean you think the correlations are either spurious (but why then have they been replicated 100s of times) or that the correlations just don't show that mental processes are the cause of g, and that g is the (partial) cause of success in many life outcomes?

The latter. Note the use of the word "cause" (twice) -- which cannot be confirmed statistically.

Does this argument seem valid:

A real correlation (real, meaning not spurious, but replicated every time someone bothers to collect the data) between x and y implies that something is causing the relationship.

No. It doesn't. Spearman, as far as I can tell, is unique in the history of IQ psychometric practitioners in recognizing that fact.
 
No. It doesn't. Spearman, as far as I can tell, is unique in the history of IQ psychometric practitioners in recognizing that fact.

Well, I'd argue, and I think philosophy of science types would agree that:

correlation is necessary but not sufficient for causality.

I think resolving whether this is true or not is key to advancing our debate here.

Any philosophy types help us out?

Correlations don't imply causation....check
Causation, though, implies correlation...check

Can we go from there to say:

Any real correlation that exists must be caused by something (or some combo of variables, whether x, y or z).

Can a "real" (i.e., replicable) correlation exist without something somewhere causing that correlation?
 
correlation is necessary but not sufficient for causality.

I think resolving whether this is true or not is key to advancing our debate here.

Demonstrably untrue, depending upon the experimental setup and specifically upon the confounding factors. Confounding factors are capable, not only of creating spurious correlations where a better-done experiment would find none, but also capable of masking genuine correlations related to underlying causes.

As a particularly incompetent example -- let's search for a relationship between height and geographic origin within the United States. (My working hypothesis -- different areas have different cultures, and different regional diets in particular, and I expect that the difference in diet will result in people growing taller in regions with "better" diets.)

So I gather a sample of people -- women, to avoid confounds for sex -- and check where they grew up, and measure their height.

However, I'm sufficiently clueless as to gather an unrepresentative sample, specifically from the Rockettes, who by selection are all between 5'6 and 5'11 -- and as a result I find no correlation. Experimental failure on my part has masked the genuine causal link.

Of course, what I do by blatant incompetence, someone else could do by subtle incompetence, or simply by conspiracy of the random numbers (aka good old-fashioned bad luck).
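Here's a toy version of the Rockettes scenario (effect sizes invented): diet genuinely causes height in the simulation, but sampling only people inside a narrow height band attenuates the observed correlation drastically.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Pretend regional diet genuinely shifts height (made-up effect size).
diet = rng.normal(size=n)                                  # diet quality
height = 65 + 2.0 * diet + rng.normal(scale=2.5, size=n)   # inches

print(np.corrcoef(diet, height)[0, 1])   # ~0.6 in the full population

# Now sample only dancers between 5'6" and 5'11" (66 to 71 inches).
keep = (height >= 66) & (height <= 71)
print(np.corrcoef(diet[keep], height[keep])[0, 1])  # badly attenuated
```

(Restriction of range alone attenuates rather than zeroes the correlation; a tight enough band, plus ordinary sampling noise in a realistically small sample, finishes the job.)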


Causation, though, implies correlation...check

No. See above.

Can we go from there to say:

Any real correlation that exists must be caused by something (or some combo of variables, whether x, y or z).

Only if you're happy committing (and committing to) the 'affirming the consequent' fallacy.
 
I'm not talking about betting on a null hypothesis. Not finding a correlation means it either doesn't exist, or you didn't have the power to detect it.

I mean once you have an established correlation...

In your example, diet is causing differences in height, but you tested the hypothesis on a sample with severe range restriction. So, the fact that you didn't find the correlation says nothing about the causal relationship. It's an unfair test.

I'm talking about the opposite scenario. We have the correlation; established and replicated (so it can't be spurious). If there's a third variable (or confounding variable-- same thing to me in this case) show it to us. But something must be causing the established relationship between x and y (the question being, is x the cause, or y, or z??)

I still think: causation implies correlation is a valid inference.

Can you elaborate on how it's an example of the fallacy of affirming the consequent?
 
I'm not talking about betting on a null hypothesis. Not finding a correlation means it either doesn't exist, or you didn't have the power to detect it.

There's a potential ambiguity in the word "power" here. An experiment can lack "power," in the strict statistical sense, if there is a sufficiently high probability of making a type II error. But the probability of making a type II error can really only be assessed against a stated alternative to the null hypothesis -- and there's no guarantee that either hypothesis is the correct one.

More accurately, any statistical hypothesis test is a comparison between the null hypothesis and the "experimental hypothesis," to see which one better explains the data. In particular, we reject the null hypothesis if the experimental hypothesis better explains the data. But if neither the null hypothesis nor the experimental hypothesis actually explains the data particularly well -- for example, because both make the wrong sort of ontological assumptions -- then you can easily end up in a situation where, ontologically, the experimental hypothesis is more correct, while observationally the null hypothesis appears more correct. And vice versa, of course.

For example, if I assume that A causes B (as my experimental hypothesis), I would predict a correlation between A and B. And that's fine as a working hypothesis, but not as a conclusion. Similarly, my null hypothesis would be that there is no relationship (i.e., no correlation) between A and B.

However, suppose the actual reality of it is that A does indeed cause B, but that a third factor, C, which I am unaware of and unable to control, actually causes not-B. And suppose further that, in the experimental setup I'm using, the process I use to ensure large amounts of A also ensures large amounts of C as well. This will cause the hoped-for correlation between A and B to systematically vanish until I can correct for this bias. But ontologically, it's still the case that A causes B. I thus have a situation where I can replicate the experiment as much as I like, but I am still drawing an incorrect causal conclusion. Lack of correlation does not, in this case, prove lack of causation. And it will continue to be replicable until and unless someone identifies the C-factor and finds a way to control for it. And it's not a question of "power" in the narrow sense, but of experimental design.
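A toy version of this setup (all coefficients invented): A genuinely causes B, but the procedure that raises A drags C along with it, and C pushes B back down, so the observed correlation sits at zero no matter how often the experiment is replicated.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# The procedure that produces A also produces C.
A = rng.normal(size=n)
C = A + rng.normal(scale=0.1, size=n)   # C rides along with A

# Reality: A causes B, but C cancels the effect.
B = A - C + rng.normal(size=n)

print(np.corrcoef(A, B)[0, 1])  # ~0: the genuine A->B link is masked
```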



I mean once you have an established correlation...

Well, this is exactly the opposite. Suppose that A and B are not causally linked, but my process of manipulating A implicitly controls C, which does have a causal link to B. In this case, I can establish (and replicate) as strong a correlation as one likes, but the causal link one infers is bogus.

Now, of course, you can argue that what's really present is an indirect causal link between A and B-- what's there is a causal link between A and C, and between C and B. But look at what you've done -- you've 'reified' a direct causal link, when the underlying reality is two related links.

And the real problem is that the postulated underlying causal links between A and C (or C and B) are themselves only postulated. What if the actual causal structure were really A has an effect on D which has an effect on E which has an effect on C, which has an effect on .... B?

You have 'reified' a single underlying cause and causal link between A and B, a serious ontological error given the actual situation of an underlying causal web. And, of course, since in IQ studies we can't actually 'manipulate' the underlying independent variables, the possibility of such spurious reification increases manyfold.

That's the error IQ theologians tend to make. How can you demonstrate that the proposed link between A and B is genuine? Factor analysis will tell us only that A and B covary, which we knew. It will further tell us that we are capable of describing the relationship between A and B in terms of a single underlying parameter. It will not tell us whether or not that underlying parameter actually exists or corresponds to anything other than a convenient mathematical simplification.
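To illustrate that last point, a sketch (all loadings invented): generate six test scores from two independent latent causes, and the first eigenvector of the correlation matrix will still hand you a single dominant "factor" with uniformly positive loadings. The single parameter summarizes the data; that alone doesn't make it a real entity.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5_000

# Two *independent* latent causes, each loading on all six measures.
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)
load1 = np.array([0.7, 0.6, 0.5, 0.3, 0.2, 0.2])
load2 = np.array([0.2, 0.3, 0.3, 0.5, 0.6, 0.7])
X = np.outer(f1, load1) + np.outer(f2, load2) \
    + rng.normal(scale=0.5, size=(n, 6))

R = np.corrcoef(X, rowvar=False)
w, v = np.linalg.eigh(R)        # eigenvalues in ascending order
print(w[-1] / w.sum())          # one component "explains" a large share
print(v[:, -1])                 # with loadings all of one sign
```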
 
Yes, but then we have a theory-- that g measured by IQ tests is the manifest indicator of the latent trait known as neural efficiency.

More specifically, that intelligence is mainly neural efficiency, that IQ tests measure individual differences in this ability, and that individual differences in this ability have causal effects on many (mostly all?) important life outcomes.

But, we're good Popperian scientists, and so we don't try to prove the above theory; instead we try to falsify it.

So, we measure other important theoretical constructs that might be the third variable-- might mediate the relationship between IQ and various life outcomes.

And note, the g theory above is easily falsifiable. Show that any psychological or physiological construct besides neural efficiency explains (via path analysis, structural equation modelling, or simple regression) the relationship between IQ and success, and the g theory is falsified.
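The bare-bones version of that test, via simple regression (variable names hypothetical; a real study would use the SEM or path-analytic machinery just mentioned): regress success on IQ plus the candidate construct, and see whether the IQ coefficient survives.

```python
import numpy as np

def iq_coefficient(iq, candidate, success):
    """OLS of success on [1, iq, candidate]; returns the IQ coefficient.
    If adding the candidate construct drives this coefficient to ~0,
    the candidate -- not g -- is carrying the IQ/success relationship,
    and the theory is falsified."""
    X = np.column_stack([np.ones_like(iq), iq, candidate])
    beta, *_ = np.linalg.lstsq(X, success, rcond=None)
    return beta[1]
```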

But these studies have been done-- for decades if not 100 years.

Control for personality, motivation, emotional IQ, everything in the environment, and yet the correlation between IQ and life success remains.

So, despite many attempts at falsifying the idea that g is neural efficiency, none have done so.

And, since g is so basic, reliable to measure, and practical to measure, its utility is psychology's biggest gift to date to humanity.

So, as a good scientist, do the studies prove my g theory? No-- Hume would roll over in his grave.

But, the studies corroborate the theory, and the theory is also quite parsimonious: testable, observable and falsifiable.

What more do you want from a theory?

And, if it's an artifact, why does it predict success in life so well-- even when every other plausible third variable researchers could think of, over 100 years of research, has been controlled for?

(we've ruled out so many third variables for what IQ might be-- besides g-- that one current best theory says that africans score lower on IQ tests because they don't value intelligence, and instead have a culture that values things like spirituality, musicality, and athleticism. Talk about racist!).
 
Yes, but then we have a theory-- that g measured by IQ tests is the manifest indicator of the latent trait known as neural efficiency.
How does increased neural efficiency make you better at IQ tests?
Surely increased processing power in the brain just corresponds to arriving at an answer (be it right or wrong) more quickly.

I can't remember feeling outclassed in terms of raw processing power by people smarter than me, but what is always obvious in people who I feel are smarter than me is a greater depth of knowledge and a more refined quality of thought.

This is one of those cases where perfect practice makes perfect, and it seems intuitively obvious to me that they have got there by thinking well, time and time again.

You'll get no argument from me that neural efficiency can be a limiting factor in how well you think, but I don't see how it can be more than that.
 
I have to agree on this one. I think that speed of thought is an independent factor from quality of thought; certainly, someone can quickly deduce an answer, but be absolutely wrong. And for the most part, thought quality seems to be a trainable aspect of intelligence.
 
And, if it's an artifact, why does it predict success in life so well-- even when every other plausible third variable researchers could think of, over 100 years of research, has been controlled for?

Because the research that has been done over the past 100 years has been so incompetent and so dominated by the 'g-must-be-real-because-it-explains-something' theology that the research findings are useless. The "controlling" is carefully set up by misuse of statistics (notice how carefully The Bell Curve shifts between entirely different measures of association -- correlation by individuals, correlation by groups, et cetera) in order to achieve a predesignated conclusion.

The idea, of course, being that if one test doesn't show what you want, you run a different test until you get what you want, and you publish only the final analysis.

And if all else fails, you falsify your data.
 
I think the idea is that if you process info fast you can process more info at one time. The benefits of this would be many-- in the same time period you can consider more options and make more associations between the concept you're thinking about and other concepts you have stored in longer term memory.

It's linked to working memory capacity-- consider the analogy of a bucket (being how much info you can think about at one time). People have different bucket sizes-- individual differences in working memory capacity.

Bucket size is important in determining how much info you can process, but so too is the speed with which you can empty the bucket and add more concepts as needed.

People also differ in how fast they can clear the bucket to bring in more water. That's the speed of processing link.

This is especially important for things like bringing expertise to bear on a given problem. The trick is to access all that knowledge and piece it together and still be able to think about it all at once.

That's what speed and capacity do.
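If it helps, the analogy reduces to a one-line toy model (made-up units, purely illustrative):

```python
# How much info gets handled in a fixed window depends on both the
# bucket's size (working memory capacity) and how fast it can be
# emptied and refilled (processing speed).
def throughput(capacity_chunks, refills_per_minute, minutes=1.0):
    return capacity_chunks * refills_per_minute * minutes

# Same capacity, double the speed: twice the information considered.
print(throughput(4, 10), throughput(4, 20))
```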

I felt I was getting too complicated and long in my answers above, so I limited my definition of g to just mental speed. I have said before and still believe that speed and working memory capacity are inherently related, and the essence of g.

***

Dr. K.

I think it's absurd to libel a whole discipline and all the scientists therein by claiming all the research is incompetent. Try getting crap published in an A journal on cognition or intelligence-- it just doesn't happen. Some of the stuff I cited above was also published in Nature, which I guess publishes crap too.

You cannot dismiss this entire body of research as being shoddily conducted / incompetent. It just ain't so; it's an ignorant position. Doesn't mean I'm right about g, but I hope your blanket dismissal doesn't sway others from checking out the literature and critically evaluating it for themselves.

I'll stipulate that we can completely ignore The Bell Curve; pretend like it was never published. All the conclusions in it have been replicated anyway by other researchers in peer-reviewed A journals whose standards of science are just as rigorous as anything people in your field publish in.
 
I'm not making this up! How can people be so skeptical about other things yet so unwilling to look at the science on this thing:

After taking into account gender and physical stature, brain size as determined by magnetic resonance imaging is moderately correlated with IQ (about 0.4 on a scale of 0 to 1). So is the speed of nerve conduction. The brains of bright people also use less energy during problem solving than do those of their less able peers. And various qualities of brain waves correlate strongly (about 0.5 to 0.7) with IQ: the brain waves of individuals with higher IQs, for example, respond more promptly and consistently to simple sensory stimuli such as audible clicks. These observations have led some investigators to posit that differences in g result from differences in the speed and efficiency of neural processing. If this theory is true, environmental conditions could influence g by modifying brain physiology in some manner.
***

Reaction times do not reflect differences in motivation or strategy or the tendency of some individuals to rush through tests and daily tasks--that penchant is a personality trait. They actually seem to measure the speed with which the brain apprehends, integrates and evaluates information.

***

The lower-IQ woman is four times more likely to bear illegitimate children than the higher-IQ woman; among mothers, she is eight times more likely to become a chronic welfare recipient.

People somewhat below average are 88 times more likely to drop out of high school, seven times more likely to be jailed and five times more likely as adults to live in poverty than people of somewhat above-average IQ.


Below-average individuals are 50 percent more likely to be divorced than those in the above-average category.

These odds diverge even more sharply for people with bigger gaps in IQ, and the mechanisms by which IQ creates this divergence are not yet clearly understood.


But no other single trait or circumstance yet studied is so deeply implicated in the nexus of bad social outcomes--poverty, welfare, illegitimacy and educational failure--that entraps many low-IQ individuals and families.

Even the effects of family background pale in comparison with the influence of IQ. As shown most recently by Charles Murray of the American Enterprise Institute in Washington, D.C., the divergence in many outcomes associated with IQ level is almost as wide among siblings from the same household as it is for strangers of comparable IQ levels. And siblings differ a lot in IQ--on average, by 12 points, compared with 17 for random strangers.

Look at that, Scientific American cites Murray, but I don't see any mention of Gould...
 
