I'm not talking about betting on a null hypothesis. Not finding a correlation means it either doesn't exist, or you didn't have the power to detect it.
There's a potential ambiguity in the word "power" here. An experiment can lack "power," in the strict statistical sense, if there is a sufficiently high probability of making a type II error. But the probability of making a type II error can really only be assessed against a stated null hypothesis -- and there's no guarantee that the null hypothesis is correct.
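To make the point concrete: power in the strict sense is only defined relative to a specific alternative effect size, which the experimenter has to assume. A minimal sketch (the z-test framing and the particular effect sizes here are illustrative assumptions, not anything from the discussion above):

```python
from statistics import NormalDist

def power_two_sided_z(effect_size, n, alpha=0.05):
    """Power of a two-sided one-sample z-test against a *named*
    standardized effect size. Without choosing an alternative,
    "power" has no value at all."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    shift = effect_size * n ** 0.5
    # P(reject H0 | true standardized effect = effect_size)
    return nd.cdf(shift - z_alpha) + nd.cdf(-shift - z_alpha)

print(power_two_sided_z(0.5, 30))  # ~0.78 against d = 0.5
print(power_two_sided_z(0.1, 30))  # ~0.08 against d = 0.1
```

The same experiment is "well-powered" or "underpowered" depending entirely on which alternative you assume -- and neither assumption is guaranteed to describe reality.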
More accurately, any statistical hypothesis test is a comparison between the null hypothesis and the "experimental hypothesis," to see which one better explains the data. In particular, we reject the null hypothesis if the experimental hypothesis better explains the data. But if neither the null hypothesis nor the experimental hypothesis actually explains the data particularly well -- for example, because both make the wrong sort of ontological assumptions -- then you can easily end up in a situation where,
ontologically, the experimental hypothesis is more correct, while
observationally the null hypothesis appears more correct. And vice versa, of course.
For example, if I assume that A causes B (as my experimental hypothesis), I would predict a correlation between A and B. And that's fine
as a working hypothesis but not as a conclusion. Similarly, my null hypothesis would be that there is no relationship (i.e., no correlation) between A and B.
However, suppose the actual reality of it is that A does indeed cause B, but that a third factor, C, which I am unaware of and unable to control, actually causes not-B. And suppose further that,
in the experimental setup I'm using, the process I use to ensure large amounts of A also ensures large amounts of C. This will cause the hoped-for correlation between A and B to systematically vanish until I can correct for this bias. But ontologically, it's still the case that A causes B. I thus have a situation where I can replicate the experiment as much as I like, but I am still drawing an incorrect
causal conclusion. Lack of correlation does not, in this case, prove lack of causation. And it will continue to be replicable until and unless someone identifies the C-factor and finds a way to control for it. And it's not a question of "power" in the narrow sense, but of experimental design.
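This cancellation is easy to demonstrate in simulation. A sketch under purely illustrative assumptions (Gaussian variables, C tracking A almost perfectly, and C's effect exactly offsetting A's):

```python
import random

random.seed(0)
n = 10_000

# "Manipulating" A also drives up the unknown confounder C.
A = [random.gauss(0, 1) for _ in range(n)]
C = [a + random.gauss(0, 0.1) for a in A]   # C tracks A almost perfectly
# True structure: A raises B, C lowers B by the same amount.
B = [a - c + random.gauss(0, 1) for a, c in zip(A, C)]

def corr(x, y):
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(corr(A, B))  # near zero, even though A genuinely causes B
```

Rerunning with any seed gives the same near-zero correlation: the null result replicates indefinitely, and no amount of extra sample size ("power") fixes it, because the problem is the design, not the noise.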
I mean once you have an established correlation...
Well, this is exactly the opposite case. Suppose that A and B are not causally linked, but my process of manipulating A implicitly controls C, which does have a causal link to B. In this case, I can establish (and replicate) as strong a correlation as one likes, but the causal link one infers is bogus.
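The mirror-image simulation, under the same illustrative assumptions (Gaussian variables, C dragged along by the manipulation of A, and only C actually affecting B):

```python
import random

random.seed(1)
n = 10_000

A = [random.gauss(0, 1) for _ in range(n)]
C = [a + random.gauss(0, 0.1) for a in A]   # manipulating A drags C along
B = [c + random.gauss(0, 1) for c in C]     # only C causes B; A never enters

def corr(x, y):
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(corr(A, B))  # strong, replicable -- and causally bogus
```

The A-B correlation comes out large on every run, yet the data-generating code never uses A to produce B at all.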
Now, of course, you can argue that what's really present is an indirect causal link between A and B-- what's there is a causal link between A and C, and between C and B. But look at what you've done -- you've 'reified' a direct causal link, when the underlying reality is two related links.
And the real problem is that the postulated underlying causal links between A and C (or C and B) are themselves only postulated. What if the actual causal structure were really A has an effect on D which has an effect on E which has an effect on C, which has an effect on .... B?
You have 'reified' a single underlying cause and causal link between A and B, a serious ontological error given the actual situation of an underlying causal web. And, of course, since in IQ studies we can't actually 'manipulate' the underlying independent variables, the possibility of such spurious reification increases manyfold.
That's the error IQ theologians tend to make. How can you demonstrate that the proposed link between A and B is genuine? Factor analysis will tell us only that A and B covary, which we knew. It will further tell us that we are capable of
describing the relationship between A and B in terms of a single underlying parameter. It will not tell us whether or not that underlying parameter actually exists or corresponds to anything other than a convenient mathematical simplification.
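In fact, for two standardized variables a one-factor model can always be fit perfectly, which is why the fit carries no evidential weight. A small worked sketch (the correlation value 0.42 is purely hypothetical):

```python
# For two standardized variables with correlation r, the one-factor
# model x_i = l_i * g + e_i reproduces r exactly with loadings
# l1 = l2 = sqrt(r).  The perfect fit is guaranteed by algebra, so it
# says nothing about whether the factor g actually exists.
def one_factor_loadings(r):
    l = r ** 0.5
    return l, l

r = 0.42  # any observed correlation (hypothetical value)
l1, l2 = one_factor_loadings(r)
assert abs(l1 * l2 - r) < 1e-12  # model reproduces r perfectly
```

Since the single-factor description succeeds no matter what produced the covariance -- one cause, two offsetting causes, or a whole causal web -- its success cannot distinguish between those ontologies.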