Hoyle-Narlikar Theory

Do they argue that their theory can prevent black holes from forming, ever?

If so I can falsify that immediately. Roughly, black holes form when the mass/energy M in a region of size R satisfies 2GM/c^2 > R (G is Newton's constant, c the speed of light). Now imagine a thin spherical shell of light collapsing onto the point at its center. At some time the radius of the shell will satisfy the above relation (M is the total energy in the shell). Because of spherical symmetry and causality, the spacetime at a point inside the shell is not affected by it at all until the light actually reaches that point. It is impossible for anything inside the shell to know about its impending doom.

Therefore nothing can act to slow the shell down, and there cannot be any dynamics in the theory which prevent the shell from shrinking past that critical radius. Therefore a black hole horizon forms before the shell gets there, and a little later a singularity forms at the center.
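To put a number on that criterion, here is my own sketch (not from any post above), with c restored and the standard factor of 2 from the Schwarzschild radius:

```python
# Sketch of the horizon criterion: a horizon must form once a mass/energy M
# is confined within its Schwarzschild radius, i.e. R < 2*G*M/c^2.

G = 6.674e-11        # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # one solar mass, kg

def schwarzschild_radius(mass_kg):
    """Radius below which a horizon must form around mass_kg."""
    return 2.0 * G * mass_kg / c**2

def forms_horizon(mass_kg, radius_m):
    """True once the collapsing shell is inside its Schwarzschild radius."""
    return radius_m < schwarzschild_radius(mass_kg)

r_s = schwarzschild_radius(M_sun)    # about 2.95 km for one solar mass
```

A shell carrying one solar mass of energy must form a horizon once it shrinks inside roughly 3 km, and by the causality argument above nothing inside can act to stop it.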

This is a useful argument, because it falsifies a number of crackpot theories about black holes. The only way to avoid it is if either the theory is acausal and non-local or if it is non-linear in a way completely different from general relativity (so that black holes don't exist as solutions at all).
 
Add in an infinite amount of time (no beginning to the universe) and the NBH will have infinite mass and there will be an infinite rate of matter creation.
...
What happens then is anyone's guess. But it would certainly be noticeable to the inhabitants of the universe (i.e. us). My guess is that rest of the universe gets sucked into the NBH.

IMHO, the existence of black holes causes problems for any steady-state (quasi or not) cosmological theory. If a black hole exists for an infinite amount of time then it will absorb an infinite amount of matter. Its gravitational force will then be infinite. What happens to the rest of the universe? I guess you could get around it by assuming that the universe is infinite in extent and there is room for an infinite number of infinitely massive black holes.
 
Do they argue that their theory can prevent black holes from forming, ever?

If so I can falsify that immediately. Roughly, black holes form when the mass/energy M in a region of size R satisfies 2GM/c^2 > R (G is Newton's constant, c the speed of light). Now imagine a thin spherical shell of light collapsing onto the point at its center. At some time the radius of the shell will satisfy the above relation (M is the total energy in the shell). Because of spherical symmetry and causality, the spacetime at a point inside the shell is not affected by it at all until the light actually reaches that point. It is impossible for anything inside the shell to know about its impending doom.

Therefore nothing can act to slow the shell down, and there cannot be any dynamics in the theory which prevent the shell from shrinking past that critical radius. Therefore a black hole horizon forms before the shell gets there, and a little later a singularity forms at the center.

This is a useful argument, because it falsifies a number of crackpot theories about black holes. The only way to avoid it is if either the theory is acausal and non-local or if it is non-linear in a way completely different from general relativity (so that black holes don't exist as solutions at all).


The paper looks at the gravitational collapse of a dust ball and states that when an ambient C-field is added to the solution of B. Datt (1938) then
It is immediately clear that in these modified circumstances a(t) cannot reach zero, the spacetime singularity is averted and the ball bounces at a minimum value amin > 0, of the function a(t).
a(t) is a time-varying scale factor.
 
Because of spherical symmetry and causality, the spacetime at a point inside the shell is not affected by it at all until the light actually reaches that point. It is impossible for anything inside the shell to know about its impending doom.

.................

Therefore a black hole horizon forms before the shell gets there, and a little later a singularity forms at the center.

Arrghh, I thought that black holes were relatively simple, but I am having trouble reconciling the two statements of Sol's, shown above.

If it is impossible for anything inside the shell to know about its impending doom, then they cannot perform the calculations that would say whether there is a black hole or not? Or at least they can't know for certain, never mind abstract calculations.

Does that mean it would be impossible for us to know if the entire universe is a black hole?

Or, is Sol's description special because he is speaking about a sphere of collapsing light, not dust like Narlikar and Burbidge are talking about?
 
The paper looks at the gravitational collapse of a dust ball and states that when an ambient C-field is added to the solution of B. Datt (1938) then
a(t) is a time-varying scale factor.

Well, I think that's impossible. I'll have a look when I get a chance.

If it is impossible for anything inside the shell to know about its impending doom, then they cannot perform the calculations that would say whether there is a black hole or not? Or at least they can't know for certain, never mind abstract calculations.

They cannot know - that's correct.

Does that mean it would be impossible for us to know if the entire universe is a black hole?

It's impossible for us to know if we are inside a giant black hole, yes, that's correct. Actually in a sense a universe that crunches is like that, and we cannot and do not know whether the universe will crunch in the future.

Or, is Sol's description special because he is speaking about a sphere of collapsing light, not dust like Narlikar and Burbidge are talking about?

No, it works the same way for dust.
 
Well, I think that's impossible. I'll have a look when I get a chance.

Sol,

I would value your opinion regarding their solution including the C-field.

It seems strange that they just "add something to the right hand side of the equation", and bingo, black holes disappear.

However, they (and you) know a lot more about GR than I do.

I guess that it is similar to the cosmological constant, just a different type of field?
 
The paper looks at the gravitational collapse of a dust ball...

Well, OK - I said the theory needed to be local and causal for my argument to go through. This theory is neither, nor is it self-consistent.

The C field they add is what's called a ghost - it has a negative kinetic term. Imagine for a moment a massive particle, but with the sign of the kinetic energy reversed. Such a particle obeys a conservation law -(1/2)mv^2 + V(x) = E, where E is the energy and V(x) is the potential energy. So the particle can increase its velocity while increasing its potential energy. For example, if you start off near a star at rest (so with negative total energy), rather than falling into the star, you will accelerate away from it. So there's a kind of repulsive gravity, and that's what they're relying on to prevent black holes from forming.
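That runaway can be checked numerically. This is my own sketch (units GM = m = 1, particle released at rest at r = 1), not anything from the paper:

```python
# A particle with reversed kinetic energy, -(1/2)*m*v^2 + V(r) = E, near a
# star with V(r) = -G*M*m/r. Differentiating the conserved E gives
# a = +G*M/r^2: the sign flip makes gravity repel it. Units: G*M = m = 1.

def integrate(r0=1.0, steps=200000, dt=1e-4):
    """Semi-implicit Euler integration of the ghost particle's motion."""
    r, v = r0, 0.0                  # released at rest, so E = V(r0) = -1
    for _ in range(steps):
        a = 1.0 / r**2              # repulsive: sign flipped vs. ordinary gravity
        v += a * dt
        r += v * dt
    return r, v

r, v = integrate()
# The particle accelerates *away* from the star, its speed tracking
# sqrt(2*(1 - 1/r)) as required by the flipped conservation law.
```

In these units the speed climbs toward sqrt(2) as the particle escapes to large r, the opposite of a normal particle, which would simply fall in.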

There is only one small problem with that. Because the kinetic energy is negative, the particles propagate along spacelike geodesics - that is, they move faster than light. In a relativistic theory that means they move back in time, thus destroying the sequence of cause and effect. That's bad.

Moreover, we do not live in a world described by classical physics and we must worry about QM. Consider what would happen if two such particles appeared out of the vacuum. Normally, the positive kinetic energy means there is a negative potential which pulls them back together. But negative kinetic energy means they repel each other, leading to a runaway solution.

So this model is dead from the very start.
 
A good experimental test of a cosmological theory is whether it can reproduce the cosmic microwave background (CMB) thermal anisotropy.

The latest paper on the Hoyle-Narlikar theory does include a calculation of the power spectrum of the CMB thermal anisotropy and a plot of the fit to the WMAP three-year data. The authors, though, drop three data points:
For the actual fitting, we consider the WMAP three-year data release (Spergel et al. 2006). The data for the mean value of the TT power spectrum have been binned into 39 bins in multipole space. We find that the earlier fit (Narlikar et al. 2003) of the model is worsened when we consider the new data, giving χ² = 129.6 at 33 degrees of freedom. However, we should note that while the new data set (WMAP three-year) has generally increased its accuracy, compared with the WMAP one-year observations, for l ≤ 700, the observations for higher l do not seem to agree. This is clear from Figure 1 where we have shown these two observations simultaneously. If we exclude the last three points from the fit, we can have a satisfactory fit giving χ² = 83.6 for the best-fitting parameters A1 = 890.439 ± 26.270, A2 = 2402543.93 ± 3110688.86, A3 = 0.123 ± 0.033, α2 = 0.010 ± 0.0001, α3 = 0.004 ± 0.000004 and γ = 3.645 ± 0.206. We shall see in the following that the standard cosmology also supplies a similar fit to the data. It should be noted that the above-mentioned parameters in the QSSC can be related to the physical dimensions of the sources of inhomogeneities along the lines of Narlikar et al. (2003) and are within the broad range of values expected from the physics of the processes.


So I went looking further and found the following critique of the Steady State and Quasi-SS models - scroll to the bottom to see that the fit should have included all WMAP data points (and other surveys). The relevant statement is
The plot above shows this fit: the ΛCDM model in green fits all the data very well, while the QSSC model in orange fits rather poorly. There is a difference in χ2 of 516.3 between the two models, which both have 6 free parameters. Narlikar et al. chose the CMB angular power spectrum as the one and only plot in their paper, but their model does not fit the WMAP three year data nor does it fit the CBI and ACBAR data that were already published. It is very clear that the QSSC CMB angular power spectrum model proposed by Narlikar et al. does not fit the CMB data.
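To see how bad those fits are, here is my own back-of-envelope check using the χ² values quoted above (the 30 degrees of freedom for the trimmed fit is my assumption: 36 remaining bins minus 6 parameters):

```python
import math

# Reduced chi-squared should be near 1 for a good fit. The Wilson-Hilferty
# approximation converts a chi^2 value into an equivalent Gaussian z-score:
# z of 0-2 is an acceptable fit, z >> 3 is a bad one.

def reduced_chisq(chisq, dof):
    return chisq / dof

def chisq_zscore(chisq, dof):
    """Wilson-Hilferty: sigma-equivalent of a chi^2 value with dof d.o.f."""
    x = (chisq / dof) ** (1.0 / 3.0)
    mu = 1.0 - 2.0 / (9.0 * dof)
    sigma = math.sqrt(2.0 / (9.0 * dof))
    return (x - mu) / sigma

# All 39 bins, 6 parameters: chi^2 = 129.6 with 33 degrees of freedom.
z_full = chisq_zscore(129.6, 33)     # roughly a 7-sigma bad fit
# With 3 points dropped (my assumed dof = 36 - 6 = 30): chi^2 = 83.6.
z_cut = chisq_zscore(83.6, 30)       # still roughly 5 sigma: also poor
```

Even the trimmed fit they call "satisfactory" has a reduced χ² near 2.8, far from 1.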


Another nail in the coffin of the Hoyle-Narlikar theory.
 
A good experimental test of a cosmological theory is whether it can reproduce the cosmic microwave background (CMB) thermal anisotropy.

The latest paper on the Hoyle-Narlikar theory does include a calculation of the power spectrum of the CMB thermal anisotropy and a plot of the fit to the WMAP three-year data. The authors, though, drop three data points


So I went looking further and found the following critique of the Steady State and Quasi-SS models - scroll to the bottom to see that the fit should have included all WMAP data points (and other surveys). The relevant statement is



Another nail in the coffin of the Hoyle-Narlikar theory.

Good ole Ed Wright; he has apparently had an axe to grind with Hoyle, Narlikar, Arp and the Burbidges for quite some time.

I wish he would have spoken a bit regarding the supposed WMAP errors in the high-l portions of the power spectrum that Narlikar et al. claim. You would think he would have mentioned their claim and refuted it.

I think it's funny how these guys say of the three dropped data points "no justification, totally ad hoc"; but clearly our power spectrum, based upon the CDM model (which relies on some sort of undiscovered cold dark matter of unknown properties or behavior), matches the data quite well.

What is the difference between an ad hoc description of cold dark matter (isn't this non-baryonic dark matter?) in terms of material we don't have a clue about, as opposed to the exclusion of three data points that may have large amounts of experimental error (I need to check on this though)?

Seems like apples and apples to me.
 
 
Seems like apples and apples to me.

You cannot simply throw out data you don't like and then claim your model fits! The error in those points is known, quantified, and taken into account. And your comparison is wrong - the parameters of dark matter are part of the standard fit to the CMB, not anything extra. But that is really the least of the problems with this model.

Check out page 22 of Narlikar, Burbidge, and Vishwakarma. The discussion around equation (46) is really quite comical. You see, eq. (46) is a totally ad hoc fitting function - it is not derived from their model! In other words they are simply positing - by fiat - that there are fluctuations in the CMB, which they model as a pattern of sharp-edged and Gaussian-profile spots. With the six parameters in that ad hoc ansatz, they find a best fit - and it's not very good!

It's totally meaningless. Given any data set at all you can always find a fitting function, but without a model for where that fitting function came from... you've got nothing.
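A quick illustration of that point - entirely my own toy example, unrelated to their model: give a "model" enough free parameters and it will fit pure noise.

```python
import random

# 30 data points of pure Gaussian noise - there is no signal to model.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(30)]

def sum_sq(points, model):
    """Sum of squared residuals between data and a model curve."""
    return sum((y - m) ** 2 for y, m in zip(points, model))

# 1-parameter "model": a single constant, the overall mean.
mean = sum(data) / len(data)
ss1 = sum_sq(data, [mean] * 30)

# 6-parameter "model": one free level per bin of 5 consecutive points.
fit6 = []
for i in range(0, 30, 5):
    bin_mean = sum(data[i:i + 5]) / 5
    fit6 += [bin_mean] * 5
ss6 = sum_sq(data, fit6)

# ss6 < ss1 always: the extra parameters simply absorb noise. A better
# "fit" tells you nothing when the fitting function has no theory behind it.
```

The residuals shrink as parameters are added no matter what the data are, which is exactly why a six-parameter ansatz fitting (or nearly fitting) the CMB spectrum proves nothing by itself.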

In real cosmology there is a set of equations, derived from fundamental laws of physics, from which one can compute the CMB spectrum. It's remarkable - amazing, actually - that such a thing is possible, and it's a very powerful test of the theory. Here, the fundamental laws are inconsistent and there is no derivation of anything - just an ad hoc function with no relation to the underlying theory.

By the way, let me add that the presence of a ghost field which prevents black holes from forming, aside from the fact that it is inconsistent both quantum mechanically and as a classical field theory, will almost certainly affect structure formation dramatically (because it will slow or prevent entirely the gravitational collapse that leads to the formation of stars and galaxies). I see no mention of that in the paper. I could probably estimate the effects in a few hours, but I'm not going to waste my time.

Searching their paper, the following words never appear:

ghost
causality
causal (except once in a different context)
instability (ditto)

They don't mention even once the problems I pointed out, which is a sign of either unforgivable ignorance (since I'm certain these problems have been pointed out to them many times before - they are totally obvious to anyone with any experience in field theory or modified gravity) or extreme intellectual dishonesty.
 
Oh yeah - I forgot to mention unitarity.

In physics, we like it when the probabilities for all the possible results of some process add up to 1. Not more than 1, not less than 1 - precisely 1. If they don't, there's clearly a serious problem.

We also like it when that statement remains true all the time - as we say, "probability is conserved". Sensible theories have that property, which is called unitarity.
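A toy version of what that means in practice (my own two-state example, not from the thread):

```python
import math

# State = complex amplitudes (a, b); total probability = |a|^2 + |b|^2.
# Unitary evolution preserves that total; non-unitary evolution does not.

def total_probability(state):
    return sum(abs(amp) ** 2 for amp in state)

def apply(matrix, state):
    """Multiply a 2x2 matrix into a 2-component state vector."""
    return [sum(m * s for m, s in zip(row, state)) for row in matrix]

theta = 0.3
unitary = [[math.cos(theta), -math.sin(theta)],
           [math.sin(theta),  math.cos(theta)]]    # a rotation is unitary
not_unitary = [[1.1, 0.0], [0.0, 1.0]]             # stretches one amplitude

state = [1 / math.sqrt(2), 1j / math.sqrt(2)]      # total probability 1
p_unitary = total_probability(apply(unitary, state))   # stays exactly 1
p_bad = total_probability(apply(not_unitary, state))   # becomes 1.105
```

In the non-unitary case the "probabilities" of the two outcomes now sum to more than 1, which is nonsense - and that is the sense in which a non-unitary theory is broken.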

This theory is not unitary.

EDIT - and neither "unitary" nor "unitarity" appear in their paper.
 
Good ole Ed Wright; he has apparently had an axe to grind with Hoyle, Narlikar, Arp and the Burbidges for quite some time.

I wish he would have spoken a bit regarding the supposed WMAP errors in the high-l portions of the power spectrum that Narlikar et al. claim. You would think he would have mentioned their claim and refuted it.
I've discussed this before on this forum. The WMAP data changed between years 1 and 3 due to an improved method of analysis. The resulting discrepancy between points exists because WMAP3 is better: the results are processed in a more correct manner. Everyone in the community is aware of this, aware of the change, and of how it came about. It was flat out wrong to drop the points.

The data from ACBAR, CBI and other high angular resolution CMB experiments all agree, and all show that the power spectrum in that paper is a very poor fit to the data.
 
Help me understand?

Well, OK - I said the theory needed to be local and causal for my argument to go through. This theory is neither, nor is it self-consistent.

The C field they add is what's called a ghost - it has a negative kinetic term. Imagine for a moment a massive particle, but with the sign of the kinetic energy reversed. Such a particle obeys a conservation law -(1/2)mv^2 + V(x) = E, where E is the energy and V(x) is the potential energy. So the particle can increase its velocity while increasing its potential energy. For example, if you start off near a star at rest (so with negative total energy), rather than falling into the star, you will accelerate away from it. So there's a kind of repulsive gravity, and that's what they're relying on to prevent black holes from forming.

There is only one small problem with that. Because the kinetic energy is negative, the particles propagate along spacelike geodesics - that is, they move faster than light. In a relativistic theory that means they move back in time, thus destroying the sequence of cause and effect. That's bad.

Moreover, we do not live in a world described by classical physics and we must worry about QM. Consider what would happen if two such particles appeared out of the vacuum. Normally, the positive kinetic energy means there is a negative potential which pulls them back together. But negative kinetic energy means they repel each other, leading to a runaway solution.

So this model is dead from the very start.

Sol,

Once again, I am relying on you to educate me. You talk about problems with the C-field implementation in the Narlikar et. al. paper.

They give an equation as follows:

[latex] R_{ik} - \frac{1}{2} g_{ik} R + \lambda g_{ik} = 8 \pi G[T_{ik} - f (C_i C_k - \frac{1}{4} g_{ik} C^l C_l)] [/latex]

This is their Equation (2). They derive this expression in their appendix, but they have the right-hand side negative in their Equation (A22).

It seems to me that the portion of the C-field term involving [latex] g_{ik} [/latex] (the metric portion?) could be associated with the cosmological constant part of the equation, like so:

[latex] R_{ik} - \frac{1}{2} g_{ik} R + \lambda g_{ik} - 2 \pi G f g_{ik} C^l C_l = 8 \pi G[T_{ik} - f C_i C_k] [/latex]

In this way, it seems to me that the C-field is acting as a modifier to the expansion being driven by the cosmological constant.

So my question #1: Is that so different from the standard cosmological models that rely on the "classic Einstein" cosmological constant?

Obviously, the [latex] C_i C_k [/latex] part left over with the matter-energy term (?) is their matter creation implementation.

Question #2: Does having this term "hurt" the theory, or create showstoppers? (Right now, I am asking regardless of whether it could physically occur).

They also say that they use a [latex] \lambda [/latex] that is negative, which I think is opposite to the standard sign convention for the cosmological constant.

Question #3: Will this also do strange things, like forcing massive particles to travel on forbidden geodesics?

Hey, I understand if you have better things to do than answer ill-conceived and poorly framed questions from a true math/relativity wimp.
 
CMB Anisotropy modelling?

In real cosmology there is a set of equations, derived from fundamental laws of physics, from which one can compute the CMB spectrum. It's remarkable - amazing, actually - that such a thing is possible, and it's a very powerful test of the theory. Here, the fundamental laws are inconsistent and there is no derivation of anything - just an ad hoc function with no relation to the underlying theory.

Can anyone point me to a source that would describe the physical modelling of the CMB anisotropy in the standard model? I am having a hard time finding anything.

As far as QSSC goes, I checked the 2003 Narlikar paper that is referenced in the 2008 Narlikar et al. paper that Sol criticizes regarding the CMB modelling.

The 2003 Narlikar paper seems to try to base some aspects of their CMB predictions on real aspects of their theory, but in the end I think they go right back to the ad hoc solution that Sol (rightly, I deem) criticizes.

I would like to see how the angular spectrum predictions are handled in the standard theory.

Thanks!
 
Consider a Hoyle-Narlikar universe that consists of just one NBH that is big enough to have matter being created around it by MCEs (there will have to be a seeding boson to start off the MCEs). Some of the matter will fall into the NBH. The NBH will become more massive. This will increase the number of MCEs. The rate of matter being created will increase. The rate of matter falling into the NBH will increase. The NBH will become more massive even faster. We now have a feedback loop in which the NBH keeps on getting more massive and the rate of matter creation keeps on increasing. There is no limit that I can see for this. Therefore a real universe that contains even one NBH will eventually be dominated by that NBH. Add in an infinite amount of time (no beginning to the universe) and the NBH will have infinite mass and there will be an infinite rate of matter creation.
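That feedback loop can be written as a toy growth law. This is my own reading of the argument, not anything from the theory: the rate constant k below is a hypothetical creation-times-capture rate, not a QSSC parameter.

```python
import math

# If the matter-creation rate around the NBH scales with its mass M and a
# fixed fraction of the new matter falls in, then dM/dt = k*M with k > 0.

def nbh_mass(t, m0=1.0, k=0.1):
    """Solution of dM/dt = k*M: exponential, unbounded growth."""
    return m0 * math.exp(k * t)

doubling_time = math.log(2) / 0.1   # mass doubles every ln(2)/k
# With no beginning to the universe, the elapsed time is unbounded, so M
# exceeds any finite bound - the runaway described above.
```

The point is only that any positive feedback of this form diverges given infinite past time, whatever the value of k.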

I thought they handled this some way, but I am having trouble finding out how....stay tuned!
 
Are you saying that in QSSC stellar evolution proceeds differently than in standard astrophysics? That solar/stellar models are all wrong?

Not at all. What I am saying is that their estimates are based upon stellar evolution taking place in a single ~13 Gyr time period since the one and only Big Bang. I think that QSSC assumes a number of "dark cinders" from previous cycles. The estimates in that paper don't include these hidden gems.

If so, then:

* why are there, apparently, no stars older than ~13 billion years?

I am not sure, I will research QSSC some more.

* where are the stars which do not conform to standard astrophysical theory?

* where are the QSSC stellar evolution models?

* what, in QSSC, is the IMF?

There may be no answers to these, because QSSC may not rely on differences from the standard positions on these subjects.

* where are all the end products of these earlier cycles, the (presumably) great numbers of white dwarfs for example?

They are mostly in the galactic halos, speeding up radial velocities in the galactic disk, and causing the 90% of MACHO events that the Big Bang time frames for stellar evolution cannot account for.

I think..................I think my lunker threw the hook!
 
It seems to me that the portion of the C-field term involving [latex] g_{ik} [/latex] (the metric portion?) could be associated with the cosmological constant part of the equation, like so:

[latex] R_{ik} - \frac{1}{2} g_{ik} R + \lambda g_{ik} - 2 \pi G f g_{ik} C^l C_l = 8 \pi G[T_{ik} - f C_i C_k] [/latex]

In this way, it seems to me that the C-field is acting as a modifier to the expansion being driven by the cosmological constant.

The C field is not a cosmological constant. Even the term you've moved to the left above does not act like a CC, because its own equation of motion (which I don't think they bothered to write) is such that its value changes with time. The pressure and energy density due to C are not equal, and unlike a CC they will change as the universe expands. It either breaks rotational invariance (bad, and not what they have in mind) or boost invariance (fine, since so does the universe). In the latter case only C_0 will be nonzero.

So my question #1: Is that so different from the standard cosmological models that rely on the "classic Einstein" cosmological constant?

Yes, very.

Question #2: Does having this term "hurt" the theory, or create showstoppers? (Right now, I am asking regardless of whether it could physically occur).

Yes - this is what I was saying above. Those terms arise in the equation when you have a ghost field of the type I described. The terms like C_0*C_0 are the kinetic energy of the C field, but they have chosen a sign such that they are negative.

They also say that they use a [latex] \lambda [/latex] that is negative, which I think is opposite of the standard signage for the cosmological constant.

That's not necessarily a problem - the CC can have either sign in principle. Its measured value is positive, but that interpretation of the data is valid only for a model without this C field.

Question #3: Will this also do strange things, like forcing massive particles to travel on forbidden geodesics?

C field excitations will propagate back in time, yes. Incidentally there are various ways of trying to handle that. It is unclear whether any of them succeed, but they all require adding quite a bit more physics to the model, and none of them result in an equation anything like the one above.

Can anyone point me to a source that would describe the physical modelling of the CMB anisotropy in the standard model? I am having a hard time finding anything.

This is an excellent resource. You might start with "CMB introduction". His Ph.D. thesis (available here) is an almost (from what I recall) complete exposition of the computation.
 
DeiRenDopa said:
Are you saying that in QSSC stellar evolution proceeds differently than in standard astrophysics? That solar/stellar models are all wrong?
Not at all. What I am saying is that their estimates are based upon stellar evolution taking place in a single ~13 Gyr time period since the one and only Big Bang. I think that QSSC assumes a number of "dark cinders" from previous cycles. The estimates in that paper don't include these hidden gems.
That's true, such 'hidden gems' are not included.
If so, then:

* why are there, apparently, no stars older than ~13 billion years?
I am not sure, I will research QSSC some more.
* where are the stars which do not conform to standard astrophysical theory?

* where are the QSSC stellar evolution models?

* what, in QSSC, is the IMF?
There may be no answers to these, because QSSC may not rely on differences from the standard positions on these subjects.
Actually QSSC, at least as it is described in that 2008 preprint, is quite clear: standard astrophysical theory on the formation and evolution of stars is wrong, or at least requires significant modification. This is clear from Section 5.2 (p21 of the arXiv preprint), where the authors explicitly handwave their way through 'a problem' ... that paragraph is well worth reading, slowly; it is really quite hilarious.
* where are all the end products of these earlier cycles, the (presumably) great numbers of white dwarfs for example?
They are mostly in the galactic halos, speeding up radial velocities in the galactic disk, and causing the 90% of MACHO events that the Big Bang time frames for stellar evolution cannot account for.

I think..................I think my lunker threw the hook!
Well, in addition to an apparent absence of these cinders from earlier cycles, there's also a rather striking absence of the gas and dust that would have been dumped into the ISM (interstellar medium) ... all stars have stellar winds, and during the planetary nebula stage (which stars more massive than 0.5 (?) Msol go through) much mass is expelled. Such gas and dust would be very obvious. Interestingly, the 'metals' content of the IGM (inter-galactic medium) in rich clusters is consistent with the estimated star formation history (plus standard stellar evolution) of the galaxies in those clusters, over ~13 billion years.
 
