Dancing David
And then...
Do they argue that their theory can prevent black holes from forming, ever?
If so I can falsify that immediately. Roughly, black holes form when the mass/energy M in a region of size R satisfies GM > R (G is Newton's constant, in units where c = 1). Now imagine a thin spherical shell of light collapsing on a point at the center of the shell. At some time the radius of the shell will satisfy the above relation (with M the total energy in the shell). Because of spherical symmetry and causality, the spacetime at a point inside the shell is not affected by the shell at all until the light actually reaches that point. It is impossible for anything inside the shell to know about its impending doom.
Therefore nothing can act to slow the shell down, and there cannot be any dynamics in the theory which prevent the shell from crossing R. Therefore a black hole horizon forms before the shell gets there, and a little later a singularity forms at the center.
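As a rough numeric illustration of that criterion (a sketch only; restoring factors of c, the horizon condition becomes R < R_s = 2GM/c^2), the critical radius for a solar mass of energy works out to a few kilometres:

```python
import math

# Horizon criterion GM > R from the argument above, with c restored:
# a horizon forms once the shell radius drops below R_s = 2GM/c^2.
# Constants are the standard values; the shell example itself is schematic.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s

def schwarzschild_radius(M):
    """Radius below which mass/energy M must be confined to form a horizon."""
    return 2.0 * G * M / c**2

M_sun = 1.989e30   # kg
print(schwarzschild_radius(M_sun))   # ~2.95e3 m, i.e. about 3 km
```

Nothing inside the shell can react before the light arrives, so by the time the shell reaches this radius the horizon is already inevitable.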
This is a useful argument, because it falsifies a number of crackpot theories about black holes. The only way to avoid it is if either the theory is acausal and non-local or if it is non-linear in a way completely different from general relativity (so that black holes don't exist as solutions at all).
It is immediately clear that in these modified circumstances a(t) cannot reach zero; the spacetime singularity is averted and the ball bounces at a minimum value a_min > 0 of the function a(t).
The paper looks at the gravitational collapse of a dust ball and states that when an ambient C-field is added to the solution of B. Datt (1938) then
a(t) is a time-varying scale factor.
If it is impossible for anything inside the shell to know about its impending doom, then can't they also not perform the calculations that would say whether there is a black hole or not? Or at least they can't know for certain, never mind abstract calculations.
Does that mean it would be impossible for us to know if the entire universe is a black hole?
Or, is Sol's description special because he is speaking about a sphere of collapsing light, not dust like Narlikar and Burbidge are talking about?
Well, I think that's impossible. I'll have a look when I get a chance.
The paper looks at the gravitational collapse of a dust ball...
For the actual fitting, we consider the WMAP three-year data release (Spergel et al. 2006). The data for the mean value of the TT power spectrum have been binned into 39 bins in multipole space. We find that the earlier fit (Narlikar et al. 2003) of the model is worsened when we consider the new data, giving χ² = 129.6 for 33 degrees of freedom. However, we should note that while the new data set (WMAP three-year) has generally increased its accuracy compared with the WMAP one-year observations for l ≤ 700, the observations for higher l do not seem to agree. This is clear from Figure 1, where we have shown these two observations simultaneously. If we exclude the last three points from the fit, we can have a satisfactory fit giving χ² = 83.6 for the best-fitting parameters A1 = 890.439 ± 26.270, A2 = 2402543.93 ± 3110688.86, A3 = 0.123 ± 0.033, α2 = 0.010 ± 0.0001, α3 = 0.004 ± 0.000004 and γ = 3.645 ± 0.206. We shall see in the following that the standard cosmology also supplies a similar fit to the data. It should be noted that the above-mentioned parameters in the QSSC can be related to the physical dimensions of the sources of inhomogeneities along the lines of Narlikar et al. (2003) and are within the broad range of values expected from the physics of the processes.
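For context, a quick reduced-chi-squared check on the numbers quoted there (a good fit has χ² per degree of freedom near 1; the 30 degrees of freedom after the cut is my own assumption, 39 bins minus 3 dropped points minus 6 parameters):

```python
# Reduced chi-squared for the two fits quoted in the paper excerpt above.
# chi^2/dof near 1 indicates a good fit; values well above 1 indicate a poor one.
chi2_full, dof_full = 129.6, 33   # all 39 binned points (as quoted)
chi2_cut,  dof_cut  = 83.6,  30   # last three points dropped (dof assumed: 39-3-6)

print(chi2_full / dof_full)       # ~3.93: a poor fit
print(chi2_cut / dof_cut)         # ~2.79: still well above 1
```

Even the "satisfactory" fit with the three points excluded sits well above a reduced χ² of 1.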
The plot above shows this fit: the ΛCDM model in green fits all the data very well, while the QSSC model in orange fits rather poorly. There is a difference in χ² of 516.3 between the two models, which both have 6 free parameters. Narlikar et al. chose the CMB angular power spectrum as the one and only plot in their paper, but their model does not fit the WMAP three year data nor does it fit the CBI and ACBAR data that were already published. It is very clear that the QSSC CMB angular power spectrum model proposed by Narlikar et al. does not fit the CMB data.
A good experimental test of a cosmological theory is whether it can reproduce the cosmic microwave background (CMB) thermal anisotropy.
The latest paper on the Hoyle-Narlikar theory does include a calculation of the power spectrum of the CMB thermal anisotropy and a plot of the fit to the WMAP three-year data. The authors, though, drop three data points.
So I went looking further and found the following critique of the Steady State and Quasi-SS models - scroll to the bottom to see that the fit should have included all WMAP data points (and other surveys). The relevant statement is
Another nail in the coffin of the Hoyle-Narlikar theory.
Seems like apples and apples to me.
I've discussed this before on this forum. The WMAP data changed between years 1 and 3 due to an improved method of analysis. The discrepancy between points that results is because WMAP3 is better, in that the results are processed in a more correct manner. Everyone in the community is aware of this, aware of the change, and how it came about. It was flat out wrong to drop the points.

Good ole Ed Wright; he has apparently had an axe to grind with Hoyle, Narlikar, Arp and the Burbidges for quite some time.
I wish he would have spoken a bit regarding the supposed WMAP errors in the high L portions of the power spectrum that Narlikar et.al. claim. You would think he would have mentioned their claim, and refuted it.
Well, OK - I said the theory needed to be local and causal for my argument to go through. This theory is neither, nor is it self-consistent.
The C field they add is what's called a ghost - it has a negative kinetic term. Imagine for a moment a massive particle, but with the sign of the kinetic energy reversed. Such a particle obeys a conservation law -(1/2)mv^2 + V(x) = E, where E is the total energy and V(x) is the potential energy. So the particle can increase its velocity while increasing its potential energy. For example, if you start off near a star at rest (so with negative total energy), rather than falling into the star, you will accelerate away from it. So there's a kind of repulsive gravity, and that's what they're relying on to prevent black holes from forming.
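A minimal numeric sketch of that conservation law (toy units and illustrative values, mine rather than anything from the paper): solving -(1/2)mv^2 + V(r) = E for the speed, with a star's potential V(r) = -GMm/r, shows the ghost particle speeding up as it climbs out of the potential well instead of slowing down:

```python
import math

# Ghost-particle energy relation from the post above:
# E = -(1/2) m v^2 + V(r), with V(r) = -G M m / r for a star.
# Solving for speed: v = sqrt(2 * (V(r) - E) / m).
# All numbers are toy units chosen for illustration.
G, M, m = 1.0, 1.0, 1.0

def V(r):
    return -G * M * m / r    # Newtonian potential of the star

r0 = 1.0                     # start at rest near the star...
E = V(r0)                    # ...so total energy is negative

for r in [1.0, 2.0, 5.0, 10.0, 100.0]:
    v = math.sqrt(2.0 * (V(r) - E) / m)
    print(f"r = {r:6.1f}  speed = {v:.3f}")   # speed grows with distance
```

The speed rises monotonically from zero as r increases: the particle runs away from the star, which is the "repulsive gravity" being relied on.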
There is only one small little problem with that. Because the kinetic energy is negative, the particles propagate along spacelike geodesics - that is, they move faster than light. In a relativistic theory that means they move back in time, thus destroying the sequence of cause and effect. That's bad.
Moreover, we do not live in a world described by classical physics and we must worry about QM. Consider what would happen if two such particles appeared out of the vacuum. Normally, the positive kinetic energy means there is a negative potential which pulls them back together. But negative kinetic energy means they repel each other, leading to a runaway solution.
So this model is dead from the very start.
In real cosmology there is a set of equations, derived from fundamental laws of physics, from which one can compute the CMB spectrum. It's remarkable - amazing, actually - that such a thing is possible, and it's a very powerful test of the theory. Here, the fundamental laws are inconsistent and there is no derivation of anything - just an ad hoc function with no relation to the underlying theory.
Consider a Hoyle-Narlikar universe that just consists of one NBH that is big enough to have matter being created around it by MCE (there will have to be a seeding boson to start off the MCE). Some of the matter will fall into the NBH. The NBH will become more massive. This will increase the number of MCE. The rate of matter being created will increase. The rate of matter falling into the NBH will increase. The NBH will become more massive even faster. We now have a feedback loop in which the NBH keeps on getting more massive and the rate of matter creation keeps on increasing. There is no limit that I can see for this. Therefore a real universe that contains even one NBH will eventually be dominated by that NBH. Add in an infinite amount of time (no beginning to the universe) then the NBH will have an infinite mass and there will be an infinite rate of matter creation.
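The feedback loop above can be sketched with a toy growth law (my own assumption, not anything from the theory): if the matter-creation rate scales with the NBH mass, so that dM/dt = kM, the mass grows exponentially and diverges over any unbounded time span:

```python
# Toy model of the NBH feedback loop: more mass -> more MCEs -> more infall.
# Assumed growth law dM/dt = k*M, integrated with a crude Euler step.
# k, dt, and the step count are arbitrary illustrative values.
k, dt = 0.1, 1.0
M = 1.0
for step in range(50):
    M += k * M * dt          # creation rate proportional to current mass

print(f"mass after 50 steps: {M:.1f}")   # ~117x the starting mass
```

With no beginning to the universe there is no finite number of steps to stop at, so nothing bounds the mass.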
Are you saying that in QSSC stellar evolution proceeds differently than in standard astrophysics? That solar/stellar models are all wrong?
If so, then:
* why are there, apparently, no stars older than ~13 billion years?
* where are the stars which do not conform to standard astrophysical theory?
* where are the QSSC stellar evolution models?
* what, in QSSC, is the IMF?
* where are all the end products of these earlier cycles, the (presumably) great numbers of white dwarfs for example?
It seems to me that the portion of the C-field term that multiplies the [latex] g_{ik} [/latex] part (the metric portion?) could be associated with the cosmological constant part of the equation, like so:
[latex] R_{ik} - \frac{1}{2} g_{ik} R + \lambda g_{ik} - \frac{f}{4} g_{ik} C^l C_l = 8 \pi G[T_{ik} - f C_i C_k] [/latex]
In this way, it seems to me that the C-field is acting as a modifier to the expansion being driven by the cosmological constant.
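Reading the rearranged equation that way, the trace part of the C-field term and the cosmological term group into a single effective coefficient of [latex] g_{ik} [/latex] (my rearrangement, taken at face value; note it is spacetime-dependent through the C-field rather than a true constant):

[latex] \lambda_{\mathrm{eff}} = \lambda - \frac{f}{4} C^l C_l [/latex]

so for f > 0 a nonzero C-field shifts the effective cosmological constant downward.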
So my question #1: Is that so different from the standard cosmological models that rely on the "classic Einstein" cosmological constant?
Question #2: Does having this term "hurt" the theory, or create showstoppers? (Right now, I am asking regardless of whether it could physically occur).
They also say that they use a [latex] \lambda [/latex] that is negative, which I think is opposite to the standard sign convention for the cosmological constant.
Question #3: Will this also do strange things, like forcing massive particles to travel on forbidden geodesics?
Can anyone point me to a source that would describe the physical modelling of the CMB anisotropy in the standard model? I am having a hard time finding anything.
DeiRenDopa said: Are you saying that in QSSC stellar evolution proceeds differently than in standard astrophysics? That solar/stellar models are all wrong?

Not at all. What I am saying is that their estimates are based upon stellar evolution taking place in a single ~13 Gyr time period since the one and only Big Bang. I think that QSSC assumes a number of "dark cinders" from previous cycles. The estimates in that paper don't include these hidden gems.

If so, then:
* why are there, apparently, no stars older than ~13 billion years?

I am not sure; I will research QSSC some more.

* where are the stars which do not conform to standard astrophysical theory?
* where are the QSSC stellar evolution models?
* what, in QSSC, is the IMF?

There may be no answers to these, because QSSC may not rely on differences from the standard positions on these subjects.

* where are all the end products of these earlier cycles, the (presumably) great numbers of white dwarfs for example?

They are mostly in the galactic halos, speeding up radial velocity in the galactic disk, and causing the 90% of MACHO events that the Big Bang time frames for stellar evolution cannot account for.
I think my lunker threw the hook!