
Can causality exist without time?

The discussion lost me some time ago... :)

I linked to this thread in the other forum - I don't know whether my erstwhile opponent has seen it yet.
 
I gave a precise mathematical definition of causality earlier in the thread. For these purposes, it's not much different than >. Do you have a different definition in mind? (s. i.)

Are you referring to this? "If you're curious: to determine the state at a point in spacetime it is necessary and sufficient to specify the state at every point in the past lightcone of that point, which in turn implies that one need only specify the state along a spacelike volume slicing the past lightcone."


OK, let's try this: "a causes b" is a very different statement than a < b. For example, a, b, …, n < z can all be true and a, b, …, n can all not cause z. Or only c may cause z. These are not equivalent concepts. Does your definition somehow refute that?

The point is simply this: what is to prevent an infinite chain of causes located at some chain of events with time coordinates all monotonically decreasing to zero, but staying positive? Hence my first post--you seem to have some particular notion of causality in mind, but I've no idea what it is. What I can say is that in GTR there is a mathematically rigorous definition for what it means for a spacetime to have a causal structure, and that the family of solutions commonly used to model the Big Bang do not violate it. (V.)

If an infinite chain of events monotonically approached any point in time, would we not run into problems with the uncertainty principle? I am not knowledgeable enough about quantum theory to be certain, but what little I do know tells me we could not have such an infinite progression.

Why not? All the Planck time tells us is that if physics extrapolates unchanged to the Planck energy, quantum effects on gravity become large. So? (s. i.)
Well, I am out of my league here. Does the Planck time not limit how many events can occur in a given time interval? Does it not quantize time?

Take an eternal universe that lasts from t=-infinity to t=+infinity. Now change coordinates to T=e^t. The description in the new coordinates is just as valid as the original. Nothing happens to causality - this is just a change in how you choose to label events. But T>0. (s. i.)
Your new coordinates do more than you think. If you say that t < 0 does not exist, that implies that a finite amount of time has passed from t = 0 until now. In your changed coordinates, you have allowed an infinite amount of time going back in time. In your T = e^t system, as t approaches 0, T approaches 1, so if there is no t < 0, there is no time T < 1, which leads to the same question. Namely, if there is no t < 0 and no causality, there can never be any t or causality.
I would like to thank you both for indulging me in this discussion. What can be more fascinating than the nature of the universe, including questions about its age?
 
If an infinite chain of events monotonically approached any point in time, would we not run into problems with the uncertainty principle?
As I said, that depends on what you mean by "cause". If "causes" can be quantum states, then no, since those are [wave-]functions of (continuous) time.

I am not knowledgeable enough about quantum theory to be certain, but what little I do know tells me we could not have such an infinite progression.
If by "causes", you mean the sort of things we get from measurements of quantum systems, then it seems that QM alone breaks that sort of causality without the need for cosmological matters. Since QM only determines probability distributions of such measurements (also cf. Bell's theorem), then although prior measurements influence future ones, it's hard to see how they "cause" them.

In fact, because their "influence" is due to their effect on the wavefunction, trying to make sense of measurements causing other measurements seems to lead back to "causes" being the quantum states themselves, just as previously. If you can think of some other way, I'd be very interested in discussing it, but right now I just can't see it.

Your new coordinates do more than you think. If you say that t < 0 does not exist, that implies that a finite amount of time has passed from t = 0 until now.
That's a fair point. To remove such ambiguities, in GTR we can talk about lengths of geodesics rather than time coordinates.
 
Yes, the universe exists within the context of an endless chain of causes. If there was no t < 0, then something happened without any prior event, i.e., without cause, which is not possible. Hence there was a t < 0.
Actually, no. A chain of causes exists within the context of the universe. There is no reason that the universe itself should be tied to the same laws as events within it. At t=0, the laws of physics that we are familiar with cease to apply, and that includes causality.
 
OK, let's try this: "a causes b" is a very different statement than a < b. For example, a, b, …, n < z can all be true and a, b, …, n can all not cause z. Or only c may cause z. These are not equivalent concepts. Does your definition somehow refute that?

Yes. Any spacetime point in the past lightcone of a is causally connected to it. The state at all those points goes into determining the state at a - you cannot leave any of them out, or the state at a will be indeterminate (ok, to be more precise you could specify just a Cauchy surface, but you can always choose it to pass through any of those points).

If an infinite chain of events monotonically approached any point in time, would we not run into problems with the uncertainty principle?

In standard QM time is continuous, so there are always such infinite chains of events. Very roughly, the uncertainty principle tells you that as the times get closer, the energy of the relevant physics gets higher - that's all.

Well, I am out of my league here. Does the Planck time not limit how many events can occur in a given time interval? Does it not quantize time?

No, not as far as we know.

Your new coordinates do more than you think. If you say that t < 0 does not exist, that implies that a finite amount of time has passed from t = 0 until now. In your changed coordinates, you have allowed an infinite amount of time going back in time. In your T = e^t system, as t approaches 0, T approaches 1, so if there is no t < 0, there is no time T < 1, which leads to the same question. Namely, if there is no t < 0 and no causality, there can never be any t or causality.

You really don't read very carefully, do you? I specified that t goes from -infinity to +infinity. Then T goes from 0 to infinity - that was the whole point of the example.

Does that mean a finite amount of time has passed? You can't answer that, not without knowing the metric in at least one of the two coordinate systems. In particular if the t coordinates are flat an infinite time has passed, which provides yet another counterexample to your claim (since there is obviously nothing wrong with causality even though T<0 doesn't exist).
 
Take an eternal universe that lasts from t=-infinity to t=+infinity. Now change coordinates to T=e^t. The description in the new coordinates is just as valid as the original. Nothing happens to causality - this is just a change in how you choose to label events. But T>0.

Oh! I was trying to work up a good absolute zero analogy, but your coordinate mapping is better.

At first blush, I'd pictured the +/-infinite t timeline to have progressively less activity per increment of t as t headed for -infinity because the t increments were getting infinitely short as measured in T. But I'm a little slow until I've had my coffee (and sometimes even after that), and upon further reflection-

As t heads for -infinity, the universe is smaller and hotter, so the particles are closer together and travelling faster and thus interacting more often as measured in T. This suggests (but doesn't require) that with a proper mapping (my math isn't up to it, and maybe it really is as simple as T=e^-t) that there's a constant amount of interaction and "causing" per increment of t.

So - in a meaningful sense, the universe really is infinitely old, because there's been "time" for an infinite amount of interaction among particles? Or does the integral not work that way? Hmmm . . . "infinite amount of interaction between particles" doesn't sound consistent with the rationale for inflation.

A little help?
 
You really don't read very carefully, do you? I specified that t goes from -infinity to +infinity. Then T goes from 0 to infinity - that was the whole point of the example.

I read it quite carefully, thank you. There is no point to your example. If t < 0 does not exist in one coordinate system, then T < 1 does not exist in the other. By admitting -t values in one system you merely create 0 < T < 1 in the other. If you think that, by devising a system that limits T to values > 0 and permits an infinite past time, you have somehow demonstrated that in a system where t < 0 does not exist, causes or time can be infinite, you are mistaken. If for some reason it were useful to express time in T, we would be asking the question, "is there T < 1?" You have not eliminated the question; you have changed its form.
Now try to read my post carefully.
 
As t heads for -infinity, the universe is smaller and hotter, so the particles are closer together and travelling faster and thus interacting more often as measured in T. This suggests (but doesn't require) that with a proper mapping (my math isn't up to it, and maybe it really is as simple as T=e^-t) that there's a constant amount of interaction and "causing" per increment of t.
The Minkowski spacetime in Cartesian coordinates, as simple as it gets:
ds² = dt² - (dx²+dy²+dz²). t,x,y,z in (-inf,+inf)
Vanishing curvature, and hence no matter or radiation or anything interesting at all.
Minkowski spacetime in Invictus time, T = exp(t):
ds² = (dT/T)² - (dx²+dy²+dz²). T in (0,inf).
There is still vanishing curvature everywhere. There's nothing to get 'hot'; it doesn't get 'smaller' in any physical sense--in fact, nothing at all is changed, because it's the same spacetime.
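Vorpal's point that the relabeled metric is the same spacetime can be checked symbolically. A quick sketch of mine (using SymPy; not part of the original post) confirming that t = log(T) turns dt into dT/T, so the two line elements above are term-by-term identical:

```python
import sympy as sp

# Symbolic check (mine, not from the thread): substituting t = log(T) into
# ds^2 = dt^2 - dx^2 - dy^2 - dz^2 turns dt into dT/T, reproducing the
# "Invictus time" form of the Minkowski metric above.
T = sp.symbols('T', positive=True)
t = sp.log(T)          # inverse of T = exp(t)
dt_dT = sp.diff(t, T)  # dt = (dt/dT) dT

assert sp.simplify(dt_dT - 1/T) == 0   # hence dt^2 = (dT/T)^2
print("dt = dT/T confirmed: same flat spacetime, different labels")
```

The curvature is a property of the geometry, not of the labels, so it is unaffected by any such substitution.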
 
Originally Posted by Perpetual Student
If an infinite chain of events monotonically approached any point in time, would we not run into problems with the uncertainty principle?
(Vorpal) As I said, that depends on what you mean by "cause". If "causes" can be quantum states, then no, since those are [wave-]functions of (continuous) time.

I'm not familiar enough with the "quantum world" to understand what that means. Why would waves not be limited by the uncertainty principle? Isn't it true that if we try to pinpoint the location of a photon (which is a wave packet) we lose information about its energy (wavelength)? It would appear that the exact location of a photon is the same as an exact time for the photon, since the speed of the photon is exactly known. So as we approach t = 0 the wavelength of the photon could approach infinity? If so, that would (for me) represent another nail in the coffin of the idea that there is no t < 0. Since otherwise we reach absurd results like infinite energy and wavelength.
 
I read it quite carefully, thank you. There is no point to your example. If t < 0 does not exist in one coordinate system, then T < 1 does not exist in the other. By admitting to -t values in one system you merely create 0 < T < 1 in the other. If you think by devising a system that limits T to values > 0 and permits an infinite past time , you have somehow demonstrated that in a system where t < 0 does not exist, causes or time can be infinite, you are mistaken. If for some reason it were useful to express time in T, we would be asking the question, "is there T < 1?" You have not eliminated the question; you have changed its form.
Now try to read my post carefully.

For god's sake.

Your argument was that if the time coordinate didn't exist for negative values there was a problem with causality. That is obviously false, because T is a perfectly good time coordinate, it runs from 0 to infinity, and it manifestly does not have any problems with causality (because the physics it describes are identical in every way to those of t from -infinity to infinity).

One can of course go the other way as well. If we start with t in the range 0,+infinity, define T = log(t). Then T (a perfectly good time coordinate) runs from -infinity to infinity.

The point is, statements about whether the time coordinate has a finite range are utterly meaningless, because they are not invariant under trivial coordinate transformations. You cannot reason that way, as I have been trying to explain to you for the last five posts - it's wrong, period.
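sol's relabeling argument can be made concrete with a toy sketch (my own; the specific event labels are arbitrary). A strictly monotonic map like T = e^t changes every coordinate value, but the before/after ordering of events, which is all the causal structure cares about here, is untouched:

```python
import math

# Toy illustration (not from the thread): relabel events t -> T = e^t and
# check that the ordering of events is unchanged, even though the new
# labels are all positive (i.e. bounded below by 0).
events = [-30.0, -3.0, -0.5, 0.0, 2.0, 30.0]  # old t labels, unbounded below
relabeled = [math.exp(t) for t in events]     # new T labels, all > 0

assert all(T > 0 for T in relabeled)
# The map is strictly increasing, so "a before b" is preserved exactly:
assert all(a < b for a, b in zip(relabeled, relabeled[1:]))
print("ordering preserved; T labels all positive")
```

Running the map the other way, T = log(t), takes labels bounded below by 0 to labels running over the whole real line, again without reordering anything.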
 
I'm not familiar enough with the "quantum world" to understand what that means. Why would waves not be limited by the uncertainty principle?
They are. Fourier analysis even has a corresponding "Heisenberg uncertainty principle" (more than one, even; in fact a large collection of inequalities sometimes also called "uncertainty principles"). However, it doesn't mean what you appear to think it means. In particular, it doesn't mean that the wave itself is in any sense "fuzzy" or takes ill-defined values, but simply that "position" and "wavelength" are not sharply defined for waves.

And why should they be? If the wave is far from periodic, wavelength ceases to make sense; if the wave is extended in space, asking its exact position is a bit silly--it's more or less everywhere. For the former case, think of the distribution φ(x) = δ(x-x0), the Dirac delta. Position is well-defined, but wavelength is not. For the latter extreme, think of ψ(x) = sin(kx). Wavelength is very well-defined, but position is not.
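The position/wavelength trade-off can also be demonstrated numerically. A sketch of mine (grid sizes and packet widths chosen arbitrarily) showing that for Gaussian wave packets the product of the position spread and the FFT-derived wavenumber spread sits at the Fourier bound of 1/2, whatever the packet's width:

```python
import numpy as np

# Numerical illustration (my own, not from the thread): for Gaussian packets
# psi(x) = exp(-x^2 / (4 sigma^2)), the position spread is sigma and the
# wavenumber spread is 1/(2 sigma), so their product saturates the Fourier
# uncertainty bound Delta_x * Delta_k = 1/2 for every width.
N, L = 4096, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

for sigma in (0.5, 1.0, 2.0):
    psi = np.exp(-x**2 / (4 * sigma**2))
    prob_x = np.abs(psi)**2
    prob_x /= prob_x.sum()
    spread_x = np.sqrt((prob_x * x**2).sum())   # <x> = 0 by symmetry

    phi = np.fft.fftshift(np.fft.fft(psi))
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
    prob_k = np.abs(phi)**2
    prob_k /= prob_k.sum()
    spread_k = np.sqrt((prob_k * k**2).sum())

    print(sigma, spread_x * spread_k)  # ~0.5 in every case
```

Note the wave itself is perfectly definite everywhere; only "position" and "wavelength" as single numbers are blurred, exactly as described above.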

Isn't it true that if we try to pinpoint the location of a photon (which is a wave packet) we lose information about its energy (wavelength)?
I'm not sure if that's physically meaningful, since one cannot detect a photon without absorbing it, and a photon wouldn't have meaningful position eigenstates. But in any case, see above.

So as we approach t = 0 the wavelength of the photon would approach infinity? If so, that would (for me) represent another nail in the coffin of the idea that there is no t < 0. Since otherwise we reach absurd results like infinite energy and wavelength.
ΔEΔt ≥ hbar/2. As t→0, Δt→0, and therefore ΔE diverges. OK. Why is that a problem? Even plain vanilla GTR (or even Newtonian gravity) predicts diverging energy density as t→0 anyway.
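To put rough numbers on that divergence (my own arithmetic, using the CODATA value of ħ):

```python
# Quick numbers (mine, not Vorpal's): Delta_E >= hbar / (2 Delta_t), so the
# energy scale grows without bound as the time window shrinks -- which by
# itself is not a contradiction, just higher-energy physics.
HBAR = 1.054571817e-34  # reduced Planck constant, J*s (CODATA)
for dt in (1e-15, 1e-25, 1e-35, 1e-44):   # down to around the Planck time
    dE = HBAR / (2 * dt)
    print(f"dt = {dt:.0e} s  ->  dE >= {dE:.2e} J")
```

Divergence of an energy bound is exactly what GTR already predicts for the energy density as t→0, so nothing new breaks here.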
 
As t heads for -infinity, the universe is smaller and hotter, so the particles are closer together and travelling faster and thus interacting more often as measured in T. This suggests (but doesn't require) that with a proper mapping (my math isn't up to it, and maybe it really is as simple as T=e^-t) that there's a constant amount of interaction and "causing" per increment of t.

So - in a meaningful sense, the universe really is infinitely old, because there's been "time" for an infinite amount of interaction among particles? Or does the integral not work that way? Hmmm . . . "infinite amount of interaction between particles" doesn't sound consistent with the rationale for inflation.

A little help?

Vorpal gave one example - start with flat space and transform to T. Then you have an empty flat space in funny coordinates. A slightly more interesting example is to take a real cosmology with a big bang singularity, and then do the inverse map T = log(t) on it. If t runs from 0 to infinity, T runs from -infinity to infinity - but of course nothing has actually changed, just our choice of labels for spacetime events.

To answer your question about the amount of real time, there's something called "proper time" in general relativity. That's the time that an observer would actually record on a clock, and it's invariant under coordinate changes like this (since it's a physical quantity). Mathematically, it's defined by the integral over time of the coordinate function multiplying dt in the metric - in Vorpal's T metric, one would integrate dT/T, giving log(T) = log(e^t) = t. In other words, t is the proper time in that case.
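That integral is easy to verify symbolically (a sketch of mine, not part of the original post):

```python
import sympy as sp

# Check (mine, not from the thread): integrating dT/T between T = e^t0 and
# T = e^t1 gives t1 - t0, i.e. the proper time elapsed between two events is
# the same number whether you label them with T or with t.
T, t0, t1 = sp.symbols('T t0 t1', positive=True)
elapsed = sp.integrate(1 / T, (T, sp.exp(t0), sp.exp(t1)))

assert sp.simplify(elapsed - (t1 - t0)) == 0
print("integral of dT/T from e^t0 to e^t1 =", elapsed)
```

This is the invariant quantity one should reason with, rather than the range of any particular coordinate.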
 
Your argument was that if the time coordinate didn't exist for negative values there was a problem with causality. That is obviously false, because T is a perfectly good time coordinate, it runs from 0 to infinity, and it manifestly does not have any problems with causality (because the physics it describes are identical in every way to those of t from -infinity to infinity).

What is it about this that you don't get? If T runs from 0 to infinity, as you have defined T, there would be an infinite past; consequently there is no problem with causality. If there were no T < 1, you create the same problem as there would be if there were no t < 0. You can't make the problem go away by changing coordinates.
 
Vorpal gave one example - start with flat space and transform to T. Then you have an empty flat space in funny coordinates. A slightly more interesting example is to take a real cosmology with a big bang singularity, and then do the inverse map T = log(t) on it.

Yes, I was thinking of the big bang - that was why I had the universe getting denser & hotter as the-time-ordinate-now-known-as-big-T headed for -infinity.

If t runs from 0 to infinity, T runs from -infinity to infinity - but of course nothing has actually changed, just our choice of labels for spacetime events.

Oh, hey, if I had a way to actually change the universe by coming up with a new mapping, the universe probably wouldn't have survived my freshman physics class.

To answer your question about the amount of real time, there's something called "proper time" in general relativity. That's the time that an observer would actually record on a clock, and it's invariant under coordinate changes like this (since it's a physical quantity). Mathematically, it's defined by the integral over time of the coordinate function multiplying dt in the metric - in Vorpal's T metric, one would integrate dT/T = log(T)=log(e^t)=t. In other words, t is the proper time in that case.

At that point, I was considering "time" to have a more casual meaning related to the amount of stuff that happens, and taking "stuff that happens" to mean interactions among particles and photons and whatever else might be interacting.
So - let's say I define my own "special" time interval - the T100, which is the time it takes for there to be a total of 10^100 interactions among the particles & photons in the universe. (Yes, I can come up with a bunch of reasons that this isn't a very usable definition, starting with the argument about whether we're in an infinite universe, then the fact that the definition assumes a meaningful concept of simultaneity across the universe, then - oh, never mind.) Nowadays, a T100 interval takes a few femtoseconds or millennia of 'normal' time.
As we go back closer to the big bang, particles were closer together and moving faster so they interacted more often, and the T100 interval was shorter.
Does the T100 interval go to zero proper time as we approach the big bang? And does it go there quickly enough that there has been an infinite number of T100 intervals since the Big Bang (which stops looking like a bang in this perspective)? Or is T100 too flawed to make any such assertions?
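One way to play with the T100 idea numerically (an entirely assumed toy model on my part: I take the interaction rate to scale like 1/t near the singularity, which the thread does not establish):

```python
import numpy as np

# Toy model (my own assumption, not from the thread): if the interaction rate
# per unit proper time goes like 1/t near t = 0, the number of interactions
# between a cutoff t_min and t = 1 grows like log(1/t_min), diverging as
# t_min -> 0. So whether there have been "infinitely many" T100 intervals
# hinges entirely on how fast the rate diverges.
def interactions(t_min, t_max=1.0, n=100_000):
    t = np.logspace(np.log10(t_min), np.log10(t_max), n)
    rate = 1.0 / t                                   # assumed rate
    # trapezoidal rule on the (non-uniform) log-spaced grid
    return np.sum((rate[:-1] + rate[1:]) / 2 * np.diff(t))

for t_min in (1e-2, 1e-4, 1e-8):
    print(t_min, interactions(t_min))   # ~ log(1/t_min): about 4.6, 9.2, 18.4
```

A rate diverging slower than 1/t would give a finite total count instead, so the question really does turn on the physics near t = 0 rather than on the coordinate labels.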
 
What is it about this that you don't get? If T runs from 0 to infinity, as you have defined T, there would be an infinite past; consequently there is no problem with causality. If there were no T < 1, you create the same problem as there would be if there were no t < 0. You can't make the problem go away by changing coordinates.

Perpetual, I don't think I can explain this any more clearly than I already have. Forget how I defined T. Call it t instead if you prefer. Now we have a universe described by a time coordinate t that runs from 0 to infinity, yet manifestly has no causality problem. That proves that there is no general problem with bounded time intervals - and that's obvious anyway, because the existence of such a boundary is completely coordinate dependent. End of story.

I'm sorry you're having so much difficulty understanding that (or admitting that you were wrong, whichever it is), but I'm not going to repeat it again, so if you still don't understand it you'll have to look elsewhere for help.
 
Yes, I was thinking of the big bang - that was why I had the universe getting denser & hotter as the-time-ordinate-now-known-as-big-T headed for -infinity.
Ah, I see. I took your reply to sol's "eternal universe" to mean that you were adopting his scenario, whereas in fact you were making your own, leading to this misunderstanding (I've also interpreted 'eternal' in the sense of 'infinitely extensible' for at least some geodesics, rather than 'exists at all times', which would be tautologically true for any spacetime). Mea culpa.
 
Perpetual, I don't think I can explain this any more clearly than I already have. Forget how I defined T. Call it t instead if you prefer. Now we have a universe described by a time coordinate t that runs from 0 to infinity, yet manifestly has no causality problem. That proves that there is no general problem with bounded time intervals - and that's obvious anyway, because the existence of such a boundary is completely coordinate dependent. End of story.

I'm sorry you're having so much difficulty understanding that (or admitting that you were wrong, whichever it is), but I'm not going to repeat it again, so if you still don't understand it you'll have to look elsewhere for help.

Indeed, I think that is the crux of the issue with Perpetual S. here. In a clopen time interval where 0 ≤ T0 > ∞ (where one boundary is included in that interval but not the other) we only need to know the current state at that boundary condition T0 = 0, or even just at some time T0 > 0 in an open interval where 0 < T0 > ∞ (unbounded at both ends, or no boundaries are included within that interval), in order to apply a deterministic (or generally causal) view. How it got to that condition at T0 = 0 or T0 > 0 is irrelevant in those considerations. It seems Perpetual S. is arguing that we must know the conditions that resulted in the state at T0 = 0 or T0 > 0, and not just that state itself at T0, in order to apply that general causal view for T1 > T0. Is that your primary argument, Perpetual S.?
 
Indeed, I think that is the crux of the issue with Perpetual S. here. In a clopen time interval where 0 ≤ T0 > ∞ (where one boundary is included in that interval but not the other) ...
You've a very strange notation for intervals (are you sure you didn't mean '<' instead of '>'?), and 'clopen' doesn't mean what you think it means. A half-closed real interval is never clopen under the standard Euclidean topology, since the endpoint at which the interval is closed is not an interior point. In fact, there are no clopen sets in the reals other than the trivial ones--the empty set and the entire real line itself.

... we only need to know the current state at that boundary condition T0 = 0 or even just at sometime T0 > 0 in an open interval where 0 < T0 > ∞ (unbounded at both ends, or no boundaries are included within that interval) in order to apply a deterministic (or generally causal) view.
There's a theorem in GTR that says basically that if for every point, the intersection of the interiors of the past and future light cones is compact and there are no closed timelike curves, then the spacetime admits a Cauchy foliation. That's exactly what you want--a family of surfaces "slicing up" spacetime having the property that the entire spacetime is determined by local conditions on any of the surfaces. That's about as strict a causality as possible, since one can think of those surfaces as "nows", with each "now" completely determining both past and future.

How it got to that condition at T0 = 0 or T0 > 0 is irrelevant in those considerations. It seems Perpetual S. is arguing that we must know the conditions that resulted in the state at T0 = 0 or T0 > 0 and not just that state itself at T0 in order to apply that general causal view for T1 > T0. Is that your primary argument Perpetual S.?
It looks to me to be a lot simpler than that--just a correspondence of underlying logic. The objection of moving the question to talk about extensible geodesics instead has been already addressed.
 
Interesting discussion. Thanks everyone.

I think so far the conclusion is that causality does require time. But strange things happen as T->0.

Allow me to throw another nugget into the discussion. As we currently understand physics, time intervals cannot actually approach zero - we make a fundamental flaw in assuming that time (and also space) is a continuum. Time and space are actually discrete and quantized, just at such a small level that in our macroscopic view of the universe it appears they are a continuum - see Planck length and Planck time for more info on this.

ETA: I see now that Sol has already brought up the issue of Planck time. My bad.
 
You've a very strange notation for intervals (are you sure you didn't mean '<' instead of '>'?), and 'clopen' doesn't mean what you think it means. A half-closed real interval is never clopen under the standard Euclidean topology, since the endpoint at which the interval is closed is not an interior point. In fact, there are no clopen sets in the reals other than the trivial ones--the empty set and the entire real line itself.

Well, I was not using the standard notation for those intervals (my fault and my intent), but just giving the limits for those intervals in an attempt to make it more understandable. You are correct, though, that where I used '>' in defining the limits of T in those intervals should have been '<'. Thanks, and I don't know how I missed that, as I had not started drinking yet (ok, maybe I did not start soon enough). From my understanding, an open interval is one that does not include its endpoints as interior points, but I'm always willing to be wrong. Certainly an interval is dependent on the set of which it is a subset, and the interval [0, ∞) (in the proper notation) in this consideration is a subset of the real numbers, namely the set of non-negative real numbers. However, since the set of positive real numbers is the whole space being considered for T (negative T being before the big bang singularity), would it not be clopen in that regard? Again, I remain willing as always.

There's a theorem in GTR that says basically that if for every point, the intersection of the interiors of the past and future light cones is compact and there are no closed timelike curves, then the spacetime admits a Cauchy foliation. That's exactly what you want--a family of surfaces "slicing up" spacetime having the property that the entire spacetime is determined by local conditions on any of the surfaces. That's about as strict a causality as possible, since one can think of those surfaces as "nows", with each "now" completely determining both past and future.

Oh, I do not doubt that, but you have to put that in terms and notations so that the people who want to be educated can be educated, or at least get down to the problem they are having in the terms or concepts they do not understand. Not everyone is going to take "Cauchy foliation" to mean anything other than what happens to the Captain Crunch when you pour milk on it (which would just be a family of surfaces "slicing up" that milk, with the causal result of it just getting soggy from the local conditions on any of those surfaces). The problem is that the consideration (or our understanding) of "each 'now' completely determining both past and future" breaks down at T = 0 (big bang singularity), or even as T approaches 0. You seem to be making the same argument as Perpetual S., in that T = 0 (big bang singularity) must have some determinate or determining T-1 or else subsequent causality fails, which I do not think I am arguing for. In other words, my Captain Crunch does not get soggy until I pour milk on it, and if we take the cereal in the bowl with milk to be T = 0, it does not matter how they came together for the causal result of it getting soggy sometime after T = 0.
It looks to me to be a lot simpler than that--just a correspondence of underlying logic. The objection of moving the question to talk about extensible geodesics instead has been already addressed.
Funny, I did not see Perpetual S. "moving the question to talk about extensible geodesics instead". It just seems to me that on a discussion forum for an educational foundation, those of us with some knowledge might make a better effort (an effort I often find myself lacking) to put things in terms that might be, well, more educational for those with questions and perhaps not the same education.
 
