Why does time seem to flow only in one direction?

Catastrophe

"Science begets knowledge, opinion ignorance.
This is one aspect of time which is quite simple.

Sometimes entropy is called "the arrow of time". In theory, many processes in physics are reversible, but in practice they are not, because of this factor.

Here is an old, but valuable, example. Take a handful of salt and a handful of sand. Here you have an ordered system: the salt and the sand are quite separate. Now hold your hands over a flat surface and open them. The salt and the sand will fall together and mix.

It is now going to take some effort to separate them. Suppose you can pick up two handfuls without leaving any on the table. You will not find that you have all the salt in one hand and all the sand in the other. Even if you find suitable sieves, it will still be more work to separate them than it was to mix them in the first place. So time flowing in only one direction is really entropy flowing in only one direction.

If you take an empty room (a vacuum) and bring in a pressurised container filled with air, the air will very quickly be distributed evenly throughout the room. It is theoretically possible, though so improbable that it will never happen in practice, for all the air to go back into the container, or even just into one corner.
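Just to put rough numbers on how improbable that un-mixing is, here is a back-of-the-envelope sketch in Python (the grain and molecule counts are made-up, illustrative values):

[CODE]
import math

# Toy numbers (illustrative only): a "handful" of ~100 grains drawn from a
# thoroughly mixed 50/50 pile of salt and sand.
grains_in_handful = 100
p_pure_handful = 0.5 ** grains_in_handful
print(f"Chance a 100-grain handful is pure salt: {p_pure_handful:.1e}")  # ~7.9e-31

# Air in a room: chance that every molecule happens to sit in one half of the
# room at the same instant, assuming ~10^25 molecules.
n_molecules = 1e25
log10_probability = n_molecules * math.log10(0.5)
print(f"log10 of that probability: {log10_probability:.2e}")  # about -3e24
[/CODE]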

Even if you temporarily increase the order of water molecules by putting them in the freezer, you are still increasing entropy overall, because the energy the freezer has to expend dumps more entropy into the surroundings than the freezing removes from the water.
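Here is a rough bookkeeping sketch of that trade-off for freezing a kilogram of water; the freezer's coefficient of performance is an assumed, typical value, so the exact numbers are illustrative only:

[CODE]
# Freeze 1 kg of water at 0 C and dump the heat into a 20 C kitchen.
L_fusion = 334_000.0   # J/kg, latent heat of fusion of water
T_cold = 273.15        # K, temperature of the freezing water
T_room = 293.15        # K, temperature of the kitchen
COP = 2.5              # assumed coefficient of performance of a real freezer

Q_removed = 1.0 * L_fusion        # heat taken out of the water
W_input = Q_removed / COP         # electrical work the freezer consumes
Q_dumped = Q_removed + W_input    # heat rejected into the kitchen

dS_water = -Q_removed / T_cold    # entropy drop of the water (the "ordering")
dS_room = Q_dumped / T_room       # entropy gain of the surroundings
print(f"water: {dS_water:.0f} J/K, room: +{dS_room:.0f} J/K, "
      f"total: +{dS_water + dS_room:.0f} J/K")  # total comes out positive
[/CODE]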

As in the Dorothy Sayers story: "The Second Law of Thermodynamics", explained Wimsey, helpfully, "which holds the Universe in its path, and without which time would run backwards like a cinema film round the wrong way".

Or, just to drive the point home:
"Only when a system behaves in a sufficiently random way may the differences between past and future, and therefore irreversibility, enter into its description . . . The Arrow of Time is the manifestation of the fact that the future is not given, that, as the French poet Paul Valery emphasized 'time is a construction'".



Cat :)
 
Gosh dang. That explanation left me speechless
 
I find it interesting just how oblivious most people are as to how little we understand time. We thought we knew how simply it operates and how reliable it is, at least until Einstein came along. Even with Einstein's work, we can claim to understand its behavior, but we have no clue as to its essence.

It's a little like gravity, except that with gravity we can at least hypothesize about mechanisms like gravitons.

"We know time flies, but where in the world are its wonderful wings? :)
 
punny
 
Yeah, time is interesting in that there isn't even agreement about whether it is fundamental or not. Thanks to experiments such as the delayed-choice experiment and tests of Bell's inequality, it appears we can say that either space or time is not fundamental, though this neglects a third possibility, namely that quantum mechanics could be governed by a nonlocal hidden-variable theory.
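For anyone wondering what "violating Bell's inequality" means quantitatively, here is a minimal sketch of the textbook CHSH version, using the standard singlet-state correlation and the usual optimal detector angles (nothing here is specific to the argument above; it just shows the numbers involved):

[CODE]
import math

# For a spin singlet, quantum mechanics predicts correlation E(a, b) = -cos(a - b)
def E(a, b):
    return -math.cos(a - b)

# Standard optimal detector settings for the CHSH combination
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"|S| = {abs(S):.3f}")  # ~2.828 = 2*sqrt(2)

# Any local hidden-variable model obeys |S| <= 2, so measured values near 2.8
# rule out local hidden variables -- but not nonlocal ones.
[/CODE]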

Recently I have come across the "no big crunch theorem", a theorem for the general Einstein field equations which is proved by contradiction in the article "Inhomogeneous and anisotropic cosmology", Matthew Kleban and Leonardo Senatore, JCAP10(2016)022.
The interesting proof shows that if we do not make any initial approximations to simplify the mathematics, the self-consistency criteria for the Einstein field equations show that the only remaining valid solutions with flat or open topologies in an initially accelerating universe are those which have no maximum spatial volume in any time-slice. In other words, from any frame of reference or time-slice, the rate of expansion of the universe as a whole must always remain greater than zero, such that the total volume of the universe never decreases.

What is interesting in terms of the arrow of time in this proof is that the conditions on the total spatial volume of the universe happen to have the same mathematical formalism as entropy, since there are countably many possible outcomes for a flat or open cosmology. The implication is thus that empty space has an extremely high entropy and that the rate of expansion is equivalent to the maximization of entropy in space through time, everywhere in spacetime. More intriguing, from the perspective of unifying quantum mechanics and general relativity and identifying "dark energy", is that the proof holds regardless of the chosen initial values (including the cosmological constant), and since the total entropy of the universe governs the rate of expansion everywhere, gravity at cosmological scales must take on the formalism of a nonlocal hidden-variable theory, consistent with Einstein's conception of quantum mechanics as a hidden-variable theory under the criterion that the hidden variable(s) are nonlocal, the variable in question here being the total entropy of the Universe.
This effectively results in a generalized analog of AdS/CFT (anti-de Sitter space/conformal field theory) correspondence, which for any universe that is initially expanding and not topologically closed will always hold regardless of the initial values. (AdS/CFT is the idea from string theory in which quantum entanglement and gravity are equivalent.)
In principle, from the perspective of Noether's theorem, it actually becomes pretty easy to see why this outcome emerges, as acceleration effectively breaks time symmetry, causing the 1st and 3rd laws of thermodynamics to emerge automatically. The 2nd law of thermodynamics, and thus the arrow of time, then emerges from the criterion that any mathematical theory be internally self-consistent for all possible values.

What is surprising to me is that there doesn't appear to have been much, if any, work in the literature to apply the internal consistency criterion, corresponding to Gödel's incompleteness theorems which must hold for any axiom-based logical system, to the general Einstein field equations. Once the internal consistency criterion is enforced, most possible manifold solutions are eliminated due to logical inconsistencies appearing in their mathematical framework that can only be resolved by adding additional axiomatic components to the model. Of particular note, the Friedmann-Lemaitre-Robertson-Walker metric solution becomes particularly special, as it corresponds to an unstable minimum of entropy from which all solutions diverge, i.e. it is a universal repeller. This means the FLRW metric can only ever apply in the case of conformally invariant, homogeneous and isotropic conditions, in which there is effectively no time, thus showing that the assumptions behind the "standard model of cosmology", Lambda-CDM, will always result in internal logical inconsistencies that render any reducible metric tensor a forbidden or unphysical solution to the Einstein field equations. With any deviation from these conditions the system can only fall forward or backward in time, mathematically resulting in an irreversible arrow of time associated with an expanding or contracting universe.
In this context, notably, Occam's razor shows that dark energy is likely unnecessary, as any initial condition in an initially accelerating universe (meaning one with any cosmological constant value, positive or negative) will always result in a net acceleration of expansion at cosmological scales.

The arrow of time is an automatic consequence of the breaking of time symmetry which is a prerequisite for any kind of structure.

One caveat that makes this incomplete is that the current proof of this theorem relies on the infinite extent of a flat or open geometry to prove that all other solutions to the Einstein field equations in this domain are logically inconsistent in any expanding or contracting universe. To generalize this proof it would need to be extended into the closed-solution domain. However, there is reason to believe such a proof exists, thanks to the mathematics of black hole thermodynamics linking the surface area of any event horizon to its total entropy.

This is much more speculative, but if this can be generalized then I have a suspicion that applying the general quantization of matter will result in the immediate quantization of gravity under the criterion of logical internal consistency of the Einstein field equations. There appears to be a constraint of uniqueness on the metric tensor at every point in space which, if it survives general quantization, might naturally imply that the quantization of space represents a totally antisymmetric wavefunction, or has the properties of a fermion obeying the Pauli exclusion principle. This is interesting as a possibility and not at all what I would have expected, though it may help resolve apparent paradoxes related to black hole thermodynamics; but it all appears far too pretty for my liking as a physicist, especially as this theory will be mathematically irreducible and thus extremely computationally expensive.

That I can get as far as I have is purely thanks to the clever proof by Matthew Kleban and Leonardo Senatore, which eliminated much of the complexity by, in effect, focusing on the set of all possible solutions and constraining them based on logical consistency. Without this paper I would never have thought to think in terms of internal consistency, nor that most of the set of possible solutions to the Einstein field equations might be shown to be logically self-inconsistent. Metamathematics is way outside my normal area of study, so it made me take a fairly deep dive into high-level math beyond what I have experience with. But if true, this looks like it may be a much more fundamental explanation for the arrow of time.
 
The interesting proof shows that if we do not make any initial approximations to simplify the mathematics, the self-consistency criteria for the Einstein field equations show that the only remaining valid solutions with flat or open topologies in an initially accelerating universe are those which have no maximum spatial volume in any time-slice. In other words, from any frame of reference or time-slice, the rate of expansion of the universe as a whole must always remain greater than zero, such that the total volume of the universe never decreases.
Interesting. There may be no "initial approximations" but I would bet a donut that there are some initial assumptions. :) DE would likely be at the heart of their assumption, but we don't know what it is or if it will decay at some point, or even reverse. This is why claims of "proofs" must be limited to mathematics not physics.

What is interesting in terms of the arrow of time in this proof is that the conditions on the total spatial volume of the universe happen to have the same mathematical formalism as entropy, since there are countably many possible outcomes for a flat or open cosmology. The implication is thus that empty space has an extremely high entropy and that the rate of expansion is equivalent to the maximization of entropy in space through time, everywhere in spacetime.
Yes, this seems to suggest that the statistical possibility for a mess far outweighs the possibility of a sudden burst into order -- Humpty Dumpty doesn't self-reassemble. But, I'm having to look far over my head to suggest anything that may be substantial as I've never studied GR. Much of what I am saying is only to help sort what may or may not be correct in the way I should be thinking.

More intriguing, from the perspective of unifying quantum mechanics and general relativity and identifying "dark energy", is that the proof holds regardless of the chosen initial values (including the cosmological constant)...
The biggest challenge in model making, AFAIK, are those initial conditions (values). It reminds me of a dumb joke that you'll regret me telling, but I'll assume (initial condition) that you know it, so it won't be my fault. ;)

"If we had some ham, we could make a ham sandwich, if we had some bread." :)

... and since the total entropy of the universe governs the rate of expansion everywhere, gravity at cosmological scales must take on the formalism of a nonlocal hidden-variable theory, consistent with Einstein's conception of quantum mechanics as a hidden-variable theory under the criterion that the hidden variable(s) are nonlocal, the variable in question here being the total entropy of the Universe.
That's interesting. I wonder if there is a way to snag that hidden variable, which would likely greatly affect some things in QM.

This effectively results in a generalized analog of AdS/CFT (anti-de Sitter space/conformal field theory) correspondence, which for any universe that is initially expanding and not topologically closed will always hold regardless of the initial values.
I would enjoy learning about what this means, perhaps in a new thread. The only De Sitter solution I am aware of, at least stand-alone, was his early solution to GR, which offered an explanation for redshift in a static universe, with the one condition of a universe without any mass. :) [Hubble worked with De Sitter at a major conference and, IMO, this is one reason Hubble never committed himself to expansion.]

(AdS/CFT is the idea from string theory in which quantum entanglement and gravity are equivalent.)
In principle, from the perspective of Noether's theorem, it actually becomes pretty easy to see why this outcome emerges, as acceleration effectively breaks time symmetry, causing the 1st and 3rd laws of thermodynamics to emerge automatically. The 2nd law of thermodynamics, and thus the arrow of time, then emerges from the criterion that any mathematical theory be internally self-consistent for all possible values.
Wow, that's a dandy! I assume it makes testable predictions, right?

What is surprising to me is that there doesn't appear to have been much, if any, work in the literature to apply the internal consistency criterion, corresponding to Gödel's incompleteness theorems which must hold for any axiom-based logical system, to the general Einstein field equations.
Would that be due to the freedom found in the use of axioms? I'm struggling to find things that might stick on the wall, admittedly.

Of particular note, the Friedmann-Lemaitre-Robertson-Walker metric solution becomes particularly special, as it corresponds to an unstable minimum of entropy from which all solutions diverge, i.e. it is a universal repeller. ... With any deviation from these conditions the system can only fall forward or backward in time, mathematically resulting in an irreversible arrow of time associated with an expanding or contracting universe.
I wonder if any minimum entropy condition is stable? It's only logical that gases would prefer to mix, leading to my favorite 2nd law phrase, "Heat won't flow from a cooler to hotter, you can try if you like but you far better notter!" :)


In this context, notably, Occam's razor shows that dark energy is likely unnecessary, as any initial condition in an initially accelerating universe (meaning one with any cosmological constant value, positive or negative) will always result in a net acceleration of expansion at cosmological scales.
Would that explain Inflation? Would an uber entropy state at t=0, or close to it, necessarily cascade to higher entropy states?

The arrow of time is an automatic consequence of the breaking of time symmetry which is a prerequisite for any kind of structure.

One caveat that makes this incomplete is that the current proof of this theorem relies on the infinite extent of a flat or open geometry to prove that all other solutions to the Einstein field equations in this domain are logically inconsistent in any expanding or contracting universe. To generalize this proof it would need to be extended into the closed-solution domain. However, there is reason to believe such a proof exists, thanks to the mathematics of black hole thermodynamics linking the surface area of any event horizon to its total entropy.
Impressive!

Perhaps our ability to study merging BHs might help reveal Hawking's entropy hypothesis for BHs. Combining BHs actually produces more total horizon surface area than the two had before the merger, even though mass-energy is radiated away, so the horizon entropy increases. [I've argued, as a novice, that the merger's delta entropy should do more than produce a beautiful gravity wave. But the EM emitted, if any, may be too feeble for us to see due to the great distances these mergers take place from us.]
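A quick sanity check of that area bookkeeping with GW150914-like masses, treating everything as non-spinning Schwarzschild holes for simplicity (the real analysis includes spin, but the conclusion that the total horizon area grows is the same):

[CODE]
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg

def schwarzschild_area(m_solar):
    """Horizon area of a non-spinning black hole of m_solar solar masses."""
    r_s = 2 * G * (m_solar * M_sun) / c**2
    return 4 * math.pi * r_s**2

# GW150914-like masses: roughly 36 + 29 solar masses in, ~62 out (~3 radiated).
A_before = schwarzschild_area(36) + schwarzschild_area(29)
A_after = schwarzschild_area(62)
print(f"area before: {A_before:.2e} m^2, after: {A_after:.2e} m^2")
print("area increased:", A_after > A_before)  # True, per Hawking's area theorem
[/CODE]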

...But if true, this looks like it may be a much more fundamental explanation for the arrow of time.
Thanks for sharing it!
 
Interesting. There may be no "initial approximations" but I would bet a donut that there are some initial assumptions. :) DE would likely be at the heart of their assumption, but we don't know what it is or if it will decay at some point, or even reverse. This is why claims of "proofs" must be limited to mathematics not physics.
Oof, yeah, that was supposed to be "assumptions", not "approximations". That typo may have subconsciously come from my relating it to one of the main problems with the current model of cosmology, which is the assumption that any deviation from perfect isotropy is small enough to be negligible, which the mathematical proof for the general Einstein field equations shows can never be the case.

That's interesting. I wonder if there is a way to snag that hidden variable, which would likely greatly affect some things in QM.
Good question. Maybe, but I am skeptical that we could obtain such information, thanks to computational irreducibility (the computational extension of Gödel's incompleteness theorems). That said, there may be observations which can be made in space and time which could greatly constrain such hidden variables, perhaps enough to get an answer. We will not know unless we look!

I would enjoy learning about what this means, perhaps in a new thread. The only De Sitter solution I am aware of, at least stand-alone, was his early solution to GR, which offered an explanation for redshift in a static universe, with the one condition of a universe without any mass. :) [Hubble worked with De Sitter at a major conference and, IMO, this is one reason Hubble never committed himself to expansion.]
Interesting. One thing I have been meaning to look into further is some of the early work on GR, to try to get a better understanding of how things developed as they did. The main problem for me is that during the pandemic I missed two semesters in a row in grad school, when they tried to have in-person classes, which led to me becoming unenrolled. I have since managed to retroactively get my master's degree in physics, as I had already met all the requirements for that degree, but without institutional access, pre-internet papers are hard to find.

Wow, that's a dandy! I assume it makes testable predictions, right?
It better! I don't really have the computational power to run the mathematics for the full nonlinear general Einstein field equations to check whether some of these predictions are actually observable, but otherwise this is basically just the same unmodified Einstein field equations, for 3 spatial dimensions and one time dimension, run for arbitrary initial conditions. As a consequence, in principle, all of general relativity should be automatically reproduced. The main surprise was learning that there hasn't been a generalized check of the validity of the underlying assumptions, specifically the so-called cosmological principle.

If there is one big testable prediction, it would be that while we know there is a net acceleration of the expansion of the universe, that expansion cannot be isotropic. This is what actually led me to investigate this mathematics; frankly, everything else has been a complete surprise which seems too good to be true. However, thus far I can't falsify any of it, at least not without formulating it as an NP problem which will not be computationally reducible.

Would that be due to the freedom found in the use of axioms? I'm struggling to find things that might stick on the wall, admittedly.
Probably, after all you can always add another axiom to make any solution internally consistent. That, after all, is exactly what the combination of dark energy and the assumption that the CMB dipole is purely kinematic in origin amounts to. The latter sounds like a reasonable assumption until you learn that, in general, there should be a cosmological component to the CMB dipole from everything in the observable universe, and the conditions for all these components to exactly cancel out correspond to a minimum-entropy solution in information theory; that is to say, it requires a violation of the second law of thermodynamics. With the right added axiom, however, you can always fix that condition, as Gödel's incompleteness theorem says.

However, by the same logic, the Ptolemaic model of the solar system, with the Earth at its center, is also a perfectly valid solution given enough additional axioms, which is why Occam's razor is so useful. The question becomes: what is the conceptually simplest model which explains all our observations thus far? In this light the cosmological principle must go, and internal consistency should be enforced on GR. (My gut feeling is that the quanta of gravity will come from the irreducible fluctuations in the off-diagonal terms of the metric tensor not canceling out, but that is just opinion at this point; it needs hard data modeling to be anything more.)


I wonder if any minimum entropy condition is stable? It's only logical that gases would prefer to mix, leading to my favorite 2nd law phrase, "Heat won't flow from a cooler to hotter, you can try if you like but you far better notter!" :)
I would argue perhaps it is the other way around, and entropy is defined by its property of always increasing with the forward passage of time? After all, entropy in statistical mechanics is ultimately a consequence of entropy in information theory, which relates to the number of possible configurations of bits.
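As a toy illustration of "entropy as counting bit configurations" (not tied to any cosmological model): for n bits, the macrostate with k ones has C(n, k) microstates, and S = ln C(n, k) is largest when the bits are maximally mixed:

[CODE]
from math import comb, log

n = 100  # toy system of 100 bits
for k in (0, 10, 25, 50):
    omega = comb(n, k)  # number of microstates with exactly k ones
    print(f"k = {k:3d}  microstates = {omega:.3e}  S = ln(omega) = {log(omega):6.2f}")

# The 50/50 macrostate dominates so overwhelmingly that a system wandering at
# random among its microstates is essentially always found near maximum entropy.
[/CODE]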


Would that explain Inflation? Would an uber entropy state at t=0, or close to it, necessarily cascade to higher entropy states?
That was the conclusion of Matthew Kleban and Leonardo Senatore in the paper cited. However, I'm not yet completely convinced of that, as it seems too simplistic for my liking. Frankly, I'm wondering if inflation is needed at all, since if the CMB is not isotropic and uniform, the need for inflation largely disappears. The ugly reality of the situation in modern cosmology is that we assume the CMB is uniformly isotropic everywhere, i.e. that there are no cosmological components of the CMB dipole.

However, given the work of Nathan J. Secrest et al. 2021 ApJL 908 L51, we can largely rule out the purely kinematic CMB dipole assumption, i.e. that no cosmological dipole components are imprinted in the CMB, at 4.9 sigma, or a 1-in-2-million chance that the mismatch in both direction and magnitude between the dipole in the CMB and the dipole in 1.36 million cosmologically distant quasars measured by CatWISE is a statistical fluke. If we can get the full Square Kilometre Array we should actually be able to test this to 5 sigma, but based on the evidence thus far the modern model of cosmology is scientifically problematic.
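For context, the Ellis & Baldwin test compares the dipole amplitude expected purely from our motion against the dipole actually counted in the source catalogue. A back-of-the-envelope version in Python, with round numbers similar to those quoted for the CatWISE quasar sample (the x and alpha values here are illustrative assumptions, not the paper's exact fit):

[CODE]
# Kinematic dipole expected in source counts from our motion (Ellis & Baldwin 1984):
#   D_kin = [2 + x * (1 + alpha)] * (v / c)
v = 370e3      # m/s, solar velocity inferred from the CMB dipole
c = 2.998e8    # m/s
x = 1.7        # slope of the integral source counts (illustrative value)
alpha = 1.26   # mean spectral index of the sources (illustrative value)

D_kin = (2 + x * (1 + alpha)) * (v / c)
print(f"expected kinematic dipole amplitude: {D_kin:.4f}")  # roughly 0.007

# Secrest et al. report a measured quasar dipole amplitude roughly twice this
# expectation, which is the ~4.9 sigma tension described above.
[/CODE]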

Perhaps our ability to study merging BHs might help reveal Hawking's entropy hypothesis for BHs. Combining BHs actually produces more total horizon surface area than the two had before the merger, even though mass-energy is radiated away, so the horizon entropy increases. [I've argued, as a novice, that the merger's delta entropy should do more than produce a beautiful gravity wave. But the EM emitted, if any, may be too feeble for us to see due to the great distances these mergers take place from us.]

I hope we can get some answers on the unification between GR and quantum mechanics from gravitational waves. Of particular note, I'm excited about the ongoing decaying orbit of SDSSJ143016.05+230344.4. Unfortunately it is unfolding too quickly for LISA to be ready in time, but the best-fit model for the observations of the galaxy's AGN suggests two SMBHs which, over an interval of 3 years, have had their orbital period drop from around one orbit every year to one complete orbit in less than a month. As for gravitational waves, they probably should carry away some entropy, but measuring that is going to be hard, since LIGO, Virgo and other Earth-based gravitational-wave observatories can't see such huge waves passing by. Pulsar timing arrays are our only hope!
 
That said there may be observations which can be made in space and time which could greatly constrain such hidden variables perhaps enough to get an answer. We will not know unless we look!
Right. Yogi Berra is famous for spreading rumors such as... one "can see a lot by just looking!" Perhaps the JWST will assist in this worthy study.

Interesting. One thing I have been meaning to look into further is some of the early work on GR...
I find it helpful to explain the BBT by starting from a historical perspective. The theory was never about starting from t=0 or from a singularity; it came from what we observe and rewinding the clock from there.

The history itself is quite interesting, IMO. Who would ever expect a priest (Lemaitre) to birth the BBT? IIRC, it was tensions in Rome that caused Lemaitre to go to the States, where he met the best-named astronomer of all time, Vesto Slipher! Lowell, fortunately, didn't require his obsession with Mars to consume all work there. He had Slipher study spiral "nebulae" and, with very difficult effort, get some redshift data on them. [I think Slipher had to lean into the scope to maintain those long hours in the cold for observing redshifts.] Hubble managed to have some distance measurements (Cepheids).

Lemaitre knew of the problem in Einstein's model and De Sitter's model. Einstein's static model was okay except he could not explain redshift. De Sitter's static model could explain redshift but only with a universe of no mass! :)

Friedmann, a few years earlier, tried to awaken Einstein to a non-static universe, but only on a mathematical basis. Lemaitre used the math to explain his physics model. Einstein didn't like either one. He told Lemaitre his math was fine but his physics "abominable". :)

Lemaitre produced his paper (Belgium) in 1927. In 1929, he went to England for a conference and heard Eddington discuss the dilemma with Einstein and De Sitter, and others, IIRC. Lemaitre, a former student of Eddington, explained his solution and then translated it into English. Lemaitre's original paper produced the first expansion rate, which was way off, since Hubble used the wrong type of Cepheid variable, though only one type was known at that point in time, I'm fairly sure.

Once Eddington and De Sitter -- both were competent in GR -- read Lemaitre's paper, they quickly recognized its importance. De Sitter was able to quickly turn Einstein towards it, but Einstein was quickly fading from cosmology and moving more into his total unification of everything. [Einstein later reportedly quickly stood and applauded a speech by Lemaitre.]

Lemaitre's English translation didn't have his expansion rate because Hubble had advanced his work on both distances and redshifts, allowing others, including Hubble, to get the credit. [Hubble never committed to claiming his values justified them as expansion rates. Ironic, given H0, further justifying the somewhat recent decision by the IAU to now call it the Hubble-Lemaitre law, but I don't think they showed any abbreviation, so I give it little hope. :)]

I hate to make this a tome, but.... I recently enjoyed learning that Einstein withheld his GR equations initially until he had three independent lines of evidence he could use to support his theory: the known solar redshift (the Sun's gravity being the reason, of course); the fix for the precession of Mercury's perihelion (the original destruction of Vulcan); and the predicted amount of bending of starlight around an eclipsed Sun, which Eddington verified.

Perhaps that will help, assuming you were unaware of any of that. You likely know about the subsequent challenges, including that which brought us Guth's inflationary model.

to try to get a better understanding of how things developed as they did. The main problem for me is that during the pandemic I missed two semesters in a row in grad school, when they tried to have in-person classes, which led to me becoming unenrolled. I have since managed to retroactively get my master's degree in physics, as I had already met all the requirements for that degree, but without institutional access, pre-internet papers are hard to find.
Ugh, I suppose the silver lining is that misery loves company. I've had to run a business under Covid. [I regret that the credibility of medical science has suffered for seemingly rejecting some robust data. The fact that it's an aerosol, but was only recently recognized as such in spite of all the early efforts by aerosol scientists, is one of the worst moments to come to light, IMO.]

Congrats in overcoming the adversity!

The main surprise was learning that there hasn't been a generalized check of the validity of the underlying assumptions, specifically the so-called cosmological principle.
It seems more likely than not it will hold. The isotropy in the CMBR would support this, right?

If there is one big testable prediction, it would be that while we know there is a net acceleration of the expansion of the universe, that expansion cannot be isotropic.
I assumed the large-scale web structure of galaxy clusters demonstrates more for isotropy than otherwise. So is the margin of error too great to draw this conclusion?

This is what actually led me to investigate this mathematics; frankly, everything else has been a complete surprise which seems too good to be true. However, thus far I can't falsify any of it, at least not without formulating it as an NP problem which will not be computationally reducible.
Well, "if it were easy, then..." ;)

Probably, after all you can always add another axiom to make any solution internally consistent. That, after all, is exactly what the combination of dark energy and the assumption that the CMB dipole is purely kinematic in origin amounts to. The latter sounds like a reasonable assumption until you learn that, in general, there should be a cosmological component to the CMB dipole from everything in the observable universe, and the conditions for all these components to exactly cancel out correspond to a minimum-entropy solution in information theory; that is to say, it requires a violation of the second law of thermodynamics. With the right added axiom, however, you can always fix that condition, as Gödel's incompleteness theorem says.
I've often wondered if the Hubble flow, speaking of the dipole, could not be considered a preferred frame, but that GR just can't offer a means of distinction. Perhaps the mystery of how to understand time is in this flow.

However, by the same logic, the Ptolemaic model of the solar system, with the Earth at its center, is also a perfectly valid solution...
Well, you likely mean a modified Tychonic model does work. [Ptolemy was falsified with Galileo's discovery of all those phases for Venus (and Mercury).] GR math is fine with it, though all the required fictitious forces make it...abominable! :)

The question becomes: what is the conceptually simplest model which explains all our observations thus far? In this light the cosmological principle must go, and internal consistency should be enforced on GR. (My gut feeling is that the quanta of gravity will come from the irreducible fluctuations in the off-diagonal terms of the metric tensor not canceling out, but that is just opinion at this point; it needs hard data modeling to be anything more.)
Well, as my friend would tell me when I wax headedly, "of all your ideas, that's one of them!" It sounds interesting. Good luck.

[I have to run, but thanks for all this discussion. Very interesting!]
 
Continuing with Episode 2... ;)

The question becomes: what is the conceptually simplest model which explains all our observations thus far? In this light the cosmological principle must go, and internal consistency should be enforced on GR.
I'm unclear what problem you see with the cosmological principle. Are there regions that defy the "known" constants of physics? The anisotropy is quite tiny, and it's necessary for things like stars and planets.

(My gut feeling is that the quanta of gravity will come from the irreducible fluctuations in the off-diagonal terms of the metric tensor not canceling out, but that is just opinion at this point; it needs hard data modeling to be anything more.)
Is there a general public kind of explanation for where GR exhibits spots of interest or even concern? The first "solutions" for GR seemed to be mathematically valid but also erroneous. [Schwarzschild was one of the first to present one solution, which gave us black holes. How nuts was that one! ;)] About how many possible solutions might there be for GR? Is that even a fair question? Engineering does a lot with just F=ma, so I assume there is great similarity in how much stuff can get baked starting with just flour.

Frankly, I'm wondering if inflation is needed at all, since if the CMB is not isotropic and uniform, the need for inflation largely disappears.
The problem that was found in BBT was that it was too isotropic. Apparently, not that I understand any of it, the very early influence of quantum fluctuations would require a less uniform temperature, more anisotropy. Something was needed to diminish the hot spots. But, at those super-dense conditions of "pure energy" (Spock ;)), perhaps something strange like tachyons contributed to the fast transfers, but that's purely imaginative. Inflation seems to be plausible given the mechanism presented to cause it, apparently.

The ugly reality of the situation in modern cosmology is that we assume the CMB is uniformly isotropic everywhere, i.e. that there are no cosmological components of the CMB dipole...

However, given the work of Nathan J. Secrest et al. 2021 ApJL 908 L51, we can largely rule out the purely kinematic CMB dipole assumption, i.e. that no cosmological dipole components are imprinted in the CMB, at 4.9 sigma, or a 1-in-2-million chance that the mismatch in both direction and magnitude between the dipole in the CMB and the dipole in 1.36 million cosmologically distant quasars measured by CatWISE is a statistical fluke. If we can get the full Square Kilometre Array we should actually be able to test this to 5 sigma, but based on the evidence thus far the modern model of cosmology is scientifically problematic.
I'm curious what you mean. The dipole would necessarily need to exist unless we are very unique in being glued to the Hubble Flow, right? Are you saying the measured shifts vary with distance, separate from what a normal dipole should exhibit?

Thanks again.
 

Catastrophe

"Science begets knowledge, opinion ignorance.
Excuse me, please, but I have a problem understanding isotropic. I know what it means, but, applied to the Universe, does it mean that all currant buns are isotropic?

Do currant buns (i.e., Universe) have to have currants (galaxies et cetera) present in a non-random configuration, to be isotropic?

Cat :)


Note: This is not off topic, as isotropy is one of the assumptions.
 
Excuse me, please, but I have a problem understanding isotropic. I know what it means, but, applied to the Universe, does it mean that all currant buns are isotropic?

Here’s just my personal take on this question that seems to present ambiguity from time to time:

If you hollow one bun out and put a camera inside then look all around, you would see some variations, but you would also see uniformity.

Some would say it's not isotropic, but IMO, many would understand the main point that it does look the same in all directions (especially if larger areas are imaged).

I think context becomes important to understand the point being made. So, homogeneity can be argued to be isotropic or not isotropic depending on whether the uniform variations are acceptable.
 
I'm unclear what problem you see with the cosmological principle. Are there regions that defy the "known" constants of physics? The anisotropy is quite tiny, and it's necessary for things like stars and planets.

The cosmological principle might seem natural when the explanation used to justify it is that the laws of physics are the same everywhere; however, this is not mathematically what cosmologists mean when they apply the cosmological principle. Specifically, the assumption they refer to is the idea that, for sufficiently symmetric distributions of matter and other forms of energy, the off-diagonal components of the stress-energy tensor should effectively cancel each other out at large scales.
Mathematically this has a lot of problems, as there is nothing within the Einstein field equations that says this should be valid; after all, the local density of matter, and the curvature that results from it, is very different from a change in the underlying laws of physics. Really, what this has to do with is path dependence. On the one hand, we know that the Einstein field equations are path dependent, as that is how you get gravitational lensing and gravity; yet cosmologists assume that at large distances all the deviations caused by matter in the universe will effectively cancel out. Since the Einstein field equations are a system of multivariate partial differential equations, this is mathematically about as valid as assuming that the wind speeds across a planet's atmosphere in the Navier-Stokes equations will cancel out, letting you ignore the effects of wind on weather. This is fundamentally flawed in many ways, and there is an entire field of mathematics dedicated to the study of the properties of systems of multivariate partial differential equations, known as chaos theory.

In summary, what cosmologists are assuming is that they can ignore the effects of local spacetime variations when looking over sufficiently large distances, because they treat those variations as converging to some generalized solution.

This lets them treat the rate of expansion at large scales as effectively constant (the so-called Hubble rate thus being reduced to H_0), drastically simplifying the mathematics from a system of fundamentally nonlinear partial differential equations into a linear system of equations that is analytically solvable.
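To make concrete what that simplification looks like in practice: under the FLRW assumption the whole expansion history collapses to a single function of time (or redshift) with no spatial dependence at all. A minimal sketch for a flat Lambda-CDM universe, with typical illustrative parameter values:

[CODE]
import math

H0 = 70.0        # km/s/Mpc, illustrative present-day Hubble rate
Omega_m = 0.3    # matter density fraction (illustrative)
Omega_L = 0.7    # cosmological-constant fraction (flatness assumed)

def H(z):
    """Friedmann-equation expansion rate for flat Lambda-CDM.
    Note it depends on redshift (time) only, never on position --
    which is exactly the simplification being questioned here."""
    return H0 * math.sqrt(Omega_m * (1 + z) ** 3 + Omega_L)

for z in (0, 0.5, 1, 2):
    print(f"z = {z}:  H = {H(z):.1f} km/s/Mpc")
[/CODE]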

This is an awful assumption, because one of the defining properties of differential equations is that every system of differential equations has a unique solution for every possible set of initial conditions. For a system of multivariate partial differential equations F(x,y,z,t), initial conditions differing only by x=1 versus x'=1 + 1*10^(-3150) will each yield a unique solution, with these deviations only ever amplifying over time. In this sense all solutions must always diverge, and thus the value at any point in space or time will, with sufficient precision, be unique everywhere.
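A toy illustration of that kind of sensitivity, using the chaotic logistic map rather than the field equations themselves, so only the qualitative point carries over:

[CODE]
# Two trajectories of the chaotic logistic map x -> 4x(1-x), started 1e-15 apart.
x, x_prime = 0.3, 0.3 + 1e-15
for step in range(1, 61):
    x, x_prime = 4 * x * (1 - x), 4 * x_prime * (1 - x_prime)
    if step % 15 == 0:
        print(f"step {step:2d}: difference = {abs(x - x_prime):.3e}")

# The initially negligible difference keeps growing until the two solutions are
# completely unrelated -- tiny deviations in the initial data do not stay small.
[/CODE]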



Thus the issue with the cosmological principle is that cosmologists ignore the mathematical properties of the class of equations of which the Einstein field equations are a subset, just to make the math easier.

As a result we get mathematically forbidden nonsense like the idea that the Hubble rate can be a constant rather than a field H(x,y,z,t) which has a unique value for every given x, y, z & t. To some extent cosmologists know this, as it was pointed out more than a century ago, but they have chosen to ignore it by treating these differences as minor. However, what the no big crunch theorem spells out explicitly is that this assumption necessarily results in the Einstein field equations becoming logically self-inconsistent. (That is to say, these assumptions violate the laws of thermodynamics, as you are throwing away information.)

At least in recent years cosmologists have started to recognize that they can't assume H is constant in time; alas, they still ignore the spatial component, which severely impacts the results of studies of cosmologically distant sources, as there is no reason to assume the density of the stress-energy tensor (a.k.a. the amount of stuff) is the same everywhere, and even minute differences should, for any such system of equations, only amplify over time.

Of course, this problem has been able to persist because it has a self-reinforcing effect on measurements due to the initial assumptions: if you use the assumption that space is effectively the same everywhere at large scales (with the exception of time) to determine the distance of cosmological sources, say by measuring their redshifts, then you will always find that your results are consistent with your initial assumption. :pensive:

Is there a general public kind of explanation for where GR exhibits spots of interest or even concern? The first "solutions" for GR seemed to be mathematically valid but also erroneous. [Schwarzschild was one of the first to present one solution, which gave us black holes. How nuts was that one! ;)] About how many possible solutions might there be for GR? Is that even a fair question? Engineering does a lot with just F=ma, so I assume there is great similarity in how much stuff can get baked starting with just flour.

Indeed, the problems with the Einstein field equations from the perspective of mathematical theory are quite simple; however, the subject is far removed from even most cosmologists, since the cosmological community hasn't really used the general Einstein field equations since Friedmann and others developed the Friedmann-Lemaitre-Robertson-Walker metric (FLRW metric) and its associated simplified Friedmann equations. After all, the Friedmann equations are an exact solution to the Einstein field equations, so surely perturbation theory will let them be used as a general guideline for cosmological models. The catch, however, is that the assumption generally made, that deviations from the FLRW solution become negligibly small at cosmological distances, which has become known as the cosmological principle, is not valid and can mathematically be shown to be forbidden by the criteria for logical internal consistency. That is to say, the Friedmann equations violate the conservation of information. It is my suspicion that this will be found to be the source of the information paradox. >_>

The problem that was found in BBT was that it was too isotropic. Apparently, not that I understand any of it, the very early influence of quantum fluctuations would require a less uniform temperature, more anisotropy. Something was needed to diminish the hot spots. But, at those super-dense conditions of "pure energy" (Spock ;)), perhaps something strange like tachyons contributed to the fast transfers, but that's purely imaginative. Inflation seems to be plausible given the mechanism presented to cause it, apparently.
That is true under the assumption that the FLRW metric is applicable. I am, however, increasingly aware of theoretical problems in these underlying assumptions which make me question their validity. It is quite a bit more complicated, though, as the no big crunch theorem relies on the weak energy condition, which is that, accounting for expansion, energy is conserved. Inflation allows this to be violated, as inflation involves a phase transition in quantum fields which would release energy, so inflation is hard to analyze under this framework. However, it is possible to have conditions in a low-density region which might replicate the rapid acceleration expected of inflation, though that doesn't directly account for where the energy of the "big bang" comes from. Regardless, I also have to wonder what effects the implied nonlocality might have on large-scale structure. Can there be alternative models to inflation related to some other kind of phase transition within quantum fields? Really, there are more questions here than answers, so this seems unlikely to be resolved soon.

I'm curious what you mean. The dipole would necessarily need to exist unless we are very unique in being glued to the Hubble Flow, right? Are you saying the measured shifts vary with distance, separate from what a normal dipole should exhibit?
Well, yes, the dipole clearly exists; however, there are a number of sources which could contribute to the observed dipole, particularly in the general case where the rate of expansion is a field that varies everywhere in space and time based on both local and global/universal effects. The current cosmological model assumes the rate of expansion is constant everywhere, with the distribution of matter being the same at large scales; this simplifies the math so they don't need to account for path dependence in the measured redshift, and thus the rate of acceleration, but it only works if and only if the dipole is purely kinematic. For any other dipole component there will need to be a directionally dependent correction function to account for local variations in matter density and bulk flows; without it there will be large systematic errors in interpretations of redshift measurements.

The falsification test used by Nathan Secrest et al. 2021 indeed looked to test the validity of the assumption of a purely kinematic dipole, following the procedure for falsification outlined by Ellis & Baldwin 1984.

Because the cosmological principle depends on the assumption that the observed CMB dipole must be purely kinematic, so that a change of reference frame can be made to the presumed frame in which the CMB should be homogeneous and isotropic, the model can be falsified if two dipoles at different cosmic times, one being the CMB, can be constructed and shown to be incompatible. For this assumption to be valid, the distance of an individual source, so long as it is cosmologically distant, should not matter, so you can construct a dipole from cosmologically distant sources which can be compared to the CMB. If they don't match, then your initial assumption that there is a special frame in which the CMB is uniform and isotropic is invalid, and thus you will need to account for cosmological effects which cannot be removed by a simple shift of reference frame.

Note that this test doesn't tell you anything beyond the fact that the cosmological components of the CMB dipole are nonzero, as there are a number of ways cosmological effects can induce a dipole onto the sky, ranging from overdensities inherent to the CMB to overdensities along the path the CMB light took to reach us. Further follow-up to determine the source of the dipole components would be needed to construct a correction function before any cosmological data could be reanalyzed.
The reason this hasn't been done earlier is that it takes millions of independent sources all across the sky, once any local signal contamination has been removed, which is more data than most cosmology studies come close to using.
Excuse me, please, but I have a problem understanding isotropic. I know what it means, but, applied to the Universe, does it mean that all currant buns are isotropic?

Do currant buns (i.e., Universe) have to have currants (galaxies et cetera) present in a non-random configuration, to be isotropic?

Cat :)


Note: This is not off topic, as isotropy is one of the assumptions.
Technically yes, but cosmologists have assumed that small deviations from isotropy have a negligible effect on large-scale structure. Mathematically this is much more problematic, as the results seem to indicate it doesn't hold in the way it has been argued it should, which makes the necessary math much more complicated.
 

Catastrophe

"Science begets knowledge, opinion ignorance.
The cosmological principal might seem natural when the explanation used to justify it is that the laws of physics are the same everywhere, however this is not mathematically what cosmologists mean when they apply the cosmological principal. Specifically the assumption they refer to is the idea that for sufficiently symmetric distributions of matter and other forms of energy that the off diagonal components of the energy stress tensor should effectively be the same canceling each other out at large scales.
Mathematically this has a lot of problems as there is nothing within the Einstein field equations that says this should be valid after all the local density of matter and the curvature that results from that is very different from changes in the underlying laws of physics. Really what this has to do with is path dependence on one hand we know that the Einstein field equations are path dependent as that is how you get gravitational lensing and gravity but they assume that at large distances all the deviations from matter in the universe will effectively cancel out which as the Einstein field equations are a system of multivariate partial differential equations this mathematically as valid as assuming that the wind speeds across a planet's atmosphere within the Navier stokes equations will cancel out letting them ignore the effects of wind on weather. This is fundamentally flawed in many ways with an entire field of mathematics dedicated to the study of the properties of systems of multivariate partial differential equations known as chaos theory.

In summary what cosmologists are assuming is that they can ignore the effects of local spacetime variations if looking over sufficiently large differences as they treat those variations as converging to some generalized solution.

This lets them treat the rate of expansion at large scales as effectively constant (the so called Hubble rate thus being reduced to H_o drastically simplifying the mathematics from a system of fundamentally nonlinear partial differential equations into a linear system of equations that are analytically solvable.

This is an awful assumption because one of the defining properties of differential equations is that every system of differential equations has a unique solution for every single possible initial conditions such that for any system of multivariate partial differential equations F(x,y,z,t) an initial condition differing only by x=1 and x'=1+ 1*10^(-3150) will result in each being unique with these deviations only ever amplifying over time. In this sense all solutions must always diverge and thus the value at any point in space or time everywhere will with sufficient precision be unique everywhere.



Thus the issue with the cosmological principal has to do with cosmologists ignoring the mathematical properties of the class of functions which the Einstein field equations are a subset of just to make the math easier.

As a result we get mathematically forbidden nonsense like the idea that the Hubble rate can be a constant rather than being a field H(x,y,z,t) which has a unique value everywhere in space and time is uniquely determined for a given x, y, z & t. To some extent cosmologists know this as it was pointed out more than a century ago but they have chosen to ignore this by justifying these differences as minor. However what the No big crunch theorem spells out explicitly is that this assumption necessarily results in the Einstein field equations becoming logically self inconsistent. (That is to say these assumptions violate the laws of thermodynamics as you are throwing away information).

At least in recent years cosmologists have started to recognize that they can't assume H is constant in time alas they still ignore the spatial component which severely impacts the results of studies of cosmologically distant sources as there should not be any reason to assume the density of the energy stress tensor(aka the amount of stuff) is the same everywhere as even minute differences should for any such system of equations only amplify over time.

Of course this problem has been able to persist since it has a self reinforcing effect on measurements due to the initial assumptions after all if you use the assumption that space is effectively the same everywhere at large scales with exception to time to determine the distance of cosmological sources (by say measuring their redshifts) then you will always find that your results are consistent with your initial assumption. :pensive:



Indeed the problems with the Einstein field equations from the perspective of mathematical theory are quite simple however the subject is far removed from even most cosmologists since the cosmological community hasn't really used the general Einstein field equations since Friedmann and others developed the Friedmann Lemaitre Robertson Walker metric (FLRW metric) and their associated simplified Friedmann equations. After all the Friedmann equations are an exact solution to the Einstein field equations so surely perturbation theory will let them use them as a general guideline for cosmological models. The catch however is that the assumption they generally always make that deviations from the FLRW solution become negligibly small at cosmological distances which has become known as the cosmological principal is not valid and mathematically can be shown to be forbidden by the criteria for logical internal consistency. This is to say the Friedmann equations violate the conservation of information. It is my suspicion that this will be found to be the source of the information paradox. >_>


That is true under the assumption that the FLRW metric is applicable. I am however increasingly aware of theoretical problems in these underlying assumptions which make me question their validity. It is quite a bit more complicated however as the No big Crunch theorem relies on the Weak Energy Condition which is that accounting for expansion the rate of energy is conserved. Inflation allows this to be violated however as inflation involves a phase transition in quantum fields which would release energy for instance so inflation is hard to analyze under this framework however it is possible to have conditions which in a low density region might replicate the rapid acceleration expected of inflation though that doesn't directly account for where the energy of the "big bang" comes from. Regardless I also have to wonder what effects the implied nonlocality might have on large scale structure? Can there be alternative models to inflation related to some other kind of phase transition within quantum fields? Really there are a lot of questions here more than answers so this seems unlikely to be resolved soon.


Well yes, the dipole clearly exists, but there are a number of sources that could contribute to the observed dipole, particularly in the general case where the rate of expansion is a field that varies everywhere in space and time based on both local and global effects. The current cosmological model assumes the rate of expansion is constant everywhere, with the large-scale distribution of matter the same at large scales, and thus simplifies the math so that no account needs to be taken of path dependence in the measured redshift (and hence in the inferred rate of acceleration). That only works if this is a purely kinematic dipole. For any other dipole component there would need to be a directionally dependent correction function to account for local variations in matter density and bulk flows; without it there will be large systematic errors in interpretations of redshift measurements.

The falsification test used by Secrest et al. (2021) indeed looked to test the validity of the assumption of a purely kinematic dipole, following the procedure for falsification outlined by Ellis & Baldwin (1984).
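The Ellis & Baldwin test rests on a simple prediction: for a source population with cumulative flux-density counts N(>S) ∝ S^(-x) and spectral index α (flux S ∝ ν^(-α)), an observer moving at speed v should see a number-count dipole of amplitude (standard result, quoted from memory, so check the paper for the exact conventions):

```latex
\mathcal{D} \;=\; \bigl[\,2 + x\,(1+\alpha)\,\bigr]\,\frac{v}{c}
```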

Because the cosmological principle depends on the assumption that the observed CMB dipole is purely kinematic, so that a change of reference frame can be made to the presumed frame in which the CMB is homogeneous and isotropic, the model can be falsified if two dipoles at different cosmic times, one being the CMB, can be constructed and shown to be incompatible. For this assumption to be valid the distance of an individual source, so long as it is cosmologically distant, should not matter, so you can construct a dipole from cosmologically distant sources and compare it to the CMB. If they don't match, then the initial assumption that there is a special frame in which the CMB is uniform and isotropic is invalid, and you will need to account for cosmological effects that cannot be removed by a simple shift of reference frame.

Note that this test doesn't tell you anything beyond that the cosmological components of the CMB dipole are nonzero. There are a number of ways cosmological effects can induce a dipole onto the sky, ranging from overdensities intrinsic to the CMB to overdensities along the path the CMB light took to reach us, so further follow-up to determine the source of the dipole components would be needed to construct a correction function before any cosmological data could be reanalyzed.
The reason this hasn't been done earlier is that it takes millions of independent sources all across the sky, after any local signal contamination has been removed, which is more data than most cosmology studies come close to using.
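As a rough illustration of what "constructing a dipole" from a catalogue means in practice (a toy estimator of my own using numpy, not the actual Secrest et al. pipeline, which also has to deal with masks, flux cuts and the Galactic plane):

```python
import numpy as np

def dipole_from_catalog(ra_deg: np.ndarray, dec_deg: np.ndarray) -> np.ndarray:
    """Toy dipole estimator: 3x the mean unit vector of all source positions.

    An isotropic catalogue averages to roughly zero; an excess of sources in
    one direction shows up as a net vector pointing that way.
    """
    ra = np.radians(ra_deg)
    dec = np.radians(dec_deg)
    xyz = np.column_stack((np.cos(dec) * np.cos(ra),
                           np.cos(dec) * np.sin(ra),
                           np.sin(dec)))
    return 3.0 * xyz.mean(axis=0)  # factor 3: a sky with density 1 + D.n has mean unit vector D/3

# Example with a fake isotropic catalogue of a million sources:
rng = np.random.default_rng(0)
ra = rng.uniform(0.0, 360.0, 1_000_000)
dec = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 1_000_000)))  # uniform on the sphere
print(dipole_from_catalog(ra, dec))  # ~[0, 0, 0] up to shot noise
```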

Technically yes, but cosmologists have assumed that small deviations from isotropy have a negligible effect on large-scale structure. Mathematically this is much more problematic, as the results seem to indicate it doesn't hold in the way it has been argued it should, which makes the necessary math much more complicated.

Ah! Would it be too naughty to describe it as another "fudge factor"? Cat :)
 
Ah! Would it be too naughty to describe it as another "fudge factor"? Cat :)
In general there are lots of things in science that can be described as a fudge factor. These are generally put in place as placeholders for something we don't understand, but if you aren't careful the underlying assumptions, approximations and placeholders can effectively become dogma, as the researchers who remembered why they were needed pass on and the new generation of researchers only ever experiences the approximate model.

In this sense a fudge factor isn't a wrong way to describe how things have been used. After all, the math is absurdly hard, which is why early work on GR, remember, had to be done via pen-and-paper calculations, as digital computers hadn't yet been invented (and analog computers, while older, tend to have to be built problem-specific for a given set of initial conditions). Having to work out all the math by hand meant that some fairly extreme assumptions were needed to get anywhere at all.

It wasn't until modern digital semiconductor-based computers became considerably more advanced that nonlinear mathematics became computationally approachable, allowing the foundations of chaos theory and breakthroughs in areas of physics governed by nonlinear systems of multivariate partial differential equations.

Our theories are unfortunately still littered with relics of this earlier time, when mathematics was limited by the capabilities of the human brain and the time available, and it is important to remember why we made these assumptions in the first place.

Probably the biggest disappointment that can truly be called a fudge factor is the general reaction to the evidence, announced in 1998, that the rate of expansion was accelerating. The work showed that the old model of cosmology was incorrect, but rather than exploring the ways such an acceleration could arise, the community relied on brute-force fitting and slapped the label "dark energy" onto it, without investigating the underlying assumptions of the models to see where things were going astray. This isn't to say no cosmologists were doing this, but they were a minority. The problem, looking back at how science has been done over time, is that grant funding became much more competitive and less compatible with the long, in-depth investigations needed to carefully analyze one's work. Research took on a publish-or-perish mentality where the focus shifted to publishing as many papers as possible, not to thorough and self-critical appraisal of the work. In effect, research shifted from quality to quantity.
 
In summary, what cosmologists are assuming is that they can ignore the effects of local spacetime variations when looking over sufficiently large distances, treating those variations as converging to some generalized solution.

This lets them treat the rate of expansion at large scales as effectively constant (the so-called Hubble rate thus being reduced to H_0), drastically simplifying the mathematics from a system of fundamentally nonlinear partial differential equations into a linear system of equations that is analytically solvable.

This is an awful assumption, because one of the defining properties of such differential equations is that the system has a unique solution for every possible initial condition: for a system of multivariate partial differential equations F(x, y, z, t), initial conditions differing only by x = 1 versus x' = 1 + 1*10^(-3150) still yield distinct solutions, and in a nonlinear system those deviations only ever amplify over time. In this sense the solutions must diverge, and the value at any point in space or time will, with sufficient precision, be unique everywhere.
Ok, that's logical. Small changes to initial conditions may or may not have large effects on homogeneity, no doubt.

This seems similar to the pendulum clock issue in the 1700s when Mason and Dixon had to spend many days calibrating their pendulum clock when at the Cape of Good Hope in order to provide useful timing for the transit of Venus. They were off by more than 2 minutes per day (2 min 40 s, IIRC). They reasoned, from rate measurements elsewhere, that the Earth as an oblate spheroid was the explanation. A simplified spherical model (initial condition) would be useless to determine planetary distances via transits.
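To put rough numbers on that (a back-of-the-envelope sketch of my own; the historical 2 min 40 s presumably also folds in altitude and the clock's own rating, so don't expect an exact match): a pendulum's period scales as 1/sqrt(g), so the latitude dependence of g alone shifts a clock's daily rate by tens of seconds between, say, London and the Cape of Good Hope.

```python
import math

def g_at_latitude(lat_deg: float) -> float:
    """Normal gravity (m/s^2) from a standard latitude formula (approximate)."""
    s = math.sin(math.radians(lat_deg)) ** 2
    s2 = math.sin(math.radians(2 * lat_deg)) ** 2
    return 9.780327 * (1 + 0.0053024 * s - 0.0000058 * s2)

def daily_rate_shift_seconds(lat_from: float, lat_to: float) -> float:
    """Seconds per day a pendulum clock gains (+) or loses (-) when moved."""
    ratio = math.sqrt(g_at_latitude(lat_to) / g_at_latitude(lat_from))
    return 86400.0 * (ratio - 1.0)

# Clock rated near London (~51.5 N), used near the Cape of Good Hope (~34 S):
print(f"{daily_rate_shift_seconds(51.5, -34.0):+.0f} s/day")  # roughly -70 s/day from latitude alone
```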

As a result we get mathematically forbidden nonsense like the idea that the Hubble rate can be a constant rather than a field H(x, y, z, t) whose value is uniquely determined at every point in space and time. To some extent cosmologists know this; it was pointed out more than a century ago, but it has been set aside on the grounds that the differences are minor.
Yes, and I think you'll find this was true even in Lemaitre's work, though not necessarily in his original paper. I recall seeing his picture in front of an accelerating-rate plot that became linear as it approached today. :)

At least in recent years cosmologists have started to recognize that they can't assume H is constant in time. Alas, they still ignore the spatial dependence, which severely affects studies of cosmologically distant sources: there is no reason to assume the density in the stress-energy tensor (i.e. the amount of stuff) is the same everywhere, and even minute differences should, for any such system of equations, only amplify over time.
Yes, but is this because the data necessary to define the varying rates is too wishy-washy? The Hubble Tension seems to bring this point out. The margins of error in a dozen or more methods to calculate Ho are still pretty broad, and the ones with low margins are contrary to other low-margin ones. [S&T recent issue shows these.]

Of course this problem has been able to persist because it has a self-reinforcing effect on measurements. If you use the assumption that space is effectively the same everywhere at large scales (varying only in time) to determine the distances of cosmological sources, say by measuring their redshifts, then you will always find that your results are consistent with your initial assumption. :pensive:
Yes, that would help resolve, perhaps, some of the Ho variations since the many methods, no doubt, look in independent directions. Perhaps the tension will be lessened with understanding the greater anisotropy you suggest, especially as a function of time.

This is to say the Friedmann equations violate the conservation of information. It is my suspicion that this will be found to be the source of the information paradox. >_>
Is this mainstream or a forthcoming thesis? :) Very interesting. The conservation of information, though important, is not something I've studied at any length.

It is quite a bit more complicated, though, as the No Big Crunch theorem relies on the Weak Energy Condition, roughly the requirement that no observer measures a negative energy density. Inflation complicates this, as it involves a phase transition in quantum fields which would release energy, so inflation is hard to analyze under this framework. It is possible, though, to have conditions in a low-density region that replicate the rapid acceleration expected of inflation, although that doesn't directly account for where the energy of the "big bang" comes from.
I assume your view of Inflation is the common one taking place in the first trillionth of a second, right? One might argue BBT should begin after Inflation given the somewhat ad hoc view some hold for Inflation theory. But this becomes a problem similar to what you're saying since doing so doesn't solve the problems BBT has without inflation.

Regardless, I also have to wonder what effects the implied nonlocality might have on large-scale structure. Can there be alternative models of inflation related to some other kind of phase transition within quantum fields? Really there are a lot of questions here, more than answers, so this seems unlikely to be resolved soon.
Agreed. It took some amazing science to add the nuclear force to our universe. :) Is DE the clue to a new force not incorporated yet in QM, but only manifest in the extremes near t = 0? [Just supposition on my part.]

Because the cosmological principle depends on the assumption that the observed CMB dipole is purely kinematic, so that a change of reference frame can be made to the presumed frame in which the CMB is homogeneous and isotropic, the model can be falsified if two dipoles at different cosmic times, one being the CMB, can be constructed and shown to be incompatible. For this assumption to be valid the distance of an individual source, so long as it is cosmologically distant, should not matter, so you can construct a dipole from cosmologically distant sources and compare it to the CMB. If they don't match, then the initial assumption that there is a special frame in which the CMB is uniform and isotropic is invalid, and you will need to account for cosmological effects that cannot be removed by a simple shift of reference frame.
Interesting, but wouldn't the quadrupole support the current kinematic view of the dipole?

Thanks again for your extensive explanation.
 
Ok, that's logical. Small changes to initial conditions may or may not have large effects on homogeneity, no doubt.

This seems similar to the pendulum clock issue in the 1700s when Mason and Dixon had to spend many days calibrating their pendulum clock when at the Cape of Good Hope in order to provide useful timing for the transit of Venus. They were off by more than 2 minutes per day (2 min 40 s, IIRC). They reasoned, from rate measurements elsewhere, that the Earth as an oblate spheroid was the explanation. A simplified spherical model (initial condition) would be useless to determine planetary distances via transits.
Yep, that is in principle the same root cause, related to initial conditions and assumptions and how those apply to systems of differential equations. Your choice of axioms, assumptions, and initial conditions all matter a lot when dealing with complex systems of differential equations; it is why you get complex chaotic phenomena like turbulence, dynamical bulk flows, vorticities, etc. in such systems. The Einstein field equations have the same mathematical properties, though they are notably much more complicated, so we should expect similar behaviour to appear in spacetime as well, given the right conditions.
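A quick numerical illustration of that sensitivity (a generic toy nonlinear system using numpy/scipy, nothing to do with the field equations themselves): two trajectories of the Lorenz system started 10^-12 apart end up completely different within a few dozen time units.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Classic Lorenz system, a standard example of sensitive dependence."""
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_span = (0.0, 40.0)
t_eval = np.linspace(*t_span, 4001)
a = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-10, atol=1e-12)
b = solve_ivp(lorenz, t_span, [1.0 + 1e-12, 1.0, 1.0], t_eval=t_eval, rtol=1e-10, atol=1e-12)

separation = np.linalg.norm(a.y - b.y, axis=0)
print(f"initial separation: {separation[0]:.1e}")   # 1.0e-12
print(f"separation at t=40: {separation[-1]:.2f}")  # of order the attractor size
```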


Yes, but is this because the data necessary to define the varying rates is too wishy-washy? The Hubble Tension seems to bring this point out. The margins of error in a dozen or more methods to calculate Ho are still pretty broad, and the ones with low margins are contrary to other low-margin ones. [S&T recent issue shows these.]

That is certainly part of it; there are sources of error coming from many different places. For example, a number of different assumptions used when constructing the distance ladder all contribute. There is a growing body of evidence that type Ia supernovae aren't as simple as has been assumed, as surveys of the Milky Way keep finding more and more strange white dwarfs which appear to have undergone runaway thermonuclear burning and survived. These sources are so faint that the objects discovered are all relatively nearby, within our own Milky Way, including at least one extremely exotic intermediate-state super-Chandrasekhar remnant about 3 kpc away, which has re-established hydrostatic equilibrium and can only last roughly 10,000-20,000 years, formed from a double-degenerate type Iax supernova (basically it looks to be the combined product of two very massive white dwarfs whose combined gravity has re-established hydrostatic equilibrium against the ongoing runaway fusion reactions). That is just one of the so-called standard candles, but it exemplifies the large uncertainties that generally haven't been fully resolved. Red giants are another problematic link in the distance ladder, as is the distinction between photometric redshift and spectroscopic redshift, which are equivalent only if the reddening is purely a product of distance, which is assumed to obey the cosmological principle. Another troublesome aspect of cosmology is that much of the data correction has historically been done via black-box algorithms prior to publishing, which haven't been independently verified or checked for errors, which is a whole other can of worms.
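To show how a standard-candle systematic propagates (a deliberately simplified sketch of my own with made-up numbers; real SN Ia analyses fit light-curve shape, colour and host corrections): if the assumed absolute magnitude of the candle is off by even 0.1 mag, every inferred distance, and hence the inferred H0, shifts with it.

```python
# Distance modulus: m - M = 5*log10(d / 10 pc); a calibration error in M moves every distance.
c_km_s = 299_792.458

def luminosity_distance_mpc(apparent_mag: float, absolute_mag: float) -> float:
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0) / 1.0e6  # pc -> Mpc

m_obs, z = 17.35, 0.05  # a made-up low-z supernova

for M_assumed in (-19.3, -19.2):  # a 0.1 mag error in the calibrated absolute magnitude
    d = luminosity_distance_mpc(m_obs, M_assumed)
    h0 = c_km_s * z / d  # low-z Hubble law
    print(f"M = {M_assumed:+.1f} -> d = {d:6.1f} Mpc, inferred H0 = {h0:5.1f} km/s/Mpc")
```

With these numbers a 0.1 mag calibration shift moves the inferred H0 by roughly 5%, comparable to the size of the Hubble tension.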

Yes, that would help resolve, perhaps, some of the Ho variations since the many methods, no doubt, look in independent directions. Perhaps the tension will be lessened with understanding the greater anisotropy you suggest, especially as a function of time.
Yeah, there have been some crude analyses checking for anisotropy among these standard candles through coordinate-based binning, and they appear to show a dipole as well. Of particular note, the rate of expansion (which, remember, is very crude, with high error bars because of small sample sizes) appears to be slowing in the direction of the so-called Great Attractor in the Zone of Avoidance, while in the opposite direction you actually get a far faster rate of expansion (again with the same severe data limitations). As the region where the universe appears to be slowing down is preferentially obscured by the Milky Way, that will on its own add a bias to the data.
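The kind of binning being described is roughly this (a toy hemispherical comparison of my own using numpy, not any published pipeline): split the sample by which hemisphere of the sky each object falls in, relative to some chosen axis, and compare the fitted expansion rates. The default axis below points roughly toward the CMB dipole in equatorial coordinates, purely for illustration.

```python
import numpy as np

def hemispherical_h0(ra_deg, dec_deg, z, d_mpc, axis_ra=168.0, axis_dec=-7.0):
    """Toy hemispherical H0 comparison along a chosen axis (arrays expected).

    Returns (H0_toward, H0_away) from a simple low-z Hubble-law fit per hemisphere.
    """
    c = 299_792.458  # km/s

    def unit(ra, dec):
        ra, dec = np.radians(ra), np.radians(dec)
        return np.stack([np.cos(dec) * np.cos(ra),
                         np.cos(dec) * np.sin(ra),
                         np.sin(dec)], axis=-1)

    toward = unit(ra_deg, dec_deg) @ unit(axis_ra, axis_dec) > 0
    results = []
    for mask in (toward, ~toward):
        # Least-squares slope of cz vs d through the origin: H0 = sum(cz*d) / sum(d^2)
        results.append(np.sum(c * z[mask] * d_mpc[mask]) / np.sum(d_mpc[mask] ** 2))
    return tuple(results)
```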

Is this mainstream or a forthcoming thesis? :) Very interesting. The conservation of information, though important, is not something I've studied at any length.
Well, it doesn't involve anything new, but I don't believe I have seen it explicitly spelled out in the "mainstream consensus". Part of the problem is that there are a lot of papers being published at any time, courtesy of the academic need to constantly publish new papers in order to maintain funding and a job, resulting in far too many for any given researcher to really read through; thus they fall on deaf ears, barring, if you are lucky, the abstract.

I assume your view of Inflation is the common one taking place in the first trillionth of a second, right? One might argue BBT should begin after Inflation given the somewhat ad hoc view some hold for Inflation theory. But this becomes a problem similar to what you're saying since doing so doesn't solve the problems BBT has without inflation.
Yes I'm using the standard definitions of inflation unless otherwise stated.

Agreed. It took some amazing science to add the nuclear force to our universe. :) Is DE the clue to a new force not incorporated yet in QM, but only manifest in the extremes near t = 0? [Just supposition on my part.]
Hmmm, I can't really say much of anything with confidence, but to me the properties of inhomogeneous and anisotropic cosmology within the self-consistent domain suggest that "dark energy" isn't really a new force, but rather the main driver behind what we call gravity. In particular, the expansion-driving field that emerges from self-consistency has a form similar to quantum fields, which suggests that what we call gravity might really be an analogue of the Casimir effect between any two persistent non-vacuum fluctuations, i.e. particles and the massive bodies they form. This would help resolve one of the greatest mysteries in physics, namely why gravity is so weak compared to the other fundamental forces: in this picture gravity would be an indirect effect caused by matter perturbing the allowed quantum vacuum states at "small" distances, at least relative to the size of the total unobservable universe. This is pretty much speculation based on similarities in observations, however. It might also be describable in terms of some scaling-up of Newton's third law to spacetime, where the attraction between two massive bodies slowing the rate of expansion between them is compensated by an acceleration of space away from those two bodies extending outwards towards infinity. The net repulsion would be smaller than the attraction, as the magnitude is spread out over a much greater distance, but the repulsive expansion component would be additive for all bodies, whereas the more localized attraction would not be in general. This is more or less speculation at this point, based on published simulation behaviour in the anisotropic and inhomogeneous domain, so don't read too much into it.

Interesting, but wouldn't the quadrupole support the current kinematic view of the dipole?
Actually, the quadrupole, octupole and higher moments are one of the strongest lines of evidence against the dipole being kinematic, since it is extremely unlikely for a kinematic dipole, which should arise from the net motion of the Earth, Solar System, Milky Way, Local Group, etc., to align with higher multipoles in the CMB, which can only be cosmological in origin under our known theories. On the other hand, these higher multipoles would naturally be expected to align with a cosmological dipole.
This is what has been termed the "axis of evil", because under the kinematic interpretation this alignment shouldn't happen; the probability is too low.
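For context, the dipole, quadrupole and octupole are just the l = 1, 2, 3 terms of the usual spherical-harmonic expansion of the CMB temperature map (standard notation, included only as a reminder):

```latex
\frac{\Delta T}{T}(\hat{n}) \;=\; \sum_{\ell=1}^{\infty} \sum_{m=-\ell}^{\ell} a_{\ell m}\, Y_{\ell m}(\hat{n}),
\qquad \ell = 1 \text{ (dipole)},\ \ \ell = 2 \text{ (quadrupole)},\ \ \ell = 3 \text{ (octupole)}
```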
 

Catastrophe

"Science begets knowledge, opinion ignorance.
Whatever. . . . . . . . . .

Time flows in the direction of increasing entropy.

If an egg breaks, it is extremely unlikely that the whites and yolks will recombine, and that the shell will reform around them. In fact, I would say impossible. Does that mean that the direction of time is absolutely irreversible?

Cat :)
 

DMH

Time moves forward based on the age of atoms coming into existence and then decaying. Time can never move backwards because light photons from the sun ensure atomic decay remains steady to create new biomes.
 

Catastrophe

"Science begets knowledge, opinion ignorance.
Why does time seem to flow only in one direction?

There are related suggestions that our time line is fixed, as in a block of spacetime. The problem then is why we run along the rails of time, only allowing our perception to follow this path.

The idea is not "why does time seem to flow only in one direction" - it is why are we limited to perceiving the Universe one frame after another? The mystery is why we cannot see it all at one go, when it is there somewhere.

According to this idea, 'dreams' - particularly unexplained dreams of the future - are cases of 'seeing' parts of the future, which are out there somewhere, while sleeping - because the limits are off. And 'ghosts' are visits by ourselves when we are dreaming.

The answer is in this mechanism which only allows us to see our world one little bit at a time - because it is too big and complex to understand all at once.

At least, that is how the story goes . . . . . .


Cat :)
 
