Gosh dang. That explanation left me speechless.

This is one aspect of time which is quite simple.
Sometimes entropy is called "the arrow of time". In theory, many aspects of physics are reversible, but in practice that reversibility breaks down because of this factor.
Here is an old, but valuable, example. Take a handful of salt crystals and a handful of sand. Here you have an ordered system: the salt and the sand are quite separate. Now hold your hands over a flat surface and open them. The salt and the sand will fall together and mix.
It is now going to take some effort to separate them. Suppose you pick up two handfuls without leaving any on the table. You will not find that you have all the salt in one hand and all the sand in the other. Even if you find suitable sieves, it will still be more work to separate them than it was to mix them initially. So time flowing in one direction is really entropy flowing in only one direction.
If you have an empty room (a vacuum) and bring in a pressurized container filled with air, the air will very quickly be evenly distributed throughout the room. It is theoretically possible (with some practical difficulty, to put it mildly) for all the air to go back into the container, or even just into one corner.
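To put a rough number on that "practical difficulty": if each molecule is independently equally likely to be found in either half of the room, the chance of catching all N of them in one chosen half at the same instant is (1/2)^N. A toy calculation, purely illustrative, with made-up molecule counts:

[CODE]
# Toy illustration only: probability that N gas molecules, each equally
# likely to be anywhere in the room, all sit in one chosen half at once.
from math import log10

def log10_prob_all_in_half(n_molecules: int) -> float:
    """Return log10 of (1/2)**N, the chance all molecules are in one half."""
    return n_molecules * log10(0.5)

for n in (10, 100, 6.02e23):  # a few molecules up to roughly a mole of gas
    print(f"N = {n:.3g}: probability ~ 10^{log10_prob_all_in_half(int(n)):.3g}")
[/CODE]

For ten molecules the odds are about one in a thousand; for a mole of gas the exponent is on the order of minus 10^23, which is why it never happens.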
Even if you temporarily increase the order of water molecules by putting them in the freezer, you are still increasing entropy overall, because the effort the freezer has to put in (and the heat it dumps back out) generates more entropy than the freezing removes.
As in the Dorothy Sayers story: "The Second Law of Thermodynamics", explained Wimsey, helpfully, "which holds the Universe in its path, and without which time would run backwards like a cinema film round the wrong way".
Or, just to drive the point home:
"Only when a system behaves in a sufficiently random way may the differences between past and future, and therefore irreversibility, enter into its description . . . The Arrow of Time is the manifestation of the fact that the future is not given, that, as the French poet Paul Valery emphasized 'time is a construction'".
Cat
punny

I find it interesting just how oblivious most people are as to how little we understand time. We know how simply it operates and how reliable it is, that is, until Einstein came along. But even with Einstein's work, while we can still claim we understand its behavior, we have no clue as to its essence.
It's a little like gravity, though with gravity we can at least hypothesize about things like gravitons.
"We know time flies, but where in the world are its wonderful wings?
Yeah, time is interesting in that there isn't even agreement about whether it is fundamental or not. Thanks to experiments such as the delayed choice experiment and Bell's inequality, it appears we can say that either space or time is not fundamental, though this neglects a third possibility, namely that quantum mechanics could be governed by a nonlocal hidden variable theory.
Interesting. There may be no "initial approximations", but I would bet a donut that there are some initial assumptions. DE would likely be at the heart of their assumptions, but we don't know what it is or whether it will decay at some point, or even reverse. This is why claims of "proofs" must be limited to mathematics, not physics.

The interesting proof shows that if we do not make any initial approximations to simplify the mathematics, the self-consistency criteria for the Einstein field equations show that the only remaining valid solutions with flat or open topologies in an initially accelerating universe are those which have no maximum spatial volume in any time-slice. In other words, from any frame of reference or time slice, the rate of expansion of the universe as a whole must always remain greater than zero, such that the total volume of the universe never decreases.
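For reference, the general Einstein field equations being referred to, with cosmological constant \Lambda, are

G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}, \qquad G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu},

where g_{\mu\nu} is the metric, R_{\mu\nu} and R are the Ricci tensor and scalar, and T_{\mu\nu} is the stress-energy tensor.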
Yes, this seems to suggest that the statistical possibility for a mess far outweighs the possibility of a sudden burst into order -- Humpty Dumpty doesn't self-reassemble. But I'm having to look far over my head to suggest anything that may be substantial, as I've never studied GR. Much of what I am saying is only to help sort out what may or may not be correct in the way I should be thinking.

What is interesting in terms of the arrow of time in this proof is that the conditions on the total spatial volume of the universe happen to have the same mathematical formalism as entropy, since there are countably finite possible outcomes for a flat or open cosmology. The implication is thus that empty space has an extremely high entropy and that the rate of expansion is equivalent to the maximization of entropy in space through time, everywhere in spacetime.
The biggest challenge in model making, AFAIK, is those initial conditions (values). It reminds me of a dumb joke that you'll regret me telling, but I'll assume (initial condition) that you know it, so it won't be my fault.

More intriguing from the perspective of unifying quantum mechanics and general relativity and identifying "dark energy" is that the proof holds regardless of the chosen initial values (including the cosmological constant value chosen)...
That's interesting. I wonder if there is a way to snag that hidden variable, which would likely greatly affect some things in QM.

... and as the total entropy of the universe governs the rate of expansion everywhere, gravity at cosmological scales must take on the formalism of a nonlocal hidden variable theory, consistent with Einstein's conception of quantum mechanics as a hidden variable theory under the criterion that the hidden variable(s) are nonlocal, in this case the variable in question being the total entropy of the Universe.
I would enjoy learning about what this means, perhaps in a new thread. The only de Sitter solution I am aware of, at least stand-alone, was his early solution to GR which offered an explanation for redshift in a static universe, with the one condition of a universe without any mass. [Hubble worked with de Sitter at a major conference and, IMO, this is one reason Hubble never committed himself to expansion.]

This effectively results in a generalized analog of the AdS/CFT (anti-de Sitter space / conformal field theory) correspondence which, for any universe which is initially expanding and not topologically closed, will always hold regardless of the initial values.
Wow, that's a dandy! I assume it makes testable predictions, right?

(AdS/CFT is the idea from string theory where quantum entanglement and gravity are equivalent.)
In principle, from the perspective of Noether's theorem, it actually becomes pretty easy to see why this outcome emerges: acceleration effectively breaks time symmetry, causing the 1st and 3rd laws of thermodynamics to emerge automatically. The 2nd law of thermodynamics, and thus the arrow of time, then emerges from the criterion that any mathematical theory be internally self-consistent for all possible values.
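For reference, the piece of Noether's theorem being invoked is that time-translation symmetry is what gives energy conservation: along solutions, the Hamiltonian H built from a Lagrangian L(q, \dot{q}, t) obeys

\frac{dH}{dt} = -\frac{\partial L}{\partial t},

so H is conserved exactly when L has no explicit time dependence; once time symmetry is broken, that conservation no longer follows automatically.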
Would that be due to the freedom found in the use of axioms? I'm struggling to find things that might stick to the wall, admittedly.

What is surprising to me is that there doesn't appear to have been much, if any, work in the literature to apply the internal consistency criterion, corresponding to Gödel's incompleteness theorems which must hold for any axiom-based logical system, to the general Einstein field equations.
I wonder if any minimum entropy condition is stable? It's only logical that gases would prefer to mix, leading to my favorite 2nd law phrase: "Heat won't flow from a cooler to a hotter; you can try it if you like but you far better notter!"

Of particular note, the Friedmann-Lemaître-Robertson-Walker metric solution becomes particularly special as it corresponds to an unstable minimum entropy from which all solutions diverge, i.e. it is a universal repeller. ... With any deviation from these conditions the system can only fall either forward or backward in time mathematically, resulting in an irreversible arrow of time associated with an expanding or contracting universe.
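For reference, the FLRW metric referred to is the exactly homogeneous and isotropic line element

ds^2 = -c^2 dt^2 + a(t)^2 \left[ \frac{dr^2}{1 - k r^2} + r^2 \left( d\theta^2 + \sin^2\theta\, d\phi^2 \right) \right],

with scale factor a(t) and curvature parameter k; it is this maximally symmetric special case that is being described as an unstable "repeller".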
Would that explain Inflation? Would an uber-entropy state at t=0, or close to it, necessarily cascade to higher entropy states?

In this context, notably, Occam's razor shows that dark energy is likely unnecessary, as any initial condition in an initially accelerating universe (meaning one with any cosmological constant value, positive or negative) will always result in net acceleration of expansion at cosmological scales.
Impressive!

The arrow of time is an automatic consequence of the breaking of time symmetry, which is a prerequisite for any kind of structure.
One caveat that makes this incomplete is that the current proof for this theorem relies on the infinite-extent conditions of a flat or open geometry to prove that all other solutions to the Einstein field equations in this domain are logically inconsistent in any expanding or contracting universe. To generalize this proof, it would need to be extended into the closed-solution domain. However, there is reason to believe such a proof exists, thanks to the mathematics of black hole thermodynamics linking the surface area of any event horizon to its total entropy.
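For reference, the area-entropy link mentioned here is the Bekenstein-Hawking formula, which assigns a horizon of area A the entropy

S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar}.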
Thanks for sharing it!

...But if true, this looks like it may be a much more fundamental explanation for the arrow of time.
Oof, yeah, that was supposed to be assumptions, not approximations. That typo may subconsciously have been me relating to one of the main problems with the current model of cosmology, which is the assumption that any deviation from perfect isotropy is small enough to be negligible, which the mathematical proof for the general Einstein field equations shows can never be the case.

Interesting. There may be no "initial approximations", but I would bet a donut that there are some initial assumptions. DE would likely be at the heart of their assumptions, but we don't know what it is or whether it will decay at some point, or even reverse. This is why claims of "proofs" must be limited to mathematics, not physics.
Good question. Maybe, but I am skeptical that we could obtain such information, thanks to computational irreducibility (the computational extension of Gödel's incompleteness theorems). That said, there may be observations which can be made in space and time which could greatly constrain such hidden variables, perhaps enough to get an answer. We will not know unless we look!

That's interesting. I wonder if there is a way to snag that hidden variable, which would likely greatly affect some things in QM.
Interesting. One thing I have been meaning to look into further is some of the early work on GR, to try and get a better understanding of how things developed as they did. The main problem for me is that, with the pandemic situation, I missed two semesters in a row in grad school when they tried to have in-person classes, which led to me becoming unenrolled. I have since managed to retroactively get my master's degree in physics, as I had already met all the requirements for that degree, but without institutional access, pre-internet papers are hard to find.

I would enjoy learning about what this means, perhaps in a new thread. The only de Sitter solution I am aware of, at least stand-alone, was his early solution to GR which offered an explanation for redshift in a static universe, with the one condition of a universe without any mass. [Hubble worked with de Sitter at a major conference and, IMO, this is one reason Hubble never committed himself to expansion.]
It better! I don't really have the computational power to run the mathematics for the full nonlinear general Einstein field equations to check whether some of these predictions are actually observable, but otherwise this is basically just the same unmodified Einstein field equations for three spatial dimensions and one time dimension, run for arbitrary initial conditions. As a consequence, in principle all of general relativity should be automatically reproduced. The main surprise was learning that there hasn't been a generalized check of the validity of the underlying assumptions, specifically the so-called cosmological principle.

Wow, that's a dandy! I assume it makes testable predictions, right?
Probably, after all you can always add another axiom to make any solution internally consistent. That, after all, is exactly what the combination of dark energy and the assumption that the CMB dipole is purely kinematic in origin amounts to. The latter sounds like a reasonable assumption until you learn that in general there should be a cosmological component to the CMB dipole for everything in the observable universe, and the conditions for all these components to exactly cancel out correspond to a minimum entropy solution in information theory; that is to say, it requires the violation of the second law of thermodynamics. With the right added axiom, however, you can always fix that condition, as Gödel's incompleteness theorem says.

Would that be due to the freedom found in the use of axioms? I'm struggling to find things that might stick to the wall, admittedly.
I would argue perhaps it is the other way around, and entropy is defined by its property of always increasing for the forward passage of time? After all, entropy in statistical mechanics is ultimately a consequence of entropy in information theory, which relates to the number of possible configurations of bits.

I wonder if any minimum entropy condition is stable? It's only logical that gases would prefer to mix, leading to my favorite 2nd law phrase: "Heat won't flow from a cooler to a hotter; you can try it if you like but you far better notter!"
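For reference, the link between the two notions of entropy mentioned above is Boltzmann's relation for \Omega equally likely microstates together with Shannon's formula for a distribution p_i over configurations of bits:

S = k_B \ln \Omega, \qquad H = -\sum_i p_i \log_2 p_i.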
That was the conclusion of Matthew Kleban and Leonardo Senatore cited in the paper. However, I'm not yet completely convinced of that, as it seems too simplistic for my liking. Frankly, I'm wondering if inflation is needed at all, as if the CMB is not isotropic and uniform the need for inflation largely disappears. The ugly reality of the situation in modern cosmology is that we assume the CMB is uniformly isotropic everywhere, i.e. that there are no cosmological components of the CMB dipole.

Would that explain Inflation? Would an uber-entropy state at t=0, or close to it, necessarily cascade to higher entropy states?
I hope we can get some answers on the unification between GR and quantum mechanics from gravitational waves. Of particular note, I'm excited about the ongoing decaying orbital periodicity of SDSSJ143016.05+230344.4. Unfortunately it is unfolding too quickly for LISA to be ready in time, but the best-fit model for the observations of the galaxy's AGN suggests two SMBHs which, over an interval of 3 years, have had their orbital period drop from around one orbit every year to one complete orbit in less than a month. ... As for gravitational waves, they probably should carry away some entropy, but measuring that is going to be hard since LIGO, Virgo, and other Earth-based gravitational wave observatories can't view such huge waves passing by. Pulsar timing arrays are our only hope!

Perhaps our ability to study merging BHs might help reveal Hawking's entropy hypothesis for BHs. Combining BHs will produce less surface area than the total of the two before merger. [I've argued, as a novice, that the merger's delta entropy should do more than produce a beautiful gravity wave. But the EM emitted, if any, may be too feeble for us to see due to the great distances these mergers take place from us.]
Right. Yogi Berra is famous for spreading rumors such as... one "can see a lot by just looking!" Perhaps the JWST will assist in this worthy study.

That said, there may be observations which can be made in space and time which could greatly constrain such hidden variables, perhaps enough to get an answer. We will not know unless we look!
I find it helpful to explain the BBT by starting from a historical perspective. The theory was never about starting from t=0 or from a singularity; it came from what we observe and rewinding the clock from there.

Interesting. One thing I have been meaning to look into further is some of the early work on GR...
Ugh, I suppose the silver lining is that misery loves company. I've had to run a business under Covid. [I regret that the credibility of medical science has suffered for seemingly rejecting some robust data. The fact that it's an aerosol, but was only recently recognized as such in spite of all the early efforts by aerosol scientists, is one of the worst moments to come to light, IMO.]

...to try and get a better understanding of how things developed as they did. The main problem for me is that, with the pandemic situation, I missed two semesters in a row in grad school when they tried to have in-person classes, which led to me becoming unenrolled. I have since managed to retroactively get my master's degree in physics, as I had already met all the requirements for that degree, but without institutional access, pre-internet papers are hard to find.
It seems more likely than not it will hold. The isotropy in the CMBR would support this, right?

The main surprise was learning that there hasn't been a generalized check of the validity of the underlying assumptions, specifically the so-called cosmological principle.
I assumed the large-scale web structure of galaxy clusters demonstrates more for isotropy than otherwise. So is the margin of error too great to draw this conclusion?

If there is one big testable prediction, it would be that, while we know there is a net acceleration of the expansion of the universe, that expansion cannot be isotropic.
Well, "if it were easy, then..."This was what actually led me to investigate this mathematics frankly everything else has been a complete surprise which seems to good to be true. However thus far I can't falsify any of it at least not without formulate it as a NP problem which will not be computationally reducible.
I've often wondered if the Hubble flow, speaking of the dipole, could not be considered a preferred frame, but that GR just can't offer a means of distinction. Perhaps the mystery of how to understand time is in this flow.

Probably, after all you can always add another axiom to make any solution internally consistent. That, after all, is exactly what the combination of dark energy and the assumption that the CMB dipole is purely kinematic in origin amounts to. The latter sounds like a reasonable assumption until you learn that in general there should be a cosmological component to the CMB dipole for everything in the observable universe, and the conditions for all these components to exactly cancel out correspond to a minimum entropy solution in information theory; that is to say, it requires the violation of the second law of thermodynamics. With the right added axiom, however, you can always fix that condition, as Gödel's incompleteness theorem says.
Well, you likely mean a modified Tychonic model does work. [Ptolemy was falsified with Galileo's discovery of all those phases for Venus (and Mercury).] GR math is fine with it, though all the required fictitious forces make it... abominable!

However, by the same logic the Ptolemaic model of the solar system, with the Earth at the center of the solar system, is also a perfectly valid solution...
Well, as my friend would tell me when I wax headedly, "of all your ideas, that's one of them!" It sounds interesting. Good luck.

The question becomes: what is the conceptually simplest model which explains all our observations thus far? In this light the cosmological principle must go and internal consistency should be enforced on GR. (My gut feeling is that the quanta of gravity will come from the irreducible fluctuations in the off-diagonal terms of the metric tensor not canceling out, but that is just opinion at this point; it needs hard data modeling to be anything more.)
I'm unclear what problem you see with the cosmological principle. Are there regions that defy the "known" constants of physics? The anisotropy is quite tiny, and it's necessary for things like stars and planets.

The question becomes: what is the conceptually simplest model which explains all our observations thus far? In this light the cosmological principle must go and internal consistency should be enforced on GR.
Is there a general public kind of explanation for where GR exhibits spots of interest or even concern? The first "solutions" for GR seemed to be mathematically valid but also erroneous. [Schwarzschild was one of the first to present one solution, which gave us black holes. How nuts was that one!] About how many possible solutions might there be for GR? Is that even a fair question? Engineering does a lot with just F=ma, so I assume there is great similarity in how much stuff can get baked starting with just flour.

(My gut feeling is that the quanta of gravity will come from the irreducible fluctuations in the off-diagonal terms of the metric tensor not canceling out, but that is just opinion at this point; it needs hard data modeling to be anything more.)
The problem that was found in BBT was that it was too isotropic. Apparently, not that I understand any of it, the very early influence quantum fluctuations had would require a less uniform temperature, more anisotropy. Something was needed to diminish the hot spots. But at those super dense conditions of "pure energy" (Spock), perhaps something strange like tachyons contributed to the fast transfers, but that's purely imaginative. Inflation seems to be plausible given the mechanism presented to cause it, apparently.

Frankly, I'm wondering if inflation is needed at all, as if the CMB is not isotropic and uniform the need for inflation largely disappears.
I'm curious what you mean. The dipole would necessarily need to exist unless we are very unique in being glued to the Hubble Flow, right? Are you saying the measured shifts vary with distance, separate from what a normal dipole should exhibit?

The ugly reality of the situation in modern cosmology is that we assume the CMB is uniformly isotropic everywhere, i.e. that there are no cosmological components of the CMB dipole...
However, given the work of Nathan J. Secrest et al. 2021 ApJL 908 L51, we can largely rule out the purely kinematic CMB dipole assumption, i.e. that no cosmological dipole components are imprinted into the CMB, to 4.9 sigma, or a 1 in 2 million chance that the mismatch in both direction and magnitude between the dipole in the CMB and the dipole within 1.36 million cosmologically distant quasars measured by CatWISE is a statistical fluke. If we can get the full Square Kilometre Array we should actually be able to test this to 5 sigma, but based on the evidence thus far the modern model of cosmology is scientifically problematic.
Here's just my personal take on this question, which seems to present ambiguity from time to time:

Excuse me, please, but I have a problem understanding isotropic. I know what it means, but, applied to the Universe, does it mean that all currant buns are isotropic?
The cosmological principle might seem natural when the explanation used to justify it is that the laws of physics are the same everywhere; however, this is not mathematically what cosmologists mean when they apply the cosmological principle. Specifically, the assumption they refer to is the idea that, for sufficiently symmetric distributions of matter and other forms of energy, the off-diagonal components of the stress-energy tensor should effectively be the same, canceling each other out at large scales.

I'm unclear what problem you see with the cosmological principle. Are there regions that defy the "known" constants of physics? The anisotropy is quite tiny, and it's necessary for things like stars and planets.
Indeed, the problems with the Einstein field equations from the perspective of mathematical theory are quite simple; however, the subject is far removed from even most cosmologists, since the cosmological community hasn't really used the general Einstein field equations since Friedmann and others developed the Friedmann-Lemaître-Robertson-Walker metric (FLRW metric) and their associated simplified Friedmann equations. After all, the Friedmann equations are an exact solution to the Einstein field equations, so surely perturbation theory will let them be used as a general guideline for cosmological models. The catch, however, is that the assumption they generally always make, that deviations from the FLRW solution become negligibly small at cosmological distances, which has become known as the cosmological principle, is not valid and mathematically can be shown to be forbidden by the criteria for logical internal consistency. This is to say the Friedmann equations violate the conservation of information. It is my suspicion that this will be found to be the source of the information paradox. >_>

Is there a general public kind of explanation for where GR exhibits spots of interest or even concern? The first "solutions" for GR seemed to be mathematically valid but also erroneous. [Schwarzschild was one of the first to present one solution, which gave us black holes. How nuts was that one!] About how many possible solutions might there be for GR? Is that even a fair question? Engineering does a lot with just F=ma, so I assume there is great similarity in how much stuff can get baked starting with just flour.
That is true under the assumption that the FLRW metric is applicable. I am, however, increasingly aware of theoretical problems in these underlying assumptions which make me question their validity. It is quite a bit more complicated, however, as the No Big Crunch theorem relies on the Weak Energy Condition, which is that, accounting for expansion, energy is conserved. Inflation allows this to be violated, as inflation involves a phase transition in quantum fields which would release energy, so inflation is hard to analyze under this framework. However, it is possible to have conditions which, in a low-density region, might replicate the rapid acceleration expected of inflation, though that doesn't directly account for where the energy of the "big bang" comes from. Regardless, I also have to wonder what effects the implied nonlocality might have on large-scale structure. Can there be alternative models to inflation related to some other kind of phase transition within quantum fields? Really, there are a lot of questions here, more than answers, so this seems unlikely to be resolved soon.

The problem that was found in BBT was that it was too isotropic. Apparently, not that I understand any of it, the very early influence quantum fluctuations had would require a less uniform temperature, more anisotropy. Something was needed to diminish the hot spots. But at those super dense conditions of "pure energy" (Spock), perhaps something strange like tachyons contributed to the fast transfers, but that's purely imaginative. Inflation seems to be plausible given the mechanism presented to cause it, apparently.
Well, yes, the dipole clearly exists; however, there are a number of sources which could contribute to the observed dipole, particularly in the general conditions where the rate of expansion is a field that varies everywhere in space and time based on both local and global/universal effects. The current cosmological model, which assumes the rate of expansion is constant everywhere and the large-scale distribution of matter is the same at large scales, and which thereby simplifies the math so there is no need to account for path dependence in the measured redshift and thus in the rate of acceleration, only works if and only if this is a purely kinematic dipole. For any other dipole component there will need to be a directionally dependent correction function to account for local variations in matter density and bulk flows, which will otherwise result in large systematic errors in interpretations of redshift measurements.

I'm curious what you mean. The dipole would necessarily need to exist unless we are very unique in being glued to the Hubble Flow, right? Are you saying the measured shifts vary with distance, separate from what a normal dipole should exhibit?
Technically yes, but cosmologists have assumed small deviations from isotropy have a negligible effect on large-scale structure. Mathematically this is much more problematic, as the results seem to indicate it doesn't hold in the way it has been argued it should, which makes the necessary math much more complicated.

Excuse me, please, but I have a problem understanding isotropic. I know what it means, but, applied to the Universe, does it mean that all currant buns are isotropic?
Do currant buns (i.e., Universe) have to have currants (galaxies et cetera) present in a non-random configuration, to be isotropic?
Cat
Note: This is not off topic, as isotropy is one of the assumptions.
Ah! Would it be too naughty to describe it as another "fudge factor"? Cat

The cosmological principle might seem natural when the explanation used to justify it is that the laws of physics are the same everywhere; however, this is not mathematically what cosmologists mean when they apply the cosmological principle. Specifically, the assumption they refer to is the idea that, for sufficiently symmetric distributions of matter and other forms of energy, the off-diagonal components of the stress-energy tensor should effectively be the same, canceling each other out at large scales.
Mathematically this has a lot of problems, as there is nothing within the Einstein field equations that says this should be valid; after all, the local density of matter and the curvature that results from it is very different from changes in the underlying laws of physics. Really, what this has to do with is path dependence. On one hand, we know that the Einstein field equations are path dependent, as that is how you get gravitational lensing and gravity, yet they assume that at large distances all the deviations from matter in the universe will effectively cancel out. Since the Einstein field equations are a system of multivariate partial differential equations, this is mathematically as valid as assuming that the wind speeds across a planet's atmosphere within the Navier-Stokes equations will cancel out, letting you ignore the effects of wind on weather. This is fundamentally flawed in many ways, with an entire field of mathematics dedicated to the study of the properties of systems of multivariate partial differential equations known as chaos theory.
In summary, what cosmologists are assuming is that they can ignore the effects of local spacetime variations when looking over sufficiently large distances, as they treat those variations as converging to some generalized solution.
This lets them treat the rate of expansion at large scales as effectively constant (the so-called Hubble rate thus being reduced to H_0), drastically simplifying the mathematics from a system of fundamentally nonlinear partial differential equations into a linear system of equations that is analytically solvable.
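For reference, under the FLRW assumption the expansion rate collapses to a single function of time, governed by the first Friedmann equation:

H(t)^2 \equiv \left( \frac{\dot{a}}{a} \right)^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3},

with H_0 simply the value of H(t) today.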
This is an awful assumption, because one of the defining properties of such differential equations is that every system has a unique solution for every single possible initial condition, such that for any system of multivariate partial differential equations F(x,y,z,t), initial conditions differing only by x=1 versus x'=1+1*10^(-3150) will each yield a unique solution, with these deviations only ever amplifying over time. In this sense all solutions must always diverge, and thus the value at any point in space or time will, with sufficient precision, be unique everywhere.
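As a toy illustration of that kind of sensitivity to initial conditions, here is the classic logistic map; it stands in for the vastly more complicated field equations, so it is purely illustrative of how nonlinear systems amplify tiny differences:

[CODE]
# Toy illustration of sensitive dependence on initial conditions in a simple
# nonlinear system (the logistic map). This is not a GR calculation.
def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate x -> r * x * (1 - x) and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-12)  # perturb the start by one part in 10^12

for step in (0, 10, 20, 30, 40, 50, 60):
    print(f"step {step:2d}: |difference| = {abs(a[step] - b[step]):.3e}")
[/CODE]

By around step 40 the two trajectories differ by order one, even though they started identical to twelve decimal places.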
Thus the issue with the cosmological principle has to do with cosmologists ignoring the mathematical properties of the class of functions of which the Einstein field equations are a subset, just to make the math easier.
As a result we get mathematically forbidden nonsense like the idea that the Hubble rate can be a constant, rather than being a field H(x,y,z,t) which has a unique value everywhere in space and time, uniquely determined for a given x, y, z and t. To some extent cosmologists know this, as it was pointed out more than a century ago, but they have chosen to ignore it by justifying these differences as minor. However, what the No Big Crunch theorem spells out explicitly is that this assumption necessarily results in the Einstein field equations becoming logically self-inconsistent. (That is to say, these assumptions violate the laws of thermodynamics, as you are throwing away information.)
At least in recent years cosmologists have started to recognize that they can't assume H is constant in time; alas, they still ignore the spatial component, which severely impacts the results of studies of cosmologically distant sources, as there should not be any reason to assume the density of the stress-energy tensor (a.k.a. the amount of stuff) is the same everywhere, since even minute differences should, for any such system of equations, only amplify over time.
Of course, this problem has been able to persist since it has a self-reinforcing effect on measurements due to the initial assumptions: after all, if you use the assumption that space is effectively the same everywhere at large scales, with the exception of time, to determine the distance of cosmological sources (by, say, measuring their redshifts), then you will always find that your results are consistent with your initial assumption.
The falsification test used by Nathan Secrest et al. 2021 indeed looked to test the validity of the assumption of a purely kinematic dipole, following the procedure for falsification outlined by Ellis & Baldwin 1984.
Because the cosmological principle depends on the assumption that the observed CMB dipole must be purely kinematic, so that a change of reference frame can be made to the presumed frame where the CMB should be homogeneous and isotropic, this model can be falsified if two dipoles at different cosmic times, one being the CMB, can be constructed and shown to be incompatible. For this assumption to be valid, the distance of an individual source, so long as it is cosmologically distant, should not matter, so you can construct a dipole of cosmologically distant sources which can be compared to the CMB. If they don't match, then your initial assumption that there is a special frame in which the CMB is uniform and isotropic is invalid, and thus you will need to account for cosmological effects which cannot be removed by a simple shift in reference frame.
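For reference, the expected amplitude of a purely kinematic number-count dipole in the Ellis & Baldwin test, for sources with integral counts N(>S) \propto S^{-x} and spectra S \propto \nu^{-\alpha} as seen by an observer moving at speed \beta c, is

\mathcal{D}_{\mathrm{kin}} = \left[\, 2 + x(1+\alpha) \,\right] \beta,

and it is the mismatch between the observed quasar dipole and this expectation, in both amplitude and direction, that the test above reports.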
Note that this test doesn't tell you anything beyond that the cosmological components of the CMB dipole are nonzero, as there are a number of potential ways in which cosmological effects can induce a dipole onto the sky, ranging from over-densities inherent to the CMB to over-densities along the path the CMB light took to reach us. Further follow-up to determine the source of the dipole components would be needed to construct a correction function before any cosmological data could be reanalyzed.
The reason this hasn't been done earlier is that it takes millions of independent sources all across the sky, once any local signal contamination has already been removed, which is more data than most cosmology studies come close to using.
In general there are lots of things in science which can be described in terms of a fudge factor. Generally these are initially put in place as placeholders for something we don't understand, but if you aren't careful the underlying assumptions, approximations, and placeholders can effectively become dogma, as the researchers who remembered why they were needed have passed on, with the new generation of researchers only experiencing the approximate model.

Ah! Would it be too naughty to describe it as another "fudge factor"? Cat
Ok, that's logical. Small changes to initial conditions may or may not have large effects on homogeneity, no doubt.

In summary, what cosmologists are assuming is that they can ignore the effects of local spacetime variations when looking over sufficiently large distances, as they treat those variations as converging to some generalized solution.
Yes, and I think you'll find this was true even in Lemaître's work, though not necessarily in his original paper. I recall seeing his picture in front of an accelerating-rate plot that became linear as it approached today.

As a result we get mathematically forbidden nonsense like the idea that the Hubble rate can be a constant, rather than being a field H(x,y,z,t) which has a unique value everywhere in space and time, uniquely determined for a given x, y, z and t. To some extent cosmologists know this, as it was pointed out more than a century ago, but they have chosen to ignore it by justifying these differences as minor.
Yes, but is this because the data necessary to define the varying rates is too wishy-washy? The Hubble Tension seems to bring this point out. The margins of error in a dozen or more methods to calculate H_0 are still pretty broad, and the ones with low margins are contrary to other low-margin ones. [S&T recent issue shows these.]

At least in recent years cosmologists have started to recognize that they can't assume H is constant in time; alas, they still ignore the spatial component, which severely impacts the results of studies of cosmologically distant sources, as there should not be any reason to assume the density of the stress-energy tensor (a.k.a. the amount of stuff) is the same everywhere, since even minute differences should, for any such system of equations, only amplify over time.
Yes, that would help resolve, perhaps, some of the H_0 variations, since the many methods, no doubt, look in independent directions. Perhaps the tension will be lessened with understanding the greater anisotropy you suggest, especially as a function of time.

Of course, this problem has been able to persist since it has a self-reinforcing effect on measurements due to the initial assumptions: after all, if you use the assumption that space is effectively the same everywhere at large scales, with the exception of time, to determine the distance of cosmological sources (by, say, measuring their redshifts), then you will always find that your results are consistent with your initial assumption.
Is this mainstream or a forthcoming thesis? Very interesting. The conservation of information, though important, is not something I've studied at any length.

This is to say the Friedmann equations violate the conservation of information. It is my suspicion that this will be found to be the source of the information paradox. >_>
I assume your view of Inflation is the common one taking place in the first trillionth of a second, right? One might argue BBT should begin after Inflation, given the somewhat ad hoc view some hold of Inflation theory. But this becomes a problem similar to what you're saying, since doing so doesn't solve the problems BBT has without inflation.

It is quite a bit more complicated, however, as the No Big Crunch theorem relies on the Weak Energy Condition, which is that, accounting for expansion, energy is conserved. Inflation allows this to be violated, as inflation involves a phase transition in quantum fields which would release energy, so inflation is hard to analyze under this framework. However, it is possible to have conditions which, in a low-density region, might replicate the rapid acceleration expected of inflation, though that doesn't directly account for where the energy of the "big bang" comes from.
Agreed. It took some amazing science to add the nuclear force to our universe. Is DE the clue to a new force not yet incorporated in QM, but only manifest in the extremes near t = 0? [Just supposition on my part.]

Regardless, I also have to wonder what effects the implied nonlocality might have on large-scale structure. Can there be alternative models to inflation related to some other kind of phase transition within quantum fields? Really, there are a lot of questions here, more than answers, so this seems unlikely to be resolved soon.
Interesting, but wouldn't the quadrupole support the current kinematic view of the dipole?

Because the cosmological principle depends on the assumption that the observed CMB dipole must be purely kinematic, so that a change of reference frame can be made to the presumed frame where the CMB should be homogeneous and isotropic, this model can be falsified if two dipoles at different cosmic times, one being the CMB, can be constructed and shown to be incompatible. For this assumption to be valid, the distance of an individual source, so long as it is cosmologically distant, should not matter, so you can construct a dipole of cosmologically distant sources which can be compared to the CMB. If they don't match, then your initial assumption that there is a special frame in which the CMB is uniform and isotropic is invalid, and thus you will need to account for cosmological effects which cannot be removed by a simple shift in reference frame.
Yep, that is in principle the same root cause, related to initial conditions and assumptions and how those apply to systems of differential equations. Your choice of axioms, assumptions, and initial conditions all matter a lot when dealing with complex systems of differential equations; it is why you get complex chaotic phenomena like turbulence, dynamical bulk flows, vorticities, etc. in such systems. The Einstein field equations have the same mathematical properties, though they are notably much more complicated, so we should expect similar properties to appear in spacetime as well, given the right conditions.

Ok, that's logical. Small changes to initial conditions may or may not have large effects on homogeneity, no doubt.
This seems similar to the pendulum clock issue in the 1700s when Mason and Dixon had to spend many days calibrating their pendulum clock when at the Cape of Good Hope in order to provide useful timing for the transit of Venus. They were off by more than 2 minutes per day (2 min 40 s, IIRC). They reasoned, from rate measurements elsewhere, that the Earth as an oblate spheroid was the explanation. A simplified spherical model (initial condition) would be useless to determine planetary distances via transits.
That is certainly part of it; there are sources of error coming from many different places. For example, there are a number of different assumptions used when constructing the distance ladder that all contribute. There is a growing body of evidence that type Ia supernovae aren't as simple as has been assumed, as surveys of the Milky Way keep finding more and more strange white dwarf stars which appear to have undergone runaway thermonuclear burning and survived. These sources are so faint that the objects discovered are all relatively nearby within our own Milky Way, including at least one extremely exotic intermediate-state super-Chandrasekhar remnant about 3 kpc away that has re-established hydrostatic equilibrium, can only last between ~10,000-20,000 years, and formed from a double-degenerate type Iax supernova (basically it looks to be the combined product of two very massive white dwarfs whose combined gravity has re-established hydrostatic equilibrium with the ongoing runaway fusion reactions). That is regarding only one of the so-called standard candles, but it exemplifies the large uncertainties that generally haven't been fully resolved; red giants are another problematic link in the distance ladder. There is also the distinction between photometric redshift and spectroscopic redshift, which are equivalent if and only if the reddening is a product purely of distance, which is assumed to obey the cosmological principle. Another troublesome aspect in cosmology is that much of the data correction has historically been done via black-box algorithms prior to publishing which haven't been independently verified or checked for errors, which is a whole other can of worms.

Yes, but is this because the data necessary to define the varying rates is too wishy-washy? The Hubble Tension seems to bring this point out. The margins of error in a dozen or more methods to calculate H_0 are still pretty broad, and the ones with low margins are contrary to other low-margin ones. [S&T recent issue shows these.]
Yeah, there have been some crude analyses to check for anisotropy among these standard candles through the use of coordinate-based binning, and they appear to show a dipole as well. Of particular note, the regions where the rate of expansion (which, remember, is very crude with high error bars because of small sample sizes) appears to be slowing lie in the direction of the so-called Great Attractor in the zone of avoidance, while in the opposite direction you actually get a far faster rate of expansion (again with the same severe data limitations). As the region where the universe appears to be slowing down is preferentially obscured by the Milky Way, that will on its own add a bias to the data.

Yes, that would help resolve, perhaps, some of the H_0 variations, since the many methods, no doubt, look in independent directions. Perhaps the tension will be lessened with understanding the greater anisotropy you suggest, especially as a function of time.
Well, it doesn't involve anything new, but I don't believe I have seen it explicitly spelled out in the "mainstream consensus". Really, part of the problem is that there are a lot of papers being published at any time, courtesy of the academic need to constantly publish new papers in order to maintain funding and a job, resulting in far too many for any given researcher to really read through; thus they fall on deaf ears barring, if you are lucky, the abstract.

Is this mainstream or a forthcoming thesis? Very interesting. The conservation of information, though important, is not something I've studied at any length.
Yes, I'm using the standard definitions of inflation unless otherwise stated.

I assume your view of Inflation is the common one taking place in the first trillionth of a second, right? One might argue BBT should begin after Inflation, given the somewhat ad hoc view some hold of Inflation theory. But this becomes a problem similar to what you're saying, since doing so doesn't solve the problems BBT has without inflation.
Hmmm, I can't really say much, if anything, with confidence, but to me the properties of inhomogeneous and anisotropic cosmology within the self-consistent domain suggest that "dark energy" isn't really a new force but rather is the main force responsible for gravity. In particular, the expansion-driving field that emerges from self-consistency has a form similar to quantum fields, which suggests what we call gravity might really be a representation of an analog of the Casimir effect between any two persistent non-vacuum fluctuations, a.k.a. particles, and the massive bodies that they form. This would help resolve one of the greatest mysteries in physics, namely the question of why gravity is so weak compared to the other fundamental forces, as in this picture gravity would be an indirect effect caused by matter perturbing the allowed quantum vacuum states at "small" distances, at least relative to the size of the total unobservable universe. This is pretty much speculation based on similarities in observations, however. It might also be describable in terms of some scaling up of Newton's third law to spacetime, where the attraction between two massive bodies slowing the rate of expansion between them would be compensated by the acceleration of space away from those two bodies, extending outwards towards infinity. The net repulsion would be smaller than the attraction, as the magnitude is split over a much shorter distance, but the repulsive expansion component would be additive for all bodies, whereas the more localized attraction would not be in general. This is more or less just speculation at this point, based on published simulation behavior in the anisotropic and inhomogeneous domain, so don't read too much into it.

Agreed. It took some amazing science to add the nuclear force to our universe. Is DE the clue to a new force not yet incorporated in QM, but only manifest in the extremes near t = 0? [Just supposition on my part.]
Actually, the quadrupole, octupole, and higher moments are one of the strongest lines of evidence against the dipole being kinematic, since it is extremely unlikely for a kinematic dipole, which should arise from the net motion of the Earth, solar system, Milky Way, Local Group, etc., to align with higher multipoles in the CMB which can only be cosmological in origin based on our known theories. On the other hand, these higher multipoles would naturally be expected to align with a cosmological dipole.

Interesting, but wouldn't the quadrupole support the current kinematic view of the dipole?