Very nice post! If you don't mind, I would like to offer some viewpoints of my own that are more historical than otherwise.

The first point to note is that redshift in General Relativity (GR) is in fact model dependent, because there are multiple ways in which light can become redshifted. The rate of expansion of space, the curvature of space, and even variations in diffuse matter densities all affect how light becomes redshifted, and these effects are not trivial.
I was surprised to learn that one of the first solutions to GR (de Sitter's) was a simplified model with no matter in the universe that showed redshift in a static universe, which was the mainstream model at the time; given the lack of knowledge of other galaxies, that was understandable.
Since Hubble and de Sitter worked closely together at one or more conferences, I suspect that this early, but rejected, model influenced Hubble, since he avoided ever claiming that redshift was due to the expansion of spacetime. He said he would leave theoretical views to the theorists. [I've never read an author connecting these dots, though many authors seem convinced Hubble discovered expansion.]
It seems that Doppler explained redshift well enough as the early assumption, starting with Lemaître, though his model had space carrying the "extragalactic nebulae", as Hubble always called them. But the Doppler interpretation fails when redshifts imply speeds > c; hence came the term "cosmological redshift". Doppler still applies to their peculiar motions, of course.
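A quick numerical illustration of that distinction, for anyone following along. This is my own sketch with illustrative flat-ΛCDM parameters (Ωm = 0.3, ΩΛ = 0.7, which are assumptions, not from the post): the special-relativistic Doppler formula can never yield v > c for any redshift, whereas the recession velocity v = H0·D in an expanding universe exceeds c for z ≳ 1.5.

```python
import math

# Special-relativistic Doppler: v/c inferred from redshift z; always < 1.
def doppler_beta(z):
    return ((1 + z)**2 - 1) / ((1 + z)**2 + 1)

# Recession velocity v/c = H0 * D_comoving / c in a flat universe,
# with illustrative parameters Omega_m = 0.3, Omega_Lambda = 0.7.
def recession_beta(z, om=0.3, ol=0.7, steps=10000):
    # Comoving distance in units of c/H0: midpoint integral of dz'/E(z').
    dz = z / steps
    total = 0.0
    for i in range(steps):
        zp = (i + 0.5) * dz
        total += dz / math.sqrt(om * (1 + zp)**3 + ol)
    return total

z = 3.0
print(f"z = {z}: Doppler v/c    = {doppler_beta(z):.3f}")   # stays below 1
print(f"z = {z}: recession v/c  = {recession_beta(z):.3f}") # exceeds 1
```

So at z = 3 the Doppler reading gives v ≈ 0.88c, while the cosmological reading gives a recession velocity of roughly 1.5c — which is why "cosmological redshift" had to become its own concept.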
Yes, and this is why I think it best to explain BBT to the general public without starting from any initial condition, though many like to add sizzle to their articles by claiming it was a singularity, which is arguably not even a testable hypothesis given the inability to check those predictions, I think.

GR is tricky because, without prior assumptions to simplify the mathematics, the Einstein field equations, like pretty much every other known nonlinear system of partial differential equations, are prone to natural, irreducible chaotic behavior, in this case affecting the evolution of the metric. This is because a well-posed system of differential equations has a single unique solution for each valid set of initial conditions, so any tiny difference in those conditions is carried forward deterministically.
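For readers unfamiliar with what "chaotic behavior" means in practice: here is a standard toy demonstration of sensitive dependence on initial conditions. This is the logistic map, not GR — just the textbook illustration of why nonlinear deterministic systems can still be effectively unpredictable.

```python
# Sensitive dependence on initial conditions, shown with the logistic
# map x -> 4x(1-x). Deterministic and unique for every starting value,
# yet two starts differing in the 9th decimal place soon diverge wildly.
def trajectory(x0, n=100):
    xs = [x0]
    for _ in range(n):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.3)
b = trajectory(0.3 + 1e-9)  # perturb the 9th decimal place
gap = max(abs(x - y) for x, y in zip(a, b))
print(f"max divergence within 100 steps: {gap:.3f}")
```

Uniqueness of solutions and unpredictability coexist: each run is perfectly determined, but the two runs part ways long before step 100.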
Thus, it seems wiser to me to explain BBT beginning with Lemaître's work, which took Slipher's redshifts and Hubble's distances and introduced the original model for BBT (1927). IOW, start with today and work backwards, which is what science has been doing ever since. Would you agree with this approach?
What does "numerically" mean here? Iterations?

There are a lot of implications here for every area of math and science, but the first and foremost is that the true Einstein field equations can only ever be solved numerically.
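For what it's worth, "numerically" usually does mean exactly that: iterating, stepping the equations forward in many small increments rather than writing down a closed-form answer. A minimal sketch, using the matter-only Friedmann equation (which is exactly solvable, so we can check the iteration against the known answer; units here are illustrative):

```python
# Euler-stepping the matter-only Friedmann equation
#   da/dt = H0 * a**(-1/2)
# whose exact solution is a(t) = (3*H0*t/2)**(2/3).
# Comparing the iterated result to the exact one shows what a
# "numerical solution" is: many tiny steps approximating the curve.
H0 = 1.0                                  # illustrative units
a, t, dt = 1.0, 2.0 / (3.0 * H0), 1e-5    # a = 1 at t = 2/(3*H0)
while t < 1.0:
    a += dt * H0 * a**-0.5                # one small iteration step
    t += dt

exact = (1.5 * H0 * 1.0)**(2.0 / 3.0)
print(f"numerical a(t=1) = {a:.4f}, exact = {exact:.4f}")
```

For the real field equations there is no `exact` line to compare against — the iterated result is all you get, which is the point being made above.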
What do you mean by a "maximal spatial volume"? I would infer this to mean an infinite universe, and infinities can be very problematic, which, ironically, is true for the universe at time's start: a singularity.

... for any initially expanding nontrivial universe that no maximal spatial volume can ever exist...
But doesn't the 1/100,000 level of anisotropy minimize the problem?

Thus, if light geodesics carrying information on these initial conditions pass into a region with a higher density of stuff (an overdensity), we should expect that region of space to be expanding more slowly, so its edge does not expand outwards as far as that of a geodesic passing through an underdensity (a void), because the effects of expansion locally depend on the relative rate at which time passes. I.e., the faster time passes, the faster space expands; thus in regions of underdensity we get a feedback effect where space expands at a faster and faster rate, and the distance between points grows.
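A toy sketch of that feedback claim, for concreteness. This is my own illustration, not a GR calculation: two patches whose expansion rate is damped by their local density (a made-up damping rule), so the dilution of density as a patch grows feeds back into a faster rate, and the void pulls ahead of the overdensity.

```python
# Toy model (not GR): two patches expand with a Hubble-like rate that
# is suppressed by the local density rho = rho0 / a**3 (an assumed,
# purely illustrative rule). As a patch grows, rho dilutes, the rate
# rises, and growth accelerates -- the feedback described above.
def evolve(rho0, h0=1.0, dt=1e-3, steps=2000):
    a = 1.0
    for _ in range(steps):
        rho = rho0 / a**3           # density dilutes as the patch grows
        rate = h0 / (1.0 + rho)     # denser patch -> slower expansion
        a += dt * rate * a
    return a

void = evolve(rho0=0.1)             # underdensity
cluster = evolve(rho0=10.0)         # overdensity
print(f"void patch a = {void:.3f}, overdense patch a = {cluster:.3f}")
```

Both patches grow, but the underdense one runs away: a crude picture of why the 1/100,000 seed anisotropies need not stay small once this kind of feedback operates.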
Are you suggesting some sort of critical information is important at t=0?

In essence, the crucial insight is that for information to be conserved, the metric needs to carry information, most naturally in the form of echoes of the local past metric imprinted into the local change-in-volume tensor. Thus GR can only be internally self-consistent if information is conserved, and in an expanding universe this can only be satisfied if information is stored within the local variations in the rate of change of the metric itself.
I've been puzzled by how the dipole is viewed. If we have a good idea of the rate of the Hubble flow, and we know our speed through it, do the red- and blueshifts of the dipole match, or are they off?

In fact, thanks to the existence of the CMB dipole, we have even managed to perform a falsification test on the cosmological principle itself, since the principle predicts that the only kind of dipole which can exist in the sky is a purely local kinematic dipole associated with an observer's frame of reference. Thus, as pointed out by Ellis & Baldwin in 1984, this requires any dipole constructed from cosmologically distant sources to be identical in both magnitude and direction to the dipole in the CMB; if the two are not the same, then there is a cosmological component to the dipole (i.e., one due to the large-scale structure and distribution of matter and energy within the universe).
Ok, that's a quicker answer than I expected. [I'll leave the above to help others understand what you're saying.]

This was first rigorously tested by Nathan Secrest et al. (2021) using a sample of 1.36 million high-redshift quasars from CatWISE, and the results are staggering: the quasar dipole is more than twice the expected magnitude, a 4.9 sigma discrepancy from the CMB dipole.
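To make the "more than twice the magnitude" concrete: the expected kinematic dipole amplitude follows from the Ellis & Baldwin formula, D = [2 + x(1 + α)]·β with β = v/c. The parameter values below are illustrative numbers in the ballpark of the quasar analysis (x ≈ 1.7, α ≈ 0.75, v ≈ 370 km/s), not the paper's exact fitted values.

```python
# Expected kinematic dipole amplitude from the Ellis & Baldwin (1984)
# formula D = [2 + x*(1 + alpha)] * beta, where beta = v/c.
# x: power-law index of the source flux-density counts,
# alpha: mean spectral index, v: solar velocity inferred from the CMB.
# All three values here are illustrative ballpark figures.
C_KM_S = 299792.458

def kinematic_dipole(v_km_s=370.0, x=1.7, alpha=0.75):
    beta = v_km_s / C_KM_S
    return (2.0 + x * (1.0 + alpha)) * beta

print(f"expected kinematic dipole amplitude ~ {kinematic_dipole():.4f}")
```

With these inputs the kinematic expectation comes out around 0.006, so an observed quasar dipole of roughly twice that amplitude is what drives the multi-sigma discrepancy being discussed.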
This is helpful, as it seems to have something to do with the multiverse view presented in Laura Mersini-Houghton's book, "Before the Big Bang", where overlaying quantum physics atop the landscape of string theory produces a large but finite number of universes (10^600?). She says there are six tests for this theory, one being related to the dipole "void", I think she called it. I think she mentioned another test, but not all six, which was curious. [Perhaps I missed them, admittedly.]
Because of your ability to explain this complex field, it would be great if you could explain what science means by things like the "axis of evil", etc. I think I read about a quadrupole issue as well, but many here, like me, are far more familiar with tadpoles instead. Few here have PhDs in anything, but many have BS degrees.
There certainly seem to be, at best, issues to resolve.

Worse, however, is that independent follow-up work has retested these results and only raised the discrepancy with the CMB dipole, to 5.7 sigma. And this is without even considering the many other lines of evidence challenging the standard model of cosmology: the Hubble tension, the axis of evil, many gigaparsec-scale large structures well beyond the size limit of structure formation in Lambda-CDM, the mathematical and logical arguments presented in simplified form above, etc. There is now overwhelming evidence to call the standard model of cosmology into serious question.
But given that Einstein, and perhaps every early cosmologist, assumed homogeneity, does this surprise you much? Quantum quirks were not well developed in the early decades, dark matter was only a suspicion raised by Zwicky, and DE was a total surprise.
So, what explains the redshift finding as it relates to distance?

In fact, dropping the cosmological constant automatically resolves all these tensions. After all, the axis-of-evil problem was due to the sheer improbability of a presumed kinematic dipole aligning with the higher multipoles, which, in a universe where matter is relatively homogeneous and isotropic, should themselves be random in their alignments rather than all aligned along the same axis in every measurement. All the measurement tensions vanish when you bring in the enormous (dominant) systematic error, which effectively swamps any measurement claims by orders of magnitude at cosmological distances.