No, the Big Bang theory is not 'broken.' Here's how we know.

The Big Bang Theory is the most popular theory for how the universe came into being, according to Space.com. In its most basic form, it states that the universe as we know it began with an infinitely hot, infinitely dense singularity and then expanded over the course of the following 13.8 billion years, at first at an unfathomable rate and subsequently at a more palpable one.
 
Yes, this is how the BB model is reported on many sites too. A caveat here: the CMBR shows no H-alpha line and no HI 21-cm line either.

Impact of inhomogeneous CMB heating of gas on the HI 21-cm signal during dark ages, https://arxiv.org/abs/1810.05908

No HI line or H-alpha line (like those in the Sun's spectrum) is seen in the CMBR, at least not yet, assuming hot H gas was there at the beginning that gave birth to the CMBR. This means the hot H gas said to be the origin of the CMBR is theoretical only, not yet confirmed in nature by observation. The same holds for observing the original pristine, primordial gas clouds created during BBN, as well as Population III stars. Small items like this in cosmology do not seem to be reported on popular science sites, or at least not clearly to the public, IMO.
 
Upon further reading, the Twin Paradox is resolved as follows:
Twin A stays on Earth; Twin B zooms into outer space. Twin A watches Twin B's clock run slow. While Twin B is accelerating, he is effectively under higher gravity. Unlike velocity-based time dilation, time dilation due to gravity is not reciprocal, so during acceleration Twin B's clock runs slower and he sees Twin A's clock run fast. Thus when Twin B arrives home, even though he had a high velocity, his acceleration will have put Twin A's clock ahead of his.
This has been experimentally verified with atomic clocks flown around the world. When they arrived home they differed by tens to hundreds of nanoseconds, depending on how far they went, how fast, and how high in the gravity field.

For the "zipping by" option where no acceleration is involved both A and B see the other's clock as running slower at all times. However they are in two different reference frames and time is not invariant across reference frames. Just as kinetic energy is not invariant across frames. As they zip by each other they will, in fact, each see the other's clock as having lost time. But if one or the other decellerates so they can share the same reference frame, inertia posing as gravity will bring the clocks back into synch.
 
Upon further reading, the Twin Paradox is resolved as follows:
Twin A stays on Earth; Twin B zooms into outer space. Twin A watches Twin B's clock run slow. While Twin B is accelerating, he is effectively under higher gravity. Unlike velocity-based time dilation, time dilation due to gravity is not reciprocal, so during acceleration Twin B's clock runs slower and he sees Twin A's clock run fast. Thus when Twin B arrives home, even though he had a high velocity, his acceleration will have put Twin A's clock ahead of his.
The acceleration time is not consequential to the total dilation effect.

This has been experimentally verified with atomic clocks flown around the world. When they arrived home they differed by tens to hundreds of nanoseconds, depending on how far they went, how fast, and how high in the gravity field.
Agreed. Similarly, as mentioned, GPS requires both GR and SR math.
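To put rough numbers on that, here is a minimal sketch (my own illustration, not from the thread; textbook constants and the ~20,200 km GPS orbit) of the two competing effects on a GPS satellite clock:

```python
# Sketch: net daily clock drift of a GPS satellite from SR (orbital speed)
# and GR (height in Earth's gravitational potential).
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # Earth mass, kg
c = 2.998e8        # speed of light, m/s
R_earth = 6.371e6  # Earth radius, m
r_gps = 2.6561e7   # GPS orbital radius, m (~20,200 km altitude)

v = math.sqrt(G * M / r_gps)  # circular orbital speed, ~3.9 km/s

sr_rate = -v**2 / (2 * c**2)                          # SR: moving clock runs slow
gr_rate = (-G * M / r_gps + G * M / R_earth) / c**2   # GR: higher clock runs fast

day = 86400
print(f"SR:  {sr_rate * day * 1e6:+.1f} us/day")              # ~ -7.2
print(f"GR:  {gr_rate * day * 1e6:+.1f} us/day")              # ~ +45.7
print(f"Net: {(sr_rate + gr_rate) * day * 1e6:+.1f} us/day")  # ~ +38.5
```

Both corrections pull in opposite directions, and without them GPS fixes would drift by kilometers per day.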

It matters, too, which direction the aircraft or spacecraft traveled around Earth.
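A toy version of that direction dependence (my sketch; the altitude, ground speed, and flight time are illustrative guesses, not the Hafele-Keating flight parameters, so only the sign pattern should be trusted):

```python
# Sketch: in the Earth-centered frame the ground clock already moves at
# R*Omega, so an eastward plane moves faster (more SR slowing) and a
# westward plane slower, while both gain from altitude (GR).
import math

c = 2.998e8
R = 6.371e6          # Earth radius, m
Omega = 7.292e-5     # Earth rotation rate, rad/s
g = 9.81
h = 9000.0           # cruise altitude, m (illustrative)
v_plane = 250.0      # ground speed, m/s (illustrative)
t = 48 * 3600.0      # time aloft, s (illustrative)

v_ground = R * Omega  # ~465 m/s at the equator

def net_ns(direction):  # +1 = eastward, -1 = westward
    v_air = v_ground + direction * v_plane
    sr = -(v_air**2 - v_ground**2) / (2 * c**2)  # vs the ground clock
    gr = g * h / c**2                            # altitude gain
    return (sr + gr) * t * 1e9

print(f"eastward: {net_ns(+1):+.0f} ns, westward: {net_ns(-1):+.0f} ns")
```

Eastward clocks come home behind and westward clocks come home ahead, which is the pattern the actual experiment reported.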

For the "zipping by" option where no acceleration is involved both A and B see the other's clock as running slower at all times.
Well, not if Twin A has died of old age as the younger Twin B zips by. I do, however, think that the sharing of their original inertial frame is important.


However, they are in two different reference frames, and time is not invariant across reference frames, just as kinetic energy is not invariant across frames. As they zip by each other they will, in fact, each see the other's clock as having lost time. But if one or the other decelerates so they can share the same reference frame, inertia posing as gravity will bring the clocks back into sync.
Do you have a reference for this? This would be the “symmetry break” idea I mentioned earlier, but it makes little sense to me as age change would be verifiable before deceleration.
 
I found a refutation of the assertion that gravity and acceleration both cause time dilation. Although gravitational acceleration and acceleration are indistinguishable, they do not both cause time dilation, because time dilation is caused by gravitational potential, not gravitational acceleration. Gravitational potential is the negative of the work done in moving a mass from an infinite distance down into the gravitational well. Here is what "yuiop" said:

"Another thing to note is that gravitational time dilation is more precisely a function of gravitational potential and not of gravitational acceleration although they are closely related. FOr example if you descend down a very deep mine shaft on the Earth the gravitational time dilation continues to increase even though the gravitational acceleration is decreasing (assuming a non rotating Earth)."

Reference: Does acceleration cause time dilation? | Physics Forums, https://www.physicsforums.com/threads/does-acceleration-cause-time-dilation.237212/ (see post #21)
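Here is a minimal sketch of yuiop's mine-shaft point, assuming a uniform-density, non-rotating Earth (a toy model, not the real density profile): going down, g falls toward zero while the potential keeps getting deeper, so the clock rate keeps dropping.

```python
# Sketch: inside a uniform-density sphere, g(r) decreases linearly toward
# the center while the potential (and hence gravitational time dilation)
# keeps deepening.
import math

G, M, R, c = 6.674e-11, 5.972e24, 6.371e6, 2.998e8

def g_inside(r):
    """Gravitational acceleration at radius r <= R (uniform density)."""
    return G * M * r / R**3

def phi_inside(r):
    """Gravitational potential at radius r <= R (uniform density)."""
    return -G * M * (3 * R**2 - r**2) / (2 * R**3)

for frac in (1.0, 0.5, 0.0):  # surface, halfway down, center
    r = frac * R
    rate = math.sqrt(1 + 2 * phi_inside(r) / c**2)  # clock rate vs infinity
    print(f"r = {frac:.1f} R: g = {g_inside(r):5.2f} m/s^2, rate = {rate:.12f}")
```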
 
That seems odd to me, Bill. As one travels into the interior of the Earth, the gravitational potential becomes less and the "force" is also less. [The matter overhead acts to "pull" a body upward, slightly.]

I can’t see how the “timelines” would favor greater dilation in the interior. Not that I know much, admittedly.

The foundation of GR is the equivalence principle: acceleration and gravity are the same thing, not just similar.
 
That's what I thought too, but apparently it is not correct. There is a difference between gravitational acceleration and gravitational potential. Gravitational acceleration is how fast something accelerates when it falls. Gravitational potential energy is how much energy is needed to remove an object from a gravitational well and take it to infinity.
At the center of the Earth the gravitational acceleration is zero, but the gravitational potential energy is at a local maximum. It takes more energy to get from Earth's center to infinity than from Earth's surface to infinity; thus the gravitational potential energy at the center is greater.
And, yes, gravitational acceleration and acceleration are indistinguishable, but apparently neither causes time dilation.
In fact, from what this guy (aka "some guy on the internet") says, acceleration and velocity do not cause time dilation. Time dilation is directly caused by the distance travelled through spacetime. Yes, acceleration causes velocity and velocity causes distance, but it is distance that does the dilation.

Pause for brain cooling.
 
At the center of the Earth the gravitational acceleration is zero, but the gravitational potential energy is at a local maximum. It takes more energy to get from Earth's center to infinity than from Earth's surface to infinity; thus the gravitational potential energy at the center is greater.
That’s what makes no sense to me. PE = mgh, right? Setting h to 0 hardly maximizes anything except for PE to grow in value more from that point than any other with radial movement, but that’s word salad as far as I can tell.

And, yes, gravitational acceleration and acceleration are indistinguishable, but apparently neither causes time dilation.
Ok, but this results in causing changes to the time line (i.e. geodesics, I think).


Time dilation is directly caused by the distance travelled through spacetime.
But this is distance traveled per unit time. And what is this called? This is almost silly.

As stated earlier, the SR, or GR, explanation for the time differences established by both Twin A and B can be found in either length contraction or time dilation. There is almost no objective evidence, however, favoring length contraction. I think we see the latter usage because of how popular and interesting its mathematical formulation by Lorentz, FitzGerald, and others was in explaining the null result from Michelson-Morley. Also, both equations (contraction and time dilation) always produce the same result.
 
Several points:

1) The sign on gravitational potential energy is backwards from what makes sense to me. When an object is an infinite distance away, its gravitational potential energy is maximized as a negative. Seems bass-ackwards to me, but that is what they say. This is how the positive energy of the quantum fields in newly created space can be balanced exactly by the negative energy of galaxies moving farther away.

2) Time dilation is not caused by distance travelled per unit time (speedometer); it is caused by total distance travelled, regardless of how fast you got there (odometer).

3) The Twin Paradox in the "zipping by" model, where they don't stop for a meeting, is explained by time dilation and Lorentz contraction. The problem I am having is when they stop for a "sit down" and each one sees the other one as younger. This cannot be. Only in two different reference frames can this be. The explanation supposedly lies in GR.

4) Lorentz contraction has been observed in collider experiments where the results could only be explained if the particles were foreshortened. I read this somewhere.
 
Several points:

1) The sign on gravitational potential energy is backwards from what makes sense to me. When an object is an infinite distance away, its gravitational potential energy is maximized as a negative. Seems bass-ackwards to me, but that is what they say. This is how the positive energy of the quantum fields in newly created space can be balanced exactly by the negative energy of galaxies moving farther away.
That's an interesting observation, but it's a convention. As PE is reduced, KE is gained, hence PE gets to zero at the center of the planet. The important point is the change in PE, which gives you the change in KE. But how much PE is there at the center capable of producing KE? Nada, nothing, zero. No maximum there, IMO.

I prefer to think of PE as being greater at a higher elevation, since it has a greater ability to produce more active energy, KE. PE has always seemed to me like another one of those terms engineers are handed so they can solve the problems. Just what the heck it really is hasn't seemed to get absorbed into my thick skull, but it is a great tool. ;)

2) Time dilation is not caused by distance travelled per unit time (speedometer); it is caused by total distance travelled, regardless of how fast you got there (odometer).
No. The slower the object, the less the dilation over that same distance. A photon reaches any destination in zero time, regardless of distance.
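A quick sketch of that point (my illustration): hold the trip distance fixed at one light-year and vary the speed, and the dilation clearly tracks the speed, not the odometer.

```python
# Sketch: proper (ship) time for the same 1 light-year trip at different
# speeds; a fixed distance gives very different dilation.
import math

def ship_time_years(distance_ly, beta):
    """Proper time for a trip of distance_ly at speed beta*c (Earth frame)."""
    coordinate_time = distance_ly / beta        # years in Earth frame
    gamma = 1 / math.sqrt(1 - beta**2)
    return coordinate_time / gamma

for beta in (0.1, 0.5, 0.9, 0.99, 0.999):
    print(f"beta = {beta}: ship time = {ship_time_years(1.0, beta):7.3f} yr")
```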

3) The Twin Paradox in the "zipping by" model, where they don't stop for a meeting, is explained by time dilation and Lorentz contraction. The problem I am having is when they stop for a "sit down" and each one sees the other one as younger. This cannot be. Only in two different reference frames can this be. The explanation supposedly lies in GR.
SR seems to be all that is needed. Scientists had 10 years or so to write books about the paradox before GR emerged.

The secret ingredient, IMO, lies in the timeline differences that can be expressed graphically. The problem is trying to grasp how that works physically. This is why Einstein likely never got a Nobel Prize for Relativity: it was never obvious, in spite of all the experimental success.

4) Lorentz contraction has been observed in collider experiments where the results could only be explained if the particles were foreshortened. I read this somewhere.
It's time we move on. Oops, I meant muon. ;)

I saw this, too, years ago. But I am very confident that time dilation is equally capable of explaining its extended life.

But the initial question, notice, is how it could survive the length through the atmosphere. So, if the conversation involves length, we get length contraction.

But notice I used "life" to explain it, which is a cue for the alternative, but equal, time dilation equation.
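For the record, here is a sketch of the muon bookkeeping in both pictures (the production height and speed are typical illustrative values, not data from a specific paper):

```python
# Sketch: a cosmic-ray muon with a 2.2 microsecond rest lifetime "shouldn't"
# cover 15 km, yet it does. Ground frame: the lifetime is dilated.
# Muon frame: the atmosphere is contracted. Same physics, two descriptions.
import math

c = 2.998e8
tau = 2.197e-6     # muon mean rest lifetime, s
altitude = 15e3    # production height, m (illustrative)
beta = 0.9997      # muon speed in units of c (illustrative)
gamma = 1 / math.sqrt(1 - beta**2)  # ~40.8

print(f"naive range (no SR):              {beta * c * tau:8.0f} m")          # ~660 m
print(f"dilated range (ground frame):     {beta * c * gamma * tau:8.0f} m")  # ~27 km
print(f"contracted altitude (muon frame): {altitude / gamma:8.0f} m")        # ~370 m
```

In the ground frame the dilated range exceeds the 15 km altitude; in the muon's frame the contracted altitude is shorter than the naive ~660 m range. Both frames agree the muon makes it.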
 
Researchers confirmed that the distant galaxies discovered by the James Webb Space Telescope are, indeed, perfectly compatible with our modern understanding of cosmology.

No, the Big Bang theory is not 'broken.' Here's how we know.
(I'm not sure if this is the method of commenting directly on the article above, but I'll try anyway.)
My understanding from reading the Robertson et al. JADES survey paper is that the low-mass assumption for these 4 distant galaxies at z>10 is just that: an assumption, made using modelling designed specifically to interpret a galaxy observed in the "early" universe and how many stars such an early galaxy could have in its discs. In other words, if my interpretation is correct, these aren't necessarily small galaxies. The authors of the paper just pretended they were. As the following quote indicates (note I've highlighted the fact that the authors didn't observe these to be young, low-mass galaxies; they assumed they were using appropriate modelling):
“Using stellar population modelling, we find the galaxies typically contain a hundred million solar masses in stars, in stellar populations that are less than one hundred million years old.”
Further to that, it must be noted that to make these JADES candidates fit the various modelling used to remove "tensions" with the BBT model, a new, unpredicted property of these early galaxies has been included to authenticate the assumption that the 4 galaxies are not in tension with BBT predictions to date. As the following quote states: "The moderate star formation rates and compact sizes suggest elevated star formation rate surface densities, a key indicator of their formation pathways. Taken together, these measurements show that the first galaxies contributing to cosmic reionisation formed rapidly and with intense internal radiation fields".
As the sections I have highlighted in bold indicate, none of their models could originally either predict or account for the existence of any of these 4 candidates at such an early epoch, contrary to the claims that they do. That is, unless, in hindsight, they add and revise their modelling to include even more rapid and intense star formation rates than they had assumed were possible before the JWST data proved their earlier galaxy formation rate modelling wrong.
 
What Think twice points to in post #61 looks like an issue to me. It is common in BB cosmology to explain large-redshift galaxies said to have small sizes and masses using the doctrine that these small galaxies, over billions of years of cosmic evolution, continue to grow into larger galaxies; thus all early galaxies in the BB model must be small and of lower mass too, nothing like what we see today in the MW or the Andromeda galaxy. The Sparkler galaxy, with a redshift of 1.378 reported today, is a good example.

"The Sparkler" Might Be Our Milky Way's Long-Lost Twin, https://skyandtelescope.org/astronomy-news/the-sparkler-might-be-our-milky-ways-long-lost-twin/

All of these *early galaxy* observations use the look-back distance from Earth for the redshifts, but few or none say anything about the comoving radial distances: where those objects would be today if we could see them from here on Earth. Whether z=1.378 or 10 or 13, the comoving radial distances place the objects somewhere far, far away that is not observable. What happened to them over billions of years of cosmic expansion is not observable. Apparently, extrapolations are presented to the public, it seems, when explaining galaxy origins in BB cosmology :)
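For anyone who wants to reproduce that distinction, here is a sketch using astropy (my parameter choices; Om0=0.286 is a guess at the LAMBDA calculator's default, so the digits may differ slightly):

```python
# Sketch: look-back time vs comoving radial distance for several redshifts.
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=69, Om0=0.286)  # illustrative flat-LCDM parameters

for z in (1.378, 10, 13):
    lookback = cosmo.lookback_time(z)
    d_comoving = cosmo.comoving_distance(z).to(u.Glyr)
    print(f"z = {z}: look-back = {lookback:.2f}, comoving = {d_comoving:.2f}")
```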
 
The authors of the paper just pretended they were. As the following quote indicates (note I've highlighted the fact that the authors didn't observe these to be young, low-mass galaxies; they assumed they were using appropriate modelling).
So you're assuming they aren't using appropriate modeling? According to the article, they weren't since they were using an unreliable method originally. When a better method was employed they realized the galaxies were within the proper size range. Per the article...

"Thankfully, there were no such problems. The appearance of galaxies with 10^8 solar masses in the early universe was no sweat for ΛCDM, the team explained in their research paper..."

Since the first stars comprised essentially only H & He, they were very likely quite massive; thus 100 million solar masses might be as few as 1 million stars, which would only be a large globular cluster today.

But they would be prodigious in luminosity and would produce a lot of UV, hence the point about their contribution to the reionization event.
 
Billslugg, getting back to the discussion about potential energy, gravitational acceleration, and time dilation: I fully understand that the gravitational acceleration will decrease from the maximum at the surface of a solid to zero at the center of the solid. And I understand that the energy needed to raise an object from the center to infinity is the integral of the force times the distance increment over the total distance, so it includes the effects from below the surface.

What I am looking for is the observational basis for saying that the time dilation is a function of the gravitational potential energy, rather than the gravitational acceleration, at points inside a solid object. It seems that would be very difficult to actually measure, given how comparatively shallow our deepest holes are. Do you know of a reference I can look at for the supporting observational data?
 
The only source I had for that assertion is the one I showed in my original post - #55 in this thread.

 
I have my own theory, I imagine; I read about astronomy & cosmology quite a bit, and the Webb discovery of very old galaxies further strengthens my view: the human brain is a fraction of a millisecond old when compared to the age of the universe. It is not possible for the human brain to even begin to fathom the origin of the universe in any manner whatsoever. To me the universe was just there; there is no Big Bang theory of any sort. Maybe there is no way to assign any age to the universe. Even the Milky Way has existed for a very long period, along with trillions of galaxies existing in space. It is just an infinite void in which matter is moving in all directions. There was never a beginning and there is no end to the universe; the human race will flame out even before it is able to understand the Milky Way in its entirety!! The entire existence of the human race will remain but a mere micro-millisecond fraction of this human-derived measurable time. How, for such a small period of existence, can humans even begin to think of the universe!!?
 
This experiment (Pound-Rebka) showed that a falling gamma ray picks up energy from the "acceleration of gravity". They stated "...we neglect gravitational fields" when discussing their assumption that space was flat between the source and the receiver. What they are ignoring is the very field I am claiming exists.

If we ran Pound-Rebka at the center of the Earth then we would see zero acceleration due to Gravity but would be in a strong gravitational field and would expect to see some time dilation.

With those two results we could resolve the question: what causes time dilation, the acceleration due to gravity (g) or the depth you are at in a gravitational well? Or both?

I have read a source claiming the center of the Earth is 2.5 years younger than the crust, due to the deeper gravitational potential at the center.
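A uniform-density toy model gets the order of magnitude of that claim (my sketch; the ~2.5-year figure presumably comes from a realistic density profile, which deepens the central potential further):

```python
# Sketch: clock-rate difference between Earth's center and surface from the
# potential difference GM/2R (uniform density), accumulated over Earth's age.
G, M, R, c = 6.674e-11, 5.972e24, 6.371e6, 2.998e8

dphi = G * M / (2 * R)   # |Phi(center)| - |Phi(surface)|, uniform density
rate_diff = dphi / c**2  # fractional clock-rate difference, ~3.5e-10
age_years = 4.54e9       # age of the Earth
print(f"center lags the surface by ~{rate_diff * age_years:.1f} years")  # ~1.6
```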

All energy warps spacetime, and all fields have energy. Einstein has an equation for each field there is, which shows its energy density, how that is distributed, and how it warps spacetime. It is written in tensor notation. Each tensor describes a field completely. For each tiny parcel of space we can use the equation to tell us how strong the field is there, which direction it is pointing, how fast it is changing, and how fast the rate of change is changing.
 
BillSlugg, thanks for the links. I waded through them and see where you got the idea that time dilation is a function of the gravitational potential energy at any point, rather than the gravitational acceleration at that point. But, I did not find any observational or experimental evidence to support one or the other - just a lot of arguments with different opinions.

Helio, the Pound-Rebka experiment is located above the surface of the massive object (Earth), so really doesn't look at the difference in the measurement as the location is shifted to varying distances below the surface. Above the surface, the gravitational potential energy and gravitational acceleration have one relationship relative to the center of mass of the massive object, but once below the surface, that relationship changes, as more of the mass is in a spherical shell with a greater radius than the observing point, and thus does not have a net gravitational acceleration on an object at that depth.

By going below the surface, the gravitational potential energy gets more negative as depth increases, while the local gravitational acceleration gets smaller. So, if gravitational time dilation depends on the gravitational potential energy, it will continue to change in the same direction with depth (i.e., clocks go slower when deeper), but if time dilation depends on the gravitational acceleration, then its derivative will change sign at the surface and the dilation will decrease with depth (i.e., the clocks will go faster as depth increases).

That seems like a pretty profound concept that really does need some experimental verification. But, I am not seeing any available verification.

Perhaps looking at the wavelengths of photons emitted by a well-understood radioactive material located at different depths, but measured at the surface, would give an answer. But, with the deepest holes we have made to date being only a few miles deep, and the changes in temperature, etc. that occur with depth, I am not confident that meaningful experimental data could be obtained. But I hope somebody tries - the theorists seem to need some real guidance.
 
BillSlugg, thanks for the links. I waded through them and see where you got the idea that time dilation is a function of the gravitational potential energy at any point, rather than the gravitational acceleration at that point. But, I did not find any observational or experimental evidence to support one or the other - just a lot of arguments with different opinions.

The experiment that has (at least partially) confirmed that velocity, not acceleration, causes time dilation (the Clock Postulate) is mentioned in post #27 of this thread: Bailey et al. put muons in a storage ring at 10^18 g. Their decay-rate dilation was consistent with their velocity at the time, not their acceleration. (They are cautious about making a general claim, since the velocity vector was normal to the acceleration vector.)

Here is a discussion from Harvard, see section #5.
Experimental Basis of Special Relativity (edu-observatory.org)
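Rough numbers for that test, as I recall them (gamma ~ 29.3 and a ~7 m ring for the CERN muon experiment; treat these as from-memory values):

```python
# Sketch: in the storage ring the muon lifetime stretches by gamma(v) even
# though the proper centripetal acceleration is ~1e18 g.
import math

c = 2.998e8
tau0 = 2.197e-6  # muon rest lifetime, s
gamma = 29.3     # ring Lorentz factor (from memory)
r = 7.0          # ring radius, m (approximate)

v = c * math.sqrt(1 - 1 / gamma**2)
proper_accel = gamma**2 * v**2 / r  # proper centripetal acceleration

print(f"dilated lifetime:    {gamma * tau0 * 1e6:.1f} microseconds")  # ~64
print(f"proper acceleration: {proper_accel / 9.81:.1e} g")            # ~1e18
```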

So, I am going with the Clock Postulate.
 
I see the point of that experiment. But, considering the amount of "duality" in the apparently paradoxical nature of such things as muons being both particles and waves, I am always wondering if we are really representing everything about them properly in the analyses of our experiments. So, while not disagreeing, or even thinking that it is probably wrong, I would like to see some more conclusive experiments before assuming that it is a fundamental law of nature.

General Relativity says a lot about the nature of time that we don't really seem to explore in the way that we explore the other, more physical phenomena. It is just hard to think about time not having some sort of absolute measure that we can sit back and think of as being misperceived due to motion or proximity to mass when we do experiments.
 
So there is a lot of conversation here (for this site) that I might be able to shed some light on, with regard to how assumptions come into play and the difference between the standard model of cosmology and the Einstein field equations, which sadly many physicists and cosmologists aren't really aware of because of how the subject is covered in graduate-level courses (i.e. the lack of a required background in statistics).

The first point to note is that redshift in General Relativity (GR) is in fact model dependent, because there are multiple ways in which light can become redshifted. The rate of expansion of space, the curvature of space, and even variations in diffuse matter densities all have an impact on how light becomes redshifted, and these are not trivial.


GR is tricky because, without prior assumptions to simplify the mathematics, the Einstein field equations, like pretty much every other known system of partial differential equations, are prone to natural, irreducible chaotic behavior, in this case affecting the evolution of the metric. This is because differential equations by definition must have a single unique solution for each and every possible valid set of initial conditions.


There are a lot of implications this has in every area of math and science, but the first and foremost is that the true Einstein field equations can only ever be solved numerically.

And if you want accurate and practically relevant results, this means computationally expensive, time-consuming supercomputer simulations, at least absent some valid data-compression mechanism; however, we know that such mechanisms are also mathematically limited by conservation of information.

Traditionally, cosmologists get around these problems using their fitting model, Lambda CDM, a parameterized variation on the Friedmann-Lemaitre-Robertson-Walker metric in which an axiom/assumption known as the cosmological principle is made. This drastically simplifies the metric by assuming that the universe is, at some arbitrarily defined large scale, sufficiently homogeneous to treat the fifth through tenth and eleventh through sixteenth differential equations as duplicates.
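For concreteness, here is what that simplification buys (my sketch, with illustrative parameters): under the cosmological principle the whole system collapses to the Friedmann equation, a single ODE for the scale factor.

```python
# Sketch: the FLRW/Friedmann expansion rate,
#   H(z)^2 = H0^2 * [Om*(1+z)^3 + Ok*(1+z)^2 + OL]
import math

H0, Om, OL = 69.0, 0.3, 0.7  # illustrative parameters
Ok = 1.0 - Om - OL           # curvature term (zero for a flat universe)

def H(z):
    """Hubble rate in km/s/Mpc at redshift z."""
    return H0 * math.sqrt(Om * (1 + z)**3 + Ok * (1 + z)**2 + OL)

for z in (0, 1, 10):
    print(f"H({z}) = {H(z):.1f} km/s/Mpc")
```

The critique that follows is precisely that this one-line reduction throws away the off-diagonal structure of the full field equations.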

I came to recognize that this line of thinking by cosmologists is problematic thanks in particular to the work of Matthew Kleban and Leonardo Senatore on inhomogeneous and anisotropic cosmology, particularly their proof of what they call the "no big crunch theorem". In the limit where the size of the universe is much larger than the cosmological horizon, it tells us that for any initially expanding nontrivial universe no maximal spatial volume can ever exist, else the Einstein field equations will be irreducibly internally inconsistent, leading to unavoidable logical paradoxes and discrepancies (or, equivalently, that no inflection points are allowed in the metric, because the criteria for stopping expansion and/or reversing into contraction in this large-scale limit require two mutually incompatible properties to be simultaneously true).

This looks suspiciously similar to the second law of thermodynamics, only with volume in place of entropy, so it seems natural to take a look at it with respect to information theory.

For any system of partial differential equations there is a unique solution for each and every possible choice of initial conditions, which means the system of differential equations must always "remember" these conditions; information theory then comes into play, telling us there is an associated informational entropy with these initial conditions.

Now, to understand how this plays into the no big crunch theorem, let's take the perspective of information radiating out into an expanding universe. What happens? You get an effective volume, for any local timeslice, within which that information can have causally interacted. Paths in general relativity can be considered an extension of a 3D light cone, which we can imagine radiating outwards in all directions; but space is expanding, and so is the metric at any point along those light-cone radiants, meaning we are solving a path integration through a surface. The shape of the surface of these propagating paths (geodesics) depends on the local conditions: curvature bends the local four-velocity out of the time axis, so the curvature of space makes the shape of this surface effectively lumpy, rather than spherical as would be expected if the system were isotropic and homogeneous. This also allows easy recognition of a limiting case, the equidistant path integral in relatively flat space of negligible curvature, where the timeslice in question is a path integration in 3 dimensions (with a delta-function evaluation of time), a.k.a. the divergence theorem, which links a path integral through a surface to its corresponding volumetric integral. Thus we can recover the Hawking entropy for a cosmological horizon as a special case, suggesting this isn't crazy.

Thus, if light geodesics carrying information about these initial conditions pass into a region with a higher density (an overdensity) of stuff, we should expect that region of space to be expanding more slowly, so that edge does not expand outwards as far as a geodesic passing through an underdensity (a void), because the effects of expansion locally depend on the relative rate of intervals of time. I.e., the faster time passes, the faster space expands; thus in a region of underdensities we get a feedback effect where space expands at a faster and faster rate, and the distance between points grows.

In essence, the crucial insight is that for information to be conserved the metric must carry information, most naturally in the form of echoes of the local past metric imprinting into the local change-in-volume tensor. Thus GR can only be internally self-consistent if information is conserved, and in an expanding universe this can only be satisfied if information is stored within the local variations in the rate of change of the metric itself.

This means that even if the initial rate of expansion in a universe begins at a constant rate everywhere, it cannot stay constant unless said universe is conformally invariant under all transformations and hence, by definition, contains no information whatsoever.
This naturally explains why the Friedmann-Lemaitre-Robertson-Walker metric is, even after a century, the only known linearizable exact solution. It also kills the argument for the validity of approximating any nontrivial universe by applying perturbation theory to the FLRW metric: perturbation theory is only applicable to stable equilibria, and, as has just been shown mathematically, the FLRW metric is not a stable solution but rather an unstable equilibrium, a solution precariously balanced on the topological hill of all possible valid metrics for the Einstein field equations. The consequence is that the 5th through 10th and 11th through 16th differential equations, representing the off-diagonal contributions, will always evolve towards uniqueness in time, causing the informational entropy to increase.

This gives us a mechanism for linking past local properties of space to nonlocal behavior of the metric, in the form of an echo of the past light cone embedded within the rate of expansion itself. Since we will always get more underdensities than overdensities, the net rate of expansion, particularly from underdensities, will quickly grow to dominate the solution at large scales. Thus, while the primarily attractive diagonal terms drop off with distance, the primarily repulsive/expansive off-diagonal contributions actually grow nonlinearly (volumetrically) with distance, so even if these differences are normally negligible at small scales, you cannot assume that will be true at cosmological scales. In fact, the assumption of the cosmological principle is even worse than just being wrong: since the asymmetries in the off-diagonal elements represent information needed for the Einstein field equations to be mathematically valid, you are actually breaking not just internal consistency but causality itself at cosmological scales, which effectively allows you to construct metric-based analogs of the grandfather paradox.

Because of this theorem, which is basically just the second law of thermodynamics, you also resolve a lot of problems in cosmology, and quite probably at the interface of general relativity with quantum mechanics, while drastically simplifying the initial models, at least conceptually.

After all, these off-diagonal metric contributions have been shown, under the standard implicit assumptions of the cosmological principle, to be indistinguishable from "dark energy", which means you don't need that parameter at all; in essence, dark energy is the natural, unavoidable consequence of gravity in an expanding universe.

In fact, thanks to the existence of the CMB dipole, we have even managed to perform a falsification test on the cosmological principle itself, since the principle predicts that the only kind of dipole which can exist in the sky is a purely local kinematic dipole associated with an observer's frame of reference. Thus, as pointed out by Ellis & Baldwin in 1984, this would require any dipole constructed from cosmologically distant sources to be identical in both magnitude and direction to the dipole in the CMB; if the two are not the same, that means there is a cosmological component to the dipole (i.e., one due to the large-scale structure and distribution of matter and energy within the universe).

This was first rigorously tested by Nathan Secrest et al. (2021) using a sample of 1.36 million high-redshift quasars from CatWISE, and the results are staggeringly in disagreement: more than twice the expected magnitude, giving a 4.9 sigma discrepancy from the CMB dipole. Worse, independent follow-up work has retested these results and only raised the discrepancy with the CMB dipole to 5.7 sigma. In this context, and not even considering the many other lines of evidence challenging the standard model of cosmology (the Hubble tension, the axis of evil, many gigaparsec-scale structures well beyond the size limit of structure formation in Lambda CDM, the mathematical and logical arguments presented in simplified form above, etc.), there is now overwhelming evidence calling the standard model of cosmology into serious question.

In fact, dropping the cosmological constant automatically resolves all these tensions. After all, the axis of evil problem was due to the sheer improbability of a presumed kinematic dipole aligning with higher multipoles, which themselves, in a universe where matter is relatively homogeneous and isotropic, should be random in their alignments rather than all aligned along the same axis in all measurements. All the measurement tensions vanish when you bring in the enormous (dominant) systematic error, which effectively swamps any measurement claims by orders of magnitude at cosmological distances.

There are potentially significant quantum implications too, largely linked to the proof showing that there must exist nonlocal contributions and irreducibly nonzero asymmetric terms, but this is already enormously long compared to what I had initially intended.
 
"Researchers confirmed that the distant galaxies discovered by the James Webb Space Telescope are, indeed, perfectly compatible with our modern understanding of cosmology."

This is the central idea behind the article here. However, other reports on early galaxies that JWST sees paint a less rosy picture. Here is another example.

Astronomers discover metal-rich galaxy in early universe, https://phys.org/news/2023-02-astronomers-metal-rich-galaxy-early-universe.html, "We found this galaxy to be super-chemically abundant, something none of us expected," said Bo Peng, a doctoral student in astronomy, who led the data analysis."

ref - Discovery of a Dusty, Chemically Mature Companion to a z ∼ 4 Starburst Galaxy in JWST ERS Data, https://iopscience.iop.org/article/10.3847/2041-8213/acb59c, 17-Feb-2023.

My observations: using https://lambda.gsfc.nasa.gov/toolbox/calculators.html with z=4.225, the look-back distance or light time = 12.262 Gyr and the comoving radial distance = 24.380 Gly. Using H0=69 km/s/Mpc, space at that distance is expanding at 1.7204c, faster than light. We cannot see this metal-rich galaxy at the comoving radial distance, and we do not know whether other generations of stars continued to enrich the gas with more metals. The interpretation of the metal-rich observations should call into question the existence of the postulated pristine, primordial gas clouds created during BBN and Population III stars.
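The 1.72c figure is easy to check (a sketch; the only inputs are the H0 and comoving distance above):

```python
# Sketch: recession speed v = H0 * D at the comoving radial distance.
H0 = 69.0                      # km/s/Mpc
D_Gly = 24.380                 # comoving radial distance, Gly (from the post)
Mpc_per_Gly = 1000.0 / 3.2616  # 1 Gly ~ 306.6 Mpc
c_kms = 2.998e5                # speed of light, km/s

v = H0 * D_Gly * Mpc_per_Gly   # km/s
print(f"v = {v:.3e} km/s = {v / c_kms:.2f} c")  # ~1.72 c
```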

More investigative reporting into the who, what, where, when, how, and why is required, I feel, when it comes to reports showing how rosy the universe is and how nicely it lines up with BB cosmology.
 
So you're assuming they aren't using appropriate modeling? According to the article, they weren't since they were using an unreliable method originally. When a better method was employed they realized the galaxies were within the proper size range. Per the article...

"Thankfully, there were no such problems. The appearance of galaxies with 10^8 solar masses in the early universe was no sweat for ΛCDM, the team explained in their research paper..."

Since the first stars comprised essentially only H & He, they were very likely quite massive; thus 100 million solar masses might be as few as 1 million stars, which would only be a large globular cluster today.

But they would be prodigious in luminosity and would produce a lot of UV, hence the point about their contribution to the reionization event.
I'm not sure why you think that revising their ΛCDM model to account for the fact that it failed to correctly predict the subsequent JWST observations is proof that they didn't revise the parameters of their models? As far as I can tell, there were problems with their theory when the new data came in: it didn't match the models' predictions. So AFTER the predictions failed... they changed the parameters. But then claimed they had changed the parameters... BEFORE the data from JWST came in?
Do the ΛCDM theorists have a time-travel machine to go back and correct their failed predictions?
 
I'm not sure why you think that revising their ΛCDM model to account for the fact that it failed to correctly predict the subsequent JWST observations is proof that they didn't revise the parameters of their models? As far as I can tell, there were problems with their theory when the new data came in: it didn't match the models' predictions. So AFTER the predictions failed... they changed the parameters. But then claimed they had changed the parameters... BEFORE the data from JWST came in?
Do the ΛCDM theorists have a time-travel machine to go back and correct their failed predictions?

This finding isn't in tension with ΛCDM; I don't get why that's the narrative right now. ΛCDM is coupled most strongly to dark matter and dark energy, and predictions for subhalo distributions and, more importantly, baryon acoustic oscillations remain unchallenged. These are universal-scale confirmations of ΛCDM.

No, the issue is on slightly smaller scales, with galaxy formation. For example, we assume a certain IMF (initial mass function) when modelling galaxy formation, because we do not have the capacity to model individual stars in cosmological simulations (yet). We instead model stellar populations and assume what the distribution of those stars is. Different stars influence the dynamics of surrounding gas differently (i.e. massive stars will blow stronger winds and have bigger supernovae), and gas collapse is the necessary condition for star formation, and thus galaxy formation.

It is thought that the IMF for star formation could've been "top-heavy" in the early universe, meaning that massive stars formed preferentially. However, this is unconfirmed and is one of the many tensions that contribute to model errors. A lot of simulations don't change their IMF to account for this, because we simply don't know which IMF model would be correct. So we use the model that works at the present day. A sketch of what this assumption looks like in practice follows below.
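To make the IMF assumption concrete, here is a sketch of the kind of sampling a simulation bakes in (a Salpeter-like slope with illustrative mass bounds; not any specific code's actual prescription):

```python
# Sketch: inverse-transform sampling of stellar masses from a Salpeter-like
# IMF, dN/dM ~ M^-2.35, between 0.1 and 100 solar masses.
import numpy as np

rng = np.random.default_rng(42)

def sample_imf(n, m_min=0.1, m_max=100.0, alpha=2.35):
    """Draw n masses (solar units) from dN/dM proportional to M^-alpha."""
    a = 1.0 - alpha  # = -1.35 for Salpeter
    u = rng.uniform(size=n)
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

masses = sample_imf(100_000)
print(f"mean stellar mass: {masses.mean():.2f} Msun")  # ~0.35
print(f"fraction above 8 Msun (SN progenitors): {(masses > 8).mean():.4f}")
# A top-heavy early-universe IMF would flatten alpha, raising both numbers.
```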

Other factors/assumptions that affect galaxy growth include, but are not limited to: star formation criteria, AGN feedback (the black hole at the center spewing things out), metallicity, ambient temperature (gas can't cool and collapse if the ambient temperature is too high, as it would've been in the early universe), magnetic fields (these are seldom accounted for in hydrodynamic simulations, and basically never in one-zone/semi-analytic models), and so on.

There are so many moving parts with unknowns in high-redshift galaxy formation that it's just kind of ill-advised to jump straight to "this is in tension with ΛCDM!!!" We've seen several "these ancient galaxies have astronomers confused!" articles since JWST went up. Typically, the issue has been with the assumptions going into models of galaxy formation/star formation, or even the observations themselves - not cosmology.
 