Einstein wins again! Quarks obey relativity laws, Large Hadron Collider finds

I am still wondering how a quark can be a "fundamental" particle when the combination of 2 down quarks and 1 up quark (a neutron) can become a combination of one down quark and 2 up quarks (a proton) by emitting an electron, which is also supposed to be a "fundamental" particle that is not composed of quarks at all, even though quarks supposedly do not contain electrons.

Typically, I see something like this from https://en.wikipedia.org/wiki/Quark :
"A quark of one flavor can transform into a quark of another flavor only through the weak interaction, one of the four fundamental interactions in particle physics. By absorbing or emitting a W boson, any up-type quark (up, charm, and top quarks) can change into any down-type quark (down, strange, and bottom quarks) and vice versa. This flavor transformation mechanism causes the radioactive process of beta decay, in which a neutron (n) "splits" into a proton (p), an electron (e−) and an electron antineutrino (νe) (see picture). This occurs when one of the down quarks in the neutron (udd) decays into an up quark by emitting a virtual W− boson, transforming the neutron into a proton (uud). The W− boson then decays into an electron and an electron antineutrino.[72]"

For me, that indicates something entirely different from "building blocks" when a quantum physicist says "fundamental particle".
 
The result here is interesting but not really a surprise. At speeds slower than light, Special Relativity passes every test.

Its predictions for physics at speeds faster than light are a very different matter, but there is currently still no real way to test that.
 
I am still wondering how a quark can be a "fundamental" particle when the combination of 2 down quarks and 1 up quark (a neutron) can become a combination of one down quark and 2 up quarks (a proton) by emitting an electron, which is also supposed to be a "fundamental" particle that is not composed of quarks at all, even though quarks supposedly do not contain electrons.
For me, that indicates something entirely different from "building blocks" when a quantum physicist says "fundamental particle".
The decay emits the electron, not the quark. Conservation laws require the mass/energy, charge, and whatnot be preserved in Nature's bookkeeping. Nuclear reactions aren't like chemical reactions where, say, hydrogen evolves from zinc and hydrochloric acid.

The electron was never there in the first place. The decay process fashions it out of whole cloth from the inventory of the aforementioned attributes. Search Google Images for "Feynman diagram", e.g. an electron and positron emitting a gamma photon, thence quarks and a gluon, none of which were components of the fundamental electron or positron from the outset.
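That bookkeeping can be sketched in a few lines of Python. This is purely a toy tally of my own devising (the particle names and dictionary layout are not from any physics library), just to show that n → p + e⁻ + ν̄e balances every conserved number without the electron ever having been "inside" the neutron:

```python
# Toy conservation tally for beta decay: n -> p + e- + anti-nu_e.
# Charges in units of e; baryon and lepton numbers are the usual assignments.

PARTICLES = {
    "n":       {"charge": 0,  "baryon": 1, "lepton": 0},
    "p":       {"charge": +1, "baryon": 1, "lepton": 0},
    "e-":      {"charge": -1, "baryon": 0, "lepton": 1},
    "anti-nu": {"charge": 0,  "baryon": 0, "lepton": -1},
}

def totals(names):
    """Sum each conserved quantity over a list of particle names."""
    return {q: sum(PARTICLES[n][q] for n in names)
            for q in ("charge", "baryon", "lepton")}

before = totals(["n"])
after = totals(["p", "e-", "anti-nu"])
print(before == after)  # every quantum number balances
```

Charge balances because the down-to-up change (+1) is paid for by the electron (−1), and the antineutrino cancels the lepton number the electron introduces.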
 
I can read that thought process, but I don't agree with it.

As I said, the concept of "fundamental particle" has devolved into "we don't know how to split this particle with the theories and technology at our disposal right now", and nothing more than that meaning.

If quantum theorists believe that they can get an electron out of "nothing" without getting a positron out of that same "nothing" at the same time, then what is their problem ginning up a solution to the obvious lack of antimatter in our current universe that is consistent with the Big Bang Theory?

Yes, the theory that "explains" beta decay and positron decay and electron capture by nuclei does conserve the parameters currently combined into the Standard Model of quantum mechanics. But, that model also postulates all sorts of things that are just not "explained" in a manner that makes sense to people not confined to that particular box of thought processes.
 
I can read that thought process, but I don't agree with it.

As I said, the concept of "fundamental particle" has devolved into "we don't know how to split this particle with the theories and technology at our disposal right now", and nothing more than that meaning.

If quantum theorists believe that they can get an electron out of "nothing" without getting a positron out of that same "nothing" at the same time, then what is their problem ginning up a solution to the obvious lack of antimatter in our current universe that is consistent with the Big Bang Theory?

Yes, the theory that "explains" beta decay and positron decay and electron capture by nuclei does conserve the parameters currently combined into the Standard Model of quantum mechanics. But, that model also postulates all sorts of things that are just not "explained" in a manner that makes sense to people not confined to that particular box of thought processes.
Don't think of particles as being like rocks. They are 100% energy, not solid or anything similar in the way we usually think about objects.
 
As for relativity, I was hoping to see a result to indicate some evidence of length contraction. The only example of it has come from accelerators. A flattened particle, apparently, best explains what’s observed. Otherwise, all the tons of other evidence favors time dilation, AFAIK.

I find it curious that one must choose to use either time dilation or length contraction, but not both, in calculating the time to, say, Proxima Centauri.
 
How theorists think about "particles" is also part of the issue. We can all agree that when we try to measure subatomic "things", we see properties that look like particles under some conditions and like waves under other conditions. But, we really have no idea how waves can exist without being in some sort of medium that they displace to create forces.

The Michelson-Morley experiment to detect our motion through such a medium showed that we cannot detect motion through any sort of medium in space through which photons propagate. But, that does not mean that there is no such medium, and the Lorentz transformations of Special Relativity provide a mathematical solution to the observation by making the measurements of time and length dependent on the differences in (constant) speed between observers.

So, the quantum theorists use the concept of "fields" that are everywhere in space all of the time to propose that "particles" are really only energy that vibrates fields in particular ways. And, they have a pantheon of various fields and vibrations to explain what they find with high energy collisions between various "particles".

My only point is that we really don't fully understand what we are talking about. And, further, we have problems with combining that theory with astronomy observations and the General Theory of Relativity, which does quite a good job of matching astronomy observations - except perhaps for the observations of galactic object speeds, which seem to require more matter than we can detect (currently).

The BBT tries to bridge the existing gap between General Relativity and quantum mechanics so as to be able to extrapolate the apparent expansion of the universe backward in time to a single point (or wimps out at the Planck radius). But, we really don't have the answers to fundamental questions such as how those theories can explain the dominance of regular matter over antimatter in the universe today, among other important unexplained problems. Which is why I brought up the issue of beta decay seeming to not fit the idea of certain particles really being "fundamental".

So, there are plenty of people who are not "sold" on the current forms of the Standard Model or the Big Bang Theory. My thinking is that we are probably missing something important, so I support the research to look for things that might answer some of these questions.

What I don't support is the incessant writing of articles that state that things "are" what the theories say they are. My point is that we need to constantly be aware of the differences between what we actually can demonstrate with observations and what we postulate are the explanations for the underlying physics that produce those observations. We will make progress faster if we don't fall into the trap of believing what we assume, so that we discard the thinking about other potential explanations.
 
My only point is that we really don't fully understand what we are talking about. And, further, we have problems with combining that theory with astronomy observations and the General Theory of Relativity, which does quite a good job of matching astronomy observations - except perhaps for the observations of galactic object speeds, which seem to require more matter than we can detect (currently).
But astronomers are detecting regions of DM around a countless number of galaxies, large and small, and in clusters (e.g. Bullet cluster).

These are indirect observations, but they are almost as good as direct observations given the magnitude of their effects.

The BBT tries to bridge the existing gap between General Relativity and quantum mechanics so as to be able to extrapolate the apparent expansion of the universe backward in time to a single point (or wimps out at the Planck radius).
I favor the view that grander theories are required to go beyond what I see as the BBT. The GUT (Grand Unified Theory) is one example. Too many try to extrapolate the physics in the BBT into the realm of metaphysics or pseudoscience, which is beyond the boundaries of the BBT. Since the "bang" is found in this no-man's land, it's not hard to see why people stretch the BBT a little too far, but a better label for the BBT has yet to emerge.

My thinking is that we are probably missing something important, so I support the research to look for things that might answer some of these questions.
Yes, there should be a great deal to learn about the regions prior to t=1E-12 sec, where hard evidence begins to turn to mush. But that just means the BBT should come with limits of prior origins, just as Darwin's theory was sound even though it offered no explanation for the origin of life.
We will make progress faster if we don't fall into the trap of believing what we assume, so that we discard the thinking about other potential explanations.
Yes, all the more reason to assign labels for the areas differing in our confidence.
 
"I find it curious that one must choose to use either time dilation or length contraction, but not both, in calculating the time to, say, Proxima Centauri."
I think in this case the two are directly equivalent - they produce the same result. Choosing both mechanisms together would add the two together producing a wrong result.
So it could be said that in either you get the other for free - but I may be wrong.
 
I think in this case the two are directly equivalent - they produce the same result. Choosing both mechanisms together would add the two together producing a wrong result.
So it could be said that in either you get the other for free - but I may be wrong.
Since they are likely mutually exclusive, there is no free complement, I think.

The oddity of this is that we think in terms of what is real. If space actually contracts with relativistic speeds, then if we say time also changes at these speeds, then it too must be factored into the equation. Yet we can't easily do this without some serious math contortions.

We can make a ham sandwich or a cheese sandwich, but for some reason, we can't make a ham and cheese sandwich? ;)
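To put numbers on the one-sandwich point, here is a minimal sketch of a trip to Proxima Centauri (0.9c and 4.24 light-years are illustrative round figures). You can divide the Earth-frame time by gamma, or divide the distance by gamma, and both routes give the same proper time; applying both divides by gamma twice and gives the wrong answer:

```python
import math

d_ly = 4.24   # Earth-frame distance to Proxima Centauri, light-years
v = 0.9       # speed as a fraction of c

gamma = 1.0 / math.sqrt(1.0 - v**2)
t_earth = d_ly / v                       # Earth-frame trip time, years

# Route 1: time dilation -- the traveler's clock runs slow by gamma.
tau_dilation = t_earth / gamma

# Route 2: length contraction -- the traveler covers a shortened distance.
tau_contraction = (d_ly / gamma) / v

print(round(tau_dilation, 3), round(tau_contraction, 3))  # both ~2.054 years

# "Ham and cheese": applying both corrections double-counts gamma.
tau_wrong = t_earth / gamma**2
```

The two bookkeeping routes are the same fraction written two ways, which is why choosing either one gets you the other "for free".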
 
Helio, I think what is happening is that we have developed a mathematical model (Relativity Theory) that does a good job of making predictions, but we really don't have a good physical understanding of why the observations behave as they do.

There are some explanations in the popular literature that seem rational, but are not supported by the physicists. One example of that is the emission of a photon in the "up" direction at the event horizon of a black hole. The popular explanation for that photon never getting past the event horizon distance is to make the analogy to a speed boat in a swiftly flowing river, where the boat's maximum speed just barely matches the current, so it cannot make any progress upstream at all. But, that would mean that space is flowing into the black hole, which the theorists emphatically deny.

I am not so sure I buy their denials though. To a person who has a lot of background in fluid flows, it seems reasonable to question whether space can flow, if it can be bent, stretched and compressed in theory, already. And, I note that time dilation at a point in the gravitational well of a mass is equal to the time dilation in Special Relativity for an observer at infinite distance watching an object moving away at that same velocity as the escape velocity for that position in the mass' gravity well.
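That equality is an exact algebraic identity: since the escape velocity is v_esc = sqrt(2GM/r), the Special Relativity factor sqrt(1 − v_esc²/c²) is term for term the same expression as the Schwarzschild factor sqrt(1 − 2GM/rc²). A quick numerical check (solar-surface figures, purely illustrative):

```python
import math

# Gravitational time-dilation factor vs SR factor at escape velocity.
G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s
M = 1.989e30    # kg (one solar mass)
r = 6.957e8     # m  (solar radius)

v_esc = math.sqrt(2 * G * M / r)
sr_factor = math.sqrt(1 - (v_esc / c) ** 2)     # SR dilation at v_esc
gr_factor = math.sqrt(1 - 2 * G * M / (r * c * c))  # Schwarzschild factor

print(math.isclose(sr_factor, gr_factor, rel_tol=1e-12))  # identical
```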

If you consider that space could flow into mass, you even could explain space being stretched between galaxies such that photons traveling through space would be stretched (red shifted).

I am not trying to claim that I have a mathematically coherent theory for "flowing space", I am just noting that the exclusion of that possibility does not seem to be fully proven, and there seem to be reasons to suspect it.

Theorists like their own theories, which they have spent careers learning and trying to build, and anything that doesn't fit gets rejected, sometimes with elevated emotion. But, my experience with modelers of more mundane phenomena is that they tend to believe their models more than they should, at least until there is some direct refutation by observations that are repeatable. I am not expecting modelers of quantum or astronomy phenomena to be any different in their emotional attachments and underestimation of the uncertainties in their predictions that are not (yet) verifiable with observations. The BBT modelers have certainly had to revise the BBT time lines as new observations are made by better telescopes. But, it is hard to disprove a model that has so much flexibility due to so many fitting parameters.
 
In his commentary on this matter, Unclear Engineer said:

"My thinking is that we are probably missing something important, so I support the research to look for things that might answer some of these questions."

"We will make progress faster if we don't fall into the trap of believing what we assume, so that we discard the thinking about other potential explanations."


Agnosco said:

Regarding Time Dilation and Length Contraction:

I see these as concurrent effects of adding energy to matter [please bear with me]. The energy applied to the acceleration of matter alters the dynamics of the energy comprising that matter in such a manner as to cause physical events related to that matter to occur at a lower rate. This is equivalent to what is seen as a slowing of local time (time related to that particular matter).

Consider 'Time' as a measure of the rate at which physical events occur.

As a result, ALL physical events, such as those related to the aging of a living creature, occur at a lower rate at a higher speed because 'event time', that is, the portion of each 'period' available for 'things to happen', is reduced with increased speed.

An understanding of these concepts may be gained by anyone able to grok the hypothesis presented initially in my next post and elaborated upon in subsequent posts to this site dealing with a range of concepts such as Time Dilation and the fundamental nature of the speed limitation imposed upon matter.

I'm most encouraged to read the thoughts of 'Unclear Engineer' as expressed in this article, whose thinking is far from unclear to me. Such an individual, seemingly unspoiled by formal quantum 'education', would be capable of dissecting 'Quantum Mechanics - A Classical Interpretation' to perhaps conclude, as did one American astrophysicist (sadly now deceased), that there is merit in the ideas presented. As a retired astrophysicist suffering from terminal cancer and no longer having any 'skin in the game' in terms of the profession, he wrote to me that after detailed close study and constant review he had been unable to find a flaw in the reasoning presented and insisted that I continue to promulgate my ideas.

Being fundamentally disruptive to current 'theories', this description of a Universe in which all sub-atomic level events comply with the fundamentals of classical physics has not been pursued by proponents of current theories despite its wide promulgation. Interestingly, over the period from publication of the website in February 2016 until now [February 2025], there has not been one counter argument put forward, let alone a logically reasoned one, despite direct approaches to many well known scientists requesting comment.

I invite Unclear Engineer and other keen minds present on this forum to critically consider the hypothesis I put forward and respond through this forum and/or to the email address found in the linked Blogger post.

Too simple to be true? Maybe so. Please let me know.

Agnosco
 

Light - Light Interactions

IN SEARCH OF KNOWLEDGE

As site rules forbid links to the Blog where my hypothesis is introduced I have included the initial information as follows:


Are you capable of changing your mind, of really questioning what you've been taught?
If you think you understand the Double-Slit experiment, or even if you don't..........read on.
In his review of Anil Ananthaswamy's splendid book Through Two Doors at Once, Philip Ball said "According to the eminent physicist Richard Feynman, the quantum double-slit experiment puts us up against the paradoxes and mysteries and peculiarities of nature".
Feynman said of the twin slit experiment that it has in it the heart of quantum mechanics and that in reality, it contains the only mystery.
Philip Ball continued in his review to say “By Feynman's logic, if we could understand what is going on in this deceptively simple experiment, we would penetrate to the heart of quantum theory - and perhaps all its puzzles would dissolve."

In Through Two Doors at Once, Ananthaswamy concludes that "physics has yet to complete its passage through the double-slit experiment. The case remains unsolved."
Another quote of Feynman is “I would rather have questions that can’t be answered than answers that can’t be questioned.”
Can we dispel the mystery of quantum mechanics by questioning some of his answers?

The belief that light propagates as a wave is taken by Feynman as proven in the double-slit experiment where it is considered that photons of light waves interfere with each other to produce the patterns seen.
However.........
“Photons, the fundamental particles of light, do not interact with each other in everyday life.”
http://www.weizmann.ac.il/complex/Firstenberg/quantum-nonlinear-optics-strongly-interacting-photons
And..........
“Normally, beams of light pass through each other unperturbed.”
https://en.wikipedia.org/wiki/Two-photon_physics
And.........
“...instead of bouncing off of each other, these beams of light travel in straight lines, ignoring each other entirely.”
https://www.physics.utoronto.ca/~aephraim/PhotonGate/PhotonGate.html
Physics is fully aware that photons of light or any other energy level [or ‘frequency’] DO NOT interfere with each other, therefore it should be known that such an interaction cannot be responsible for the patterns observed in the double-slit experiment.
Is it that physicists just haven't noticed this flaw in a critical fundamental upon which so much else is based, or is a blind eye turned to it as there seems to be no viable explanation utilizing the alternative of light propagating somehow as a particulate thing?
If we set aside the discussion of 'interference patterns' formed by photons for now and consider the identical results obtained when electrons are used in the twin slit experiment we are able to discern a clear causal process.
[That is provided we are able to see past the confirmation bias displayed in the convenient view that electrons also propagate as a wave rather than as a particle!]
As we know, matter is comprised of atoms in association with each other. This is of course true for the material in which the two slits are formed for the twin slit experiment.
Under normal environmental conditions the atoms of any material thing, including those of the twin slit material, are in constant thermal motion.
In general terms, atoms present a negative charge to their surroundings owing to their electron field.
This charge appears as either a simple or complex spherical field and this field is on a very large scale in comparison with an electron passing in its vicinity.
A negatively charged electron approaching the electron field of an atom will be repelled by the [mobile] negative charge exhibited by that atom.
The direction taken by an electron repelled by the negative field of any atom it encounters is primarily determined by the angle at which the curvature of the atomic field is encountered and the instantaneous motion of the atom at that time.
Consideration of the twin slit material and its behavior at the atomic scale provides a clear picture showing that simultaneous or successive electrons passing through the twin slits will be deflected and scattered across a broad range by the atoms they encounter.
Picture small balls bouncing off very large balls that are constantly moving and jostling each other.
These deflections cause the electrons to encounter the twin slit screen in many places, producing patterns determined largely by the characteristics of the slit material [and the energy (frequency) of the light].
It should be clear that the pattern formed by electrons emitted one at a time to pass through the slits will accumulate the same type of image on a storage screen as that produced immediately by a flood of electrons.
It can be seen that a very similar but not identical pattern emerges with the use of one slit only. The basic mechanism for why this is so should now be apparent to you.
As the twin slit experiment produces identical results for electrons and photons it appears reasonable to conclude that the same or a similar mechanism may be involved in both instances.


"If we want a scientific theory, we also have to require that it describes what we observe. It's science, not maths. This means that the requirement that the assumptions describe what we observe is *necessary to select the theory. And so, one of the reasons for why a scientific theory is correct will always be "Because it describes what we observe"." Sabine Hossenfelder

ANALYSIS METHOD FOR NATURE OF LIGHT
In the world of electronic engineering the process for investigating the cause of the failure of a complex piece of equipment or system is similar to that employed in reverse engineering.
Reverse engineering is a process by which deductive reasoning is utilized in an attempt to understand the mechanisms by which a device, process, system or software meets its observed performance.
The same process is used to determine what caused a faulty piece of equipment to exhibit its observed failure symptoms.
Logically applied deduction follows a path of successive analysis of immediately prior causes to arrive at conclusions regarding the origins of present observations.
Based upon prior training and experience in addition to logical reasoning, an investigator will consider a range of ‘what if?’ questions while conducting their analysis.
The ability to imagine and analyze what may at first consideration appear to be unlikely or even ridiculous ‘what if?’ questions separates the most successful systems analysts from the majority and can produce valuable results where a more conventional approach may fail to do so.

The Nature of Light.
I came away from a series of lectures by professor Richard Feynman with a feeling of uneasy dissatisfaction and a reluctance to accept the inherent limitation on human knowledge that was implied and seemingly considered reasonable by physicists.
Although fully aware of my own intellectual limitations in comparison with those at the forefront of science I was also aware that many physicists themselves admit [quietly in most instances] that there must be a shortcoming somewhere in the fundamental theory.
Presently accepted concepts fail to account for all known aspects of reality.
Despite my lack of formal education in advanced physics I had the perhaps delusional thought that my considerable and successful experience in the analysis of many diverse complex systems in military and civilian electronic engineering, as well as in other fields of investigation, may have provided me with an advantage not afforded to many qualified scientists when it comes to ascertaining the characteristics light would need to possess to cause it to behave as observed, and to do so in all regards.
Applying the concepts of reverse engineering to the well documented behavior of light I asked myself a range of ‘what if?’ questions in an attempt to discover its true nature.
Considering it almost certainly an exercise in futility, but determined to either confirm the generally accepted characteristics of light, find answers to satisfy myself, or ultimately conclude that the subject matter is beyond the scope of my capabilities, I was surprised to conceive a 'particulate' photon model that appeared to meet the necessary criteria.
While my model for the nature and behavior of light [photons] may in fact be a pure fantasy, it seemed to work at a fundamental level. But did it explain any physical phenomena?
If you wish to understand how the equivalence of photons and electrons in this experiment could be true I suggest you closely analyze the presentation at https://www.hereticalphysics.com via The Wayback Machine, as the direct hereticalphysics site is no longer in use.
A capable and unspoiled mind should appreciate and gain from the experience, and any comments should confine themselves to analyzing and questioning the ideas explored rather than contrasting them with current beliefs.
Sabine Hossenfelder said in her excellent book Lost in Math “With hindsight one often wonders why a particular conclusion was not drawn earlier, even though the pieces were all there already.”

What do YOU think? Can we dispel some mysteries?
 
I appreciate the Feynman quote: "I would rather have questions that can’t be answered than answers that can’t be questioned.”

But, I have a problem with "Physics is fully aware that photons of light or any other energy level [or ‘frequency’] DO NOT interfere with each other, therefore it should be known that such an interaction cannot be responsible for the patterns observed in the double-slit experiment." My problem with that is the double slit experiment is looking at the effect of light hitting a physical surface, upon which it interacts so that the pattern of its illumination is visible. So, the question is whether the light effects on that surface can "undo" and "reinforce" each other on that surface. That doesn't change the mystery about time, though. How could a photon that hits a surface well after a previous photon be capable of undoing the effect on that surface that the first photon produced? I am thinking of photographic silver halide chemical reactions or electrical responses of photoelectric sensors.

If I understand Agnosco's posted blog material as he intends, he seems to be saying that the patterns on the screen are actually created by the effects on the photons of the electron clouds associated with the material in which the slits are formed, occurring as the photons pass through the slit.

However, I don't think you can simply combine the patterns from 2 slits, say by overlaying photos of 2 single-slit, multi-photon patterns taken days apart with the same slit in 2 different positions comparable to the double slit arrangement, and thus produce the double-slit pattern. It seems like that would be the test for his hypothesis. Maybe somebody has already done that test?
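For what it's worth, a toy far-field scalar-wave calculation shows why that overlay test would be decisive: overlaying two single-slit exposures adds intensities, while the standard wave account adds amplitudes first and only then squares, producing dark fringes that no overlay of separate exposures can reproduce. The slit separation and wavelength below are arbitrary illustrative numbers:

```python
import math

lam = 500e-9   # wavelength, m (illustrative)
d = 50e-6      # slit separation, m (illustrative)

def two_slit(theta):
    # Standard wave account: amplitudes from the two slits add, THEN square.
    phase = math.pi * d * math.sin(theta) / lam
    return 4 * math.cos(phase) ** 2   # in units of one slit's intensity

def sum_of_singles(theta):
    # Overlaid exposures: intensities add with no cross term.
    return 2.0

# At the first destructive angle the double slit is dark, but two
# overlaid single-slit exposures would still show light there.
theta_min = math.asin(lam / (2 * d))
print(two_slit(theta_min), sum_of_singles(theta_min))
```

If the slit-edge scattering hypothesis were right, the double-slit exposure should look like the overlay; the cos² fringes are exactly where the two accounts diverge.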
 
Helio, I think what is happening is that we have developed a mathematical model (Relativity Theory) that does a good job of making predictions, but we really don't have a good physical understanding of why the observations behave as they do.
True, but it is a better "how" than Newton's explanation since he admittedly offered no explanation for how gravity works. I am hopeful they will find gravitons someday, but, regardless, science keeps searching.


There are some explanations in the popular literature that seem rational, but are not supported by the physicists. One example of that is the emission of a photon in the "up" direction at the event horizon of a black hole. The popular explanation for that photon never getting past the event horizon distance is to make the analogy to a speed boat in a swiftly flowing river, where the boat's maximum speed just barely matches the current, so it cannot make any progress upstream at all. But, that would mean that space is flowing into the black hole, which the theorists emphatically deny.
Yes, I even asked Dr. Joe about this flow and he had not heard of it.

I haven't studied BHs partly because they're clearly enigmatic and a very deep understanding of GR seems required, unlike most of BBT. The time dimension, apparently, has to be taken far more seriously into account, which adds to the confusion. I just settle for the idea that inside the EH the escape velocity is > c, so no flow is necessary. I do discount any singularity claims, since they are highly subjective in nature, pun intended. ;)
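The "escape velocity > c" reading can at least be put in numbers: in the Newtonian formula, the escape speed sqrt(2GM/r) reaches c exactly at the Schwarzschild radius r_s = 2GM/c². A sketch with solar-mass figures (illustrative only; a real treatment needs full GR):

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s
M = 1.989e30    # kg (one solar mass)

# Schwarzschild radius: where Newtonian escape speed equals c.
r_s = 2 * G * M / c**2
print(round(r_s / 1000, 2), "km")   # ~2.95 km for a solar mass
print(math.isclose(math.sqrt(2 * G * M / r_s), c))
```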

The BBT modelers have certainly had to revise the BBT time lines as new observations are made by better telescopes. But, it is hard to disprove a model that has so much flexibility due to so many fitting parameters.
Yes, but that too was predicted and from the start. Lemaitre removed his expansion calculations from 1927 when he translated his Belgian (French-language) paper into English at Eddington's request. This was because Hubble had improved his distance data and had tweaked the "apparent velocity" rates of galaxies in his publication of 1929. [Hubble still used Slipher's redshifts, though, surprisingly, he didn't bother to mention Slipher in his paper. Funny how hardly anyone mentions this hiccup.]

It took Baade, at Mt. Wilson, to come in and revise our understanding of Cepheids and other variables (e.g. RR Lyrae), which Shapley, and subsequently Hubble, had gotten wrong. Given that Hubble's rate of 500 kps/Mpc made the universe younger than what science was then saying was the age of the Earth and stars, it wasn't too huge a step to gain acceptance. But it took a lot of work on Baade's part to refine their "absolute" magnitudes.
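The age problem with Hubble's original rate is simple arithmetic: the naive expansion age is just 1/H0, one unit conversion away. A sketch (500 (km/s)/Mpc is Hubble's 1929 figure; ~70 is the modern ballpark):

```python
# Naive expansion age 1/H0 from a Hubble rate in (km/s)/Mpc.
KM_PER_MPC = 3.086e19   # kilometers in a megaparsec
SEC_PER_YR = 3.156e7    # seconds in a year

def naive_age_gyr(h0_km_s_mpc):
    seconds = KM_PER_MPC / h0_km_s_mpc
    return seconds / SEC_PER_YR / 1e9

print(round(naive_age_gyr(500), 2))  # ~2 Gyr: younger than the Earth
print(round(naive_age_gyr(70), 1))   # ~14 Gyr
```

At 500 kps/Mpc the universe comes out around 2 billion years old, well under the accepted age of the Earth, which is exactly the tension Baade's recalibration relieved.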

As Yogi might say, "You can observe a lot by just looking". We've never been capable of looking at the earliest stars and galaxy formations. It's no surprise to most that we will be able to improve the BBT which is indeed flexible for early or late formation periods.

Paraphrasing him, "The past ain't what it used to be." [His cuter quip was "The future ain't what it used to be."]
 
True, but it is a better "how" than Newton's explanation since he admittedly offered no explanation for how gravity works. I am hopeful they will find gravitons someday, but, regardless, science keeps searching.
There is still a problem with how we try to explain "how gravity works". The typical GRT explanation about "bending space" shows a heavy object sitting in a depression it causes in a stretched piece of graph paper, and wants us to envision a ball bearing rolling around the depression. The issue is that this uses the conventional understanding of gravity to try to show that the conventional understanding is wrong - sort of a visual oxymoron.

Instead, show me a flat piece of graph paper that has been "distorted" within its plane to result in "straight lines" going in orbits of various sorts (hyperbolic, parabolic, elliptical and circular). That is what the words say - so let's see that in a picture.
 
Analogies are never perfect. They are attempts to point someone in the right direction when only darkness is ahead.

If one watches a marble orbit around the bowling ball on the trampoline, though only briefly, it hints that a planet traveling along a gravitational gradient line might be viewed as traveling straight.

Of course, better analogies are welcome. Having no analogies is problematic as math that can't be explained will be a poor representation of physics. [I think Einstein said as much.] But the math is brutal, so any analogy that points roughly in the right direction can be deemed helpful. But, as you know, too often its limitations aren't adequately presented.
 
With a movable array of thermometers one could plot the static (positional) spatial temperature density pattern coming from the slits. A space EM pattern. A rectangle of space temperature density.

But it's only a potential pattern until you put a thermometer or matter there. That temperature emission only interacts with matter. One cannot warm space, or place a temperature in space. Only matter can interact with it. Only matter can become warm. The intersection of two laser beams has no temperature unless you put matter there. Light and temperature do not interact with space, or in space, unless matter is there. Remember that when you see light curve: matter is curving it, not gravity.

All EM radiation is a temperature emission. We only feel a small slice of it, just as we only see a small slice of light.

Space has neither temperature nor light. Only matter has temperature and light, and only matter can emit them. Space is neither warm nor cold. It's nothing.

Both the supposition of propagated EM and the supposition of charge EM is greatly misunderstood.

Just an opinion from an old model.
 
Helio, actually what I am saying is that the analogy is not adequate, not just "imperfect". I think it fails to get people to understand how mass can make "straight lines" be curved - or, more from our perspective, how what appear to us to be curved lines are actually "straight" in "unbent" space.
 
Well, it’s not hard to look at geographical contours and recognize that any movement off the contour means a change in the relative values of KE and PE. If “straight” is limited to holding a given PE, then going both straight and in circles is hinted at. Not a close analogy, but a hint, IMO.
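A minimal Newtonian sketch of that energy bookkeeping, assuming a circular orbit (the satellite mass and the 400 km altitude here are just illustrative): staying on the "contour" fixes the split between kinetic and potential energy, with 2·KE = -PE, and any move off it trades one for the other.

```python
# Energy bookkeeping for a circular orbit (Newtonian sketch).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # Earth mass, kg
m = 1000.0             # satellite mass, kg (illustrative)
r = 6.371e6 + 400e3    # orbital radius: Earth radius + 400 km altitude, m

ke = G * M * m / (2 * r)   # kinetic energy on a circular orbit
pe = -G * M * m / r        # gravitational potential energy
total = ke + pe            # equals -GMm/(2r), constant along the orbit

print(ke, pe, total)
```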
 
What's most interesting to me is when hadrons split into quarks and the products of this fission have a mass greater than the original, often tens or even hundreds of times more. Does this mean that in a bound state the mass of the quarks decreases, or that they are less affected by gravity?
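For scale, here is the standard mass bookkeeping behind that question, a sketch using approximate current-quark masses (the precise values are scheme-dependent): the valence quarks' rest masses account for only about 1% of the proton's mass, and the remainder is QCD binding and kinetic energy rather than any change in how gravity acts on the quarks.

```python
# Proton mass vs. the rest masses of its valence quarks, in MeV/c^2.
# Quark values are rough current-quark masses; exact figures depend
# on the renormalization scheme.
m_up, m_down = 2.2, 4.7
m_proton = 938.3

quark_sum = 2 * m_up + m_down   # uud valence content
fraction = quark_sum / m_proton

print(quark_sum)                 # ~9.1 MeV
print(round(fraction * 100, 1))  # quark rest mass is ~1% of the proton mass
```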
 
Helio, the statement that things follow "straight" lines through "bent space" is the problem. There are no "contours" in that statement. And when you look at the path that a satellite follows, it is not the path that light follows through the same space. So, how does the speed of the passing object affect what a "straight through" path is?

Frankly, my impression is that the "explanation" is a complete failure to communicate.

The other attempt I have seen uses a rotation associated with time dilation. The object closest to the mass moves more slowly, compared to the object farther away, so it turns around the mass. Again, that has some plausibility, but I have not tried to make the math fit the observations. It doesn't "feel right" for something small like a rock orbiting something large like the Earth at LEO distances. And what we perceive is just the opposite: objects near the mass need to move faster to stay in a circular path.

I can actually make a more plausible explanation using the "flowing space" analogy for orbits. The "straight path" of the orbit is across the space flow current towards the mass, so there is a net "drift" direction that is the orbit. Remember, that would be a combination of motion "through" space plus the motion "of" the space itself. And the time dilation would be a combination of the Special Relativity factor for the orbital speed through space, plus the General Relativity factor for the effect of the proximity to mass, which is the same as traveling through space at the local escape velocity from that mass. I think that is consistent with the time dilation correction observations from GPS satellites, too.
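That last point can be checked against the well-known GPS numbers: the special-relativistic term from orbital speed slows the satellite clock by roughly 7 microseconds per day, while the gravitational term from sitting higher in Earth's potential speeds it up by roughly 45, for a net gain near 38 microseconds per day. A weak-field sketch (Earth's rotation and orbital eccentricity ignored):

```python
import math

# GPS clock-rate sketch: SR slowdown from orbital speed plus GR speedup
# from the weaker potential at altitude (weak-field approximations).
G  = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M  = 5.972e24     # Earth mass, kg
c  = 2.998e8      # speed of light, m/s
Re = 6.371e6      # Earth radius, m
r  = 2.6571e7     # GPS orbital radius (~20,200 km altitude), m

v  = math.sqrt(G * M / r)               # circular orbital speed, ~3.9 km/s
sr = -0.5 * v**2 / c**2                 # special-relativistic rate shift
gr = (G * M / Re - G * M / r) / c**2    # gravitational-potential rate shift

us_per_day = 86400 * 1e6
print(round(sr * us_per_day, 1))         # ~ -7 microseconds/day
print(round((sr + gr) * us_per_day, 1))  # net ~ +38 microseconds/day
```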
 
If you accept present theory, wouldn't there be a spherical spacetime gradient around the object?

A radial time-zone gradient around the object, set by the object's mass and velocity.

Hard to compute. And that doesn't include the time difference of an intruder, with its own mass and velocity.

And then the interaction of these two time zones. Time entities.

Harder to compute.

For a presumed constant velocity: never a measured velocity, let alone one confirmed to be constant.

Just an uninformed observation.

Gravity remains the great paradox for me.