> @COLGeek Could you give an example of a physical theory created with an LLM that you consider reasonable?

None that I am aware of. Do you have an example?
> None that I am aware of. Do you have an example?

No. Thx for the honest answer.
> Now, I will say that an LLM can be used to draft a reasonable summary or comparison of some technical information (including an explainer for a theory). However, I would suggest such output be taken with a grain of salt. As noted earlier, I have seen many well-written LLM outputs that were simply wrong.

I agree.
> LLMs have a place, but not for theory generation that can be taken at face value. Skepticism is very much needed when looking at these LLM-generated results.

I agree. Why, then, is this forum a place for them?
Your whole post is intentionally disruptive. I would probably keep my mouth shut if you didn't mess with the Einstein field equations by adding your own term, like all the people who want to change or ridicule the basics of physics without basic knowledge. You also seriously praise yourself:

> Sprinkles
> Local quantum deviations from expected geometry.
> Commonly dismissed as “instrument noise,” but actually the cosmic equivalent of sarcasm.
> Function: Decorative. Disruptive. Possibly intentional.
> Cannot be predicted, only savored.

and write about the invented data that you'll be pointing to:

> We don’t simplify.
> We sharpen.
> We coat the raw edge of revelation in metaphor, not to sweeten it, but to make it palatable.
> Because the universe is not here to be clean. It is here to be real.
> We will laugh in the halls of orthodoxy and point to the data when they call us fools.

Please, point me to your data now and also answer my last question in your thread: https://forums.space.com/threads/pastry-baked-heresy-a-manifesto.70914/post-616472
An asymmetrical donut.
UMT had been pointing there all along. Bounded curved motion doesn’t collapse inward — it wraps. It folds into itself without vanishing. And as we worked through the math, it became clear: the toroidal structure wasn’t just a metaphor. It was required. It sits directly in UMT’s falsifiability table — the theory lives or dies on whether these shapes can be seen.
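For readers who want the geometry pinned down: the posts never define a surface, but the standard ring-torus parametrization that "toroidal" usually refers to is

$$
\mathbf{x}(\theta,\varphi)=\bigl((R+r\cos\theta)\cos\varphi,\;(R+r\cos\theta)\sin\varphi,\;r\sin\theta\bigr),
\qquad 0\le\theta,\varphi<2\pi,\quad R>r>0,
$$

where $R$ is the distance from the axis to the center of the tube and $r$ is the tube radius. Such a surface is bounded and closes on itself without a boundary, which is presumably what "wraps" and "folds into itself" gesture at; the parametrization itself is textbook geometry, not something supplied by UMT.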
I am not a physicist or a cosmologist, so this theory wasn’t conceived by studying data or devising complex formulas. I am an explorer of consciousness, and the idea came as a flash of inspiration while I was meditating and inquiring with a friend, Valerie. A few days beforehand, I saw the following image on Wikipedia…
https://evolvingsouls.com/wp-content/uploads/Curved-Universe.jpg
And we had been experiencing torus-shaped energy fields (like the one pictured below) during our meditations and self-inquiries for several weeks…
https://evolvingsouls.com/wp-content/uploads/Animated-Torus.gif
Then, in a flash of inspiration, the two images came together (as pictured below) and we realised that the universe had toroidal geometry. This is not a new idea, but it was new to us, and we knew it was significant.
(...)
https://evolvingsouls.com/blog/toroidal-universe/
> We shouldn't be afraid of these tools. We should be using them to make our thinking sharper.

We actually agree, a bit. An LLM can be useful as you stated and as previously discussed.
> If anyone wonders how much trolling there is in your trolling, you're seriously putting your donut on top of your Universal Motion Theory.

I am. I won't deny the data just because it isn't perfectly shaped.
> We actually agree, a bit. An LLM can be useful as you stated and as previously discussed.
> However, it is critical to understand how LLMs actually work. Think of them a bit like search engines on steroids. They will try to formulate a response to the questions asked. Those responses can be true, or not.
> There are endless cases of well-written (by LLMs) piffle and nonsense. They are well reasoned and thoroughly wrong at the same time.

We might agree more than you think. You’ve said LLMs can be useful. I’ve used them not as sources of truth, but as instruments of inquiry. They’re not search engines on steroids; they’re cognitive amplifiers. They help me test, iterate, and explore edge cases far faster than human discourse alone allows.
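To make the quoted mechanism concrete, here is a toy sketch, with an invented vocabulary and made-up probabilities, of why a model that samples from a learned distribution always produces something fluent whether or not it is true:

```python
import random

# Toy illustration, not a real model: the vocabulary and probabilities
# below are invented. A language model samples the next token from a
# learned distribution; nothing in the sampling step consults reality,
# so fluent-but-false output is a normal mode of operation.
NEXT_TOKEN_PROBS = {
    "the universe is": {"expanding": 0.6, "a donut": 0.3, "flat": 0.1},
}

def sample_continuation(prompt: str) -> str:
    """Pick a continuation weighted by the (made-up) model probabilities."""
    dist = NEXT_TOKEN_PROBS[prompt]
    tokens = list(dist)
    weights = list(dist.values())
    return random.choices(tokens, weights=weights)[0]

# Every run prints a grammatical sentence; its truth is a coin flip
# weighted by the training data, exactly the point made above.
print("the universe is", sample_continuation("the universe is"))
```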
> I am. I won't deny the data just because it isn't perfectly shaped.

Is this the data confirming your UMT, or donut-shaped black holes, or a donut-shaped universe?
I find it curious that you’re offended on Einstein’s behalf, when he himself expressed clear discomfort with singularities.
Example quote (Einstein & Rosen, 1935 paper on the Einstein-Rosen bridge):

> “The solution given by Schwarzschild... suggests that the singularity is not real.”

In 1947, Einstein wrote in a letter to Max Born:

> “I do not believe in singularities because I do not believe in mathematical infinities as representing something real.”
> If half the energy spent dismissing ideas were spent asking better questions, imagine where it might lead. These tools are free. Ask your own. See what unfolds.

Oh no, please, I'm dying to know what coordinates you used in your donut's D_μν tensor. Don't be quiet and answer my question please.
I understand — we spend years learning the universe through familiar patterns. Then along comes someone questioning it — with equations, and yes, a little flair of confection. I know what I’m proposing is disruptive. But are you suggesting I should stay quiet? That attitude doesn’t look like skepticism — it looks like suppression.
Compare this statement:

> We might agree more than you think. You’ve said LLMs can be useful. I’ve used them not as sources of truth, but as instruments of inquiry.

with your previous statements:

> We begin with a truth so soft, so obvious, that it slips through the fingers of high theory:
> (...)
> But the glaze of truth is unmistakable.
> (...)
> It is a sign they have reached the edge of the Crumb Horizon and glimpsed the frosted core of truth.
> Yes, they can echo nonsense if misused. So can people. The key is the question. If the query is thoughtful, the tool sharpens it. If it’s hollow, the tool reflects that back.

While the questions posed are important, so is the training data. Results must be vetted before being considered factual. Many leap to incorrect conclusions when they don't perform this step. An assumption of accuracy is flawed.
...and this:

> Donut Tensor (D_μν)
> A corrective term to Einstein’s field equations accounting for observable asymmetries in gravitational topology.
> Predicted by those who stared into the ngEHT and saw pastry instead of perfection.
> Units: curvature per bite
> Behavior: grows unstable under spherical assumptions.
> Note: All components are understood in pastry-coherent units (PCUs), normalized against a reference cruller.

That definitely does not answer my question about your coordinate system, but it tells me that you are mixing your fairy units with the actual units in the Einstein field equations. That makes your tensor a joke, and since it's the only mathematical formulation in your theory, it makes your whole theory a joke.
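For anyone keeping score on the units point: in the standard Einstein field equations,

$$
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu},
$$

every term carries dimensions of curvature, i.e. $\mathrm{length}^{-2}$: $g_{\mu\nu}$ is dimensionless, $\Lambda$ has units of $\mathrm{m}^{-2}$, and $\tfrac{8\pi G}{c^{4}}T_{\mu\nu}$ works out to $\mathrm{m}^{-2}$ as well. So an additive correction, which is how the quoted definition describes $D_{\mu\nu}$, would have to be given in a stated coordinate system with the same $\mathrm{m}^{-2}$ dimensions, for example

$$
G_{\mu\nu} + \Lambda g_{\mu\nu} + D_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu},
$$

where placing $D_{\mu\nu}$ on the left is only an illustrative assumption, since the posts never say where it enters. "Curvature per bite" is not such a unit, which is the substance of the objection above.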
> LLMs are like every other computer system humans have ever devised in one key respect: garbage in -> garbage out.
> Everything depends on the quality of the prompt.

...and the data the prompt queries.
> ...and the data the prompt queries.
> Even then, an LLM will only provide a likely response based on similar data. It is a guess, possibly correlated with something else, possibly pure fiction.
> They will attempt to respond to nearly any question/prompt, whether well founded or not.

Yes, although in most cases the data is pretty good. Exceptions come where the data was distorted for commercial, political, or legal reasons, and in cases where there is entrenched dogma in academia (e.g., materialism in physics, anti-realism in a lot of postmodern political philosophy). There are ways to compensate for at least some of this. The LLM will even tell you where it thinks its own data might have been distorted.
> I'm sure most of the data is pretty good. Unfortunately, this is what gets us into problems. One of the prerequisites for trusting an AI output is that you know the input is true. Give us a way to tell this and we'll believe it. Until then, I consider all AI output false and in need of verification.

It all needs to be checked for mistakes, yes. AI is capable of indescribable brilliance, but it also regularly makes very silly mistakes. In general it is pretty good, though. It is exceptionally good at explaining the relationships between different sets of concepts.
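One cheap, model-agnostic version of the verification both posts are asking for is self-consistency sampling: ask the same question several times and treat disagreement as a red flag. A minimal sketch; `ask_llm` is a hypothetical stub here, not any vendor's API, and high agreement still proves nothing on its own:

```python
import collections

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever LLM you use.
    Wire in a real client here; nothing below assumes a specific vendor."""
    raise NotImplementedError

def self_consistency(prompt: str, n: int = 5) -> tuple[str, float]:
    """Ask the same question n times and measure agreement.
    High agreement does NOT establish truth (a model can be consistently
    wrong), but low agreement is a cheap signal that the answer is a
    guess and needs the human vetting discussed above."""
    answers = [ask_llm(prompt).strip().lower() for _ in range(n)]
    best, count = collections.Counter(answers).most_common(1)[0]
    return best, count / n

# Usage (once ask_llm is wired up): flag anything under ~0.8 for review.
# answer, agreement = self_consistency("Does the EHT image of M87* show a torus?")
```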
> We understand AI is capable of brilliance and silliness. The problem is sorting one from the other. No one seems to be able to do that.

What is generally recognizable is LLM-written content, especially on forums. It tends to stand out for its style and syntax.
> We understand AI is capable of brilliance and silliness. The problem is sorting one from the other. No one seems to be able to do that.

You raise an important point: AI output does range from insightful to flawed, and distinguishing between the two is critical. But that's precisely why engagement matters. If we're concerned about quality, we need to evaluate content directly rather than disqualify it based solely on its origin.