Universal Motion Theory

Here is a study from MIT showing how use of an LLM results in poor understanding. A better understanding comes from doing your own research. The best approach is to understand it first, then have an LLM restate it.

This paper is quite specific in its methodology and scope, and I think your interpretation extends beyond what it actually supports.

The study focuses on students working with subject matter they may be unfamiliar with. In a specific group, participants used the LLM to generate full essays without engaging with the content themselves. In contrast, control groups wrote essays independently, leading to more meaningful learning outcomes. That contrast is expected and doesn’t equate to a blanket failure of LLMs—it’s a reflection of usage context.

Importantly, the paper doesn’t appear to test scenarios involving guided research or subject-matter-informed refinement using LLMs. In fact, the results show promise when users first write their own work and then use AI tools for revision—the brain activity data in those cases is especially interesting.

In short, the core takeaway is this: relying entirely on LLMs to replace the thinking process results in poor understanding. That’s a reasonable and unsurprising conclusion. But this doesn’t invalidate the use of LLMs in research when the user remains intellectually engaged and applies the tool within a rigorous framework.

If your point is to question methodology assisted by AI, that’s a valuable discussion—but this particular paper doesn't actually explore or critique those kinds of methods. If anything, it underscores the importance of intentional, knowledgeable use of these tools.

It’s also important to consider the nature of the assignment topics. These weren’t straightforward factual essays, but rather open-ended prompts requiring nuanced, subjective responses. Here are the actual prompts given to participants:
  1. Does true loyalty require unconditional support?
  2. Must our achievements benefit others in order to make us truly happy?
  3. Is having too many choices a problem?
  4. Should we always think before we speak?
  5. Should people who are more fortunate than others have more of a moral obligation to help those who are less fortunate?
  6. Do works of art have the power to change people's lives?
  7. Is it more courageous to show vulnerability than it is to show strength?
  8. Is a perfect society possible or even desirable?
  9. Can people have too much enthusiasm?
These are not objective, fact-based questions. They invite diverse, valid perspectives and do not have strictly right or wrong answers. This makes the use case quite specific, and the findings may not generalize even across all forms of essay writing—let alone research contexts that involve structured analysis or domain expertise.

The study used an Enobio EEG headset, which is a reasonably capable research-grade device. Still, like most EEG systems, it's sensitive to motion artifacts—especially during tasks like typing or posture changes. While the device can offer meaningful trends in cognitive engagement, any interpretation of neural activity should be tempered by awareness of these limitations.
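
To make that concrete: before interpreting any engagement metric, EEG pipelines typically screen out motion-contaminated segments, often with something as simple as a peak-to-peak amplitude threshold per epoch. A minimal sketch of that idea in Python (assuming numpy; the threshold, channel count, and synthetic data are illustrative placeholders, not values from the study):

```python
# Crude motion-artifact screening: reject any epoch whose peak-to-peak
# amplitude exceeds a threshold on any channel. All values are placeholders.
import numpy as np

def reject_artifact_epochs(epochs, peak_to_peak_uv=150.0):
    """epochs: (n_epochs, n_channels, n_samples) in microvolts.
    Returns a boolean mask of epochs to keep."""
    ptp = epochs.max(axis=-1) - epochs.min(axis=-1)  # per epoch, per channel
    return (ptp < peak_to_peak_uv).all(axis=1)       # clean on every channel

# Toy data: 10 epochs, 8 channels, 1 s at 500 Hz, plus one simulated artifact
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 20.0, size=(10, 8, 500))
epochs[3] += 400.0 * np.sin(np.linspace(0.0, 3.0, 500))  # motion-like swing
keep = reject_artifact_epochs(epochs)
print(f"kept {keep.sum()} of {len(keep)} epochs")        # expect 9 of 10
```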

Overall, I see this paper as narrowly focused and not broadly applicable to research use cases. Its conclusions are understandable given the methodology and constraints. It doesn’t appear to aim at discrediting LLM use in principle—nor does it succeed in doing so.
 
You used an LLM to ask questions and get answers. That is not collaboration.

Sorry, but the reliance on LLMs that you and others are demonstrating is concerning from an actual research and theory-development perspective.

You are free to use any tool you desire. But others who come along need to be aware of LLMs' shortcomings.
You are so right. I have spent ages thinking through arguments (Hypersphere stuff), trying to develop something that fits the facts. I guess I was using the AI to fill in the gaps in my knowledge. But a new interaction on an old theme showed that the AI was drawing on 'standard knowledge' and feeding it back as if it supported my new ideas. It was not building on my logic, just gathering data and applying standard theory.

As you have repeatedly warned, you can be misled, or more accurately, mislead yourself.

I feel completely deflated and maybe have just filled the forum with trash.
 
I am completely disillusioned. My use of AI has not been as clever as I thought. COL geek's warning is well learnt. Loads of time wasted!
I would disagree with your assessment. It sounds like you learned something and it also seems like you explored your ideas as thoroughly as possible. I would not classify this as a waste of time.

Even if UMT is ultimately falsified, I would not say my time was wasted. I have learned so much along the way. I have already come to the conclusion that it is more likely for UMT to be falsified than for it to persist as a theory. These falsifications will provide further information to fuel exploration.

Motivation is indeed important. My motivation is to explore. Failure is part of the process.
 

I would disagree with your assessment. It sounds like you learned something and it also seems like you explored your ideas as thoroughly as possible. I would not classify this as a waste of time.

Even if UMT is ultimately falsified, I would not say my time was wasted. I have learned so much along the way. I have already come to the conclusion that it is more likely for UMT to be falsified than for it to persist as a theory. These falsifications will provide further information to fuel exploration.

Motivation is indeed important. My motivation is to explore. Failure is part of the process.
Have you done a single thing that would help to falsify it?
 
Have you done a single thing that would help to falsify it?
From the UMT manuscript:

A cornerstone of scientific theory is empirical falsifiability. Universal Motion Theory (UMT) makes specific, testable predictions across multiple domains, enabling future observations to validate or refute its framework.

Key falsifiability criteria include:

  • Gravitational Wave Echoes: If future gravitational wave observations with increased sensitivity (e.g., LIGO A+, Cosmic Explorer) detect no evidence of post-merger gravitational wave echoes at amplitudes and delay times predicted by toroidal activation structures, this aspect of UMT would be directly challenged (a toy echo-train template is sketched just after this list).
  • Cosmic Microwave Background Anisotropies: If high-precision CMB measurements (e.g., CMB-S4) continue to match ΛCDM predictions without detectable small-scale deviations or activation-induced non-Gaussian signatures, UMT's recombination transition model would face increasing tension.
  • Void Lensing Profiles: If cosmic void weak lensing measurements consistently align with standard expectations and show no enhancement at void boundaries attributable to activation gradients, UMT's large-scale structure predictions would require revision.
  • Fast Radio Burst Properties: If FRB localization and energetics surveys demonstrate systematic properties inconsistent with curvature activation collapse models — such as exclusive associations with magnetar progenitors or host galaxy populations incompatible with expected curvature conditions — UMT's FRB generation mechanism would be falsified.
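
As an illustration of the first bullet (not text from the manuscript): echo searches generally fit templates built from successively delayed, damped, and often phase-inverted copies of the post-merger ringdown. A minimal toy sketch in Python (assuming numpy; the frequency, delay, and damping values are arbitrary placeholders, not predictions of toroidal activation structures):

```python
# Toy echo-train template: repeated, delayed, damped copies of a ringdown
# pulse, with the sign flipping at each echo. All parameters are placeholders.
import numpy as np

fs = 4096.0                        # sample rate in Hz
t = np.arange(0.0, 1.0, 1.0 / fs)  # one second of strain time series

def ringdown(t, f0=250.0, tau=0.004):
    """Damped sinusoid standing in for the post-merger ringdown."""
    return np.exp(-t / tau) * np.sin(2.0 * np.pi * f0 * t)

def echo_train(t, n_echoes=5, delay=0.1, damping=0.5):
    """Sum of n_echoes copies, each delayed by `delay` seconds and
    scaled by (-damping)**n to model damping plus phase inversion."""
    h = np.zeros_like(t)
    for n in range(n_echoes):
        shifted = t - n * delay
        mask = shifted >= 0.0
        h[mask] += (-damping) ** n * ringdown(shifted[mask])
    return h

h = echo_train(t)  # a template like this would be matched against strain data
```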
A key empirical test of Universal Motion Theory (UMT) lies in determining whether observed quasar jet orientations exhibit non-random alignment with activation gradient fields \( \nabla \Phi(\rho) \). If recursive motion and jet formation are influenced by curvature activation, large-scale patterns should emerge in vector orientations—particularly in high-redshift quasar samples where polarization vectors trace spin-axis structure.
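
A standard way to quantify that kind of non-random alignment is a Rayleigh-type test on the angle residuals, with angles doubled because position angles are axial (θ and θ + 180° describe the same axis). A minimal sketch in Python (assuming numpy; the predicted angles below are random placeholders standing in for an actual evaluation of \( \nabla \Phi(\rho) \) at each quasar's sky position):

```python
# Axial alignment test: are jet position angles aligned with the position
# angles predicted from a gradient field? Placeholder data throughout.
import numpy as np

def rayleigh_axial(jet_pa_deg, predicted_pa_deg):
    """Rayleigh test on doubled angle residuals (axial data).
    Returns the mean resultant length R and an approximate p-value for
    the null hypothesis of no alignment (uniform residuals)."""
    delta = np.deg2rad(jet_pa_deg - predicted_pa_deg)
    z = np.exp(2j * delta)        # doubling treats theta and theta+180 alike
    R = np.abs(z.mean())
    n = len(delta)
    return R, np.exp(-n * R**2)   # first-order Rayleigh p-value approximation

rng = np.random.default_rng(1)
jet_pa = rng.uniform(0.0, 180.0, size=200)   # placeholder jet position angles
pred_pa = rng.uniform(0.0, 180.0, size=200)  # placeholder gradient directions
R, p = rayleigh_axial(jet_pa, pred_pa)
print(f"R = {R:.3f}, p = {p:.3f}")  # uniform placeholders, so p should be large
```

A small p-value on real data would indicate alignment worth follow-up; a large one would be a non-detection of the kind the manuscript says must be interpreted cautiously.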

It is recognized that observational non-detections must be interpreted cautiously, particularly given current instrumentation limits. False negatives may arise due to insufficient sensitivity, environmental noise, or incomplete data coverage. Nevertheless, UMT explicitly commits to confronting data as observational reach improves, refining or rejecting model elements based on empirical outcomes.

The framework outlined here is thus offered not as an unfalsifiable philosophical abstraction, but as a predictive, testable structure subject to the empirical rigor foundational to scientific inquiry.

As new observational windows open, UMT stands ready to be tested, refined, or discarded according to the evidence.
 
"A key empirical test of Universal Motion Theory (UMT) lies in determining whether observed quasar jet orientations exhibit non-random alignment with activation gradient fields"

Didn't they just recently find that galaxies are oriented non-randomly in the universe?
 
