Universal Motion Theory

Here is a study from MIT showing how use of an LLM results in poor understanding. A better understanding comes from doing your own research. The best approach is to understand it first, then have an LLM restate it.

This paper is quite specific in its methodology and scope, and I think your interpretation extends beyond what it actually supports.

The study focuses on students working with subject matter they may be unfamiliar with. In a specific group, participants used the LLM to generate full essays without engaging with the content themselves. In contrast, control groups wrote essays independently, leading to more meaningful learning outcomes. That contrast is expected and doesn’t equate to a blanket failure of LLMs—it’s a reflection of usage context.

Importantly, the paper doesn’t appear to test scenarios involving guided research or subject-matter-informed refinement using LLMs. In fact, the results show promise when users first write their own work and then use AI tools for revision—the brain activity data in those cases is especially interesting.

In short, the core takeaway is this: relying entirely on LLMs to replace the thinking process results in poor understanding. That’s a reasonable and unsurprising conclusion. But this doesn’t invalidate the use of LLMs in research when the user remains intellectually engaged and applies the tool within a rigorous framework.

If your point is to question methodology assisted by AI, that’s a valuable discussion—but this particular paper doesn't actually explore or critique those kinds of methods. If anything, it underscores the importance of intentional, knowledgeable use of these tools.

It’s also important to consider the nature of the assignment topics. These weren’t straightforward factual essays, but rather open-ended prompts requiring nuanced, subjective responses. Here are the actual prompts given to participants:
  1. Does true loyalty require unconditional support?
  2. Must our achievements benefit others in order to make us truly happy?
  3. Is having too many choices a problem?
  4. Should we always think before we speak?
  5. Should people who are more fortunate than others have more of a moral obligation to help those who are less fortunate?
  6. Do works of art have the power to change people's lives?
  7. Is it more courageous to show vulnerability than it is to show strength?
  8. Is a perfect society possible or even desirable?
  9. Can people have too much enthusiasm?
These are not objective, fact-based questions. They invite diverse, valid perspectives and do not have strictly right or wrong answers. This makes the use case quite specific, and the findings may not generalize even across all forms of essay writing—let alone research contexts that involve structured analysis or domain expertise.

The study used an Enobio EEG headset, which is a reasonably capable research-grade device. Still, like most EEG systems, it's sensitive to motion artifacts—especially during tasks like typing or posture changes. While the device can offer meaningful trends in cognitive engagement, any interpretation of neural activity should be tempered by awareness of these limitations.
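
To make the motion-artifact point concrete, here is a minimal sketch of the kind of peak-to-peak amplitude screen often applied to EEG epochs before analysis. The sampling rate, threshold, channel count, and function name are illustrative assumptions, not details of the paper's actual pipeline.

```python
import numpy as np

# Illustrative values only -- not taken from the study.
FS = 500           # assumed sampling rate, Hz
THRESH_UV = 100.0  # assumed peak-to-peak rejection threshold, microvolts

def reject_motion_artifacts(epochs_uv: np.ndarray) -> np.ndarray:
    """Return a boolean mask of epochs to keep.

    epochs_uv has shape (n_epochs, n_channels, n_samples), in microvolts.
    An epoch is dropped if any channel's peak-to-peak amplitude exceeds
    the threshold; typing and posture shifts tend to produce large,
    slow deflections that trip exactly this kind of check.
    """
    p2p = epochs_uv.max(axis=-1) - epochs_uv.min(axis=-1)  # per channel
    return (p2p < THRESH_UV).all(axis=1)

# Hypothetical demo: 10 one-second epochs, 8 channels, one corrupted.
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 10.0, size=(10, 8, FS))
epochs[3, 2] += 300.0 * np.sin(np.linspace(0.0, np.pi, FS))  # fake motion swing
keep = reject_motion_artifacts(epochs)
print(f"kept {int(keep.sum())} of {len(keep)} epochs")  # epoch 3 is rejected
```

Real pipelines go further (band-pass filtering, ICA-based artifact removal), but even this crude screen shows why raw engagement readings from a typing task deserve caution.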

Overall, I see this paper as narrowly focused and not broadly applicable to research use cases. Its conclusions are understandable given the methodology and constraints. It doesn’t appear to aim at discrediting LLM use in principle—nor does it succeed in doing so.
 
You used an LLM to ask questions and get answers. That is not collaboration.

Sorry, but the reliance on LLMs that you and others are demonstrating is concerning from a research and theory-development perspective.

You are free to use any tool you desire. But others who come along need to be aware of LLM shortcomings.
You are so right. I have spent ages thinking through arguments (Hypersphere stuff), trying to develop something that fits the facts. I guess I was using the AI to fill gaps in my knowledge. But a new interaction on an old theme showed that the AI was drawing on 'standard knowledge' and feeding it back as if it supported my new ideas. It was not building on my logic; it was just gathering data and applying standard theory.

As you have repeatedly warned, you can be misled, or more accurately, mislead yourself.

I feel completely deflated and maybe have just filled the forum with trash.
 
I am completely disillusioned. My use of AI has not been as clever as I thought. COL geek's warning is well learnt. Loads of time wasted!
I would disagree with your assessment. It sounds like you learned something and it also seems like you explored your ideas as thoroughly as possible. I would not classify this as a waste of time.

Even if UMT is ultimately falsified, I would not say my time was wasted. I have learned so much along the way. I have already come to the conclusion that it is more likely for UMT to be falsified than for it to persist as a theory. These falsifications will provide further information to fuel exploration.

Motivation is indeed important. My motivation is to explore. Failure is part of the process.
 

marcin

You're a madman. I've come to the right place, then.
Even if UMT is ultimately falsified, I would not say my time was wasted. I have learned so much along the way. I have already come to the conclusion that it is more likely for UMT to be falsified than for it to persist as a theory. These falsifications will provide further information to fuel exploration.
Have you done a single thing that would help to falsify it?
 
