Question Can LLM's theories be banned?


marcin

You raise an important point, AI output does range from insightful to flawed, and distinguishing between the two is critical. But that’s precisely why engagement matters. If we’re concerned about quality, we need to evaluate content directly rather than disqualify it based solely on its origin.

Blanket dismissal makes it impossible to ever identify the ‘brilliance’ you acknowledge AI might produce. The act of scrutiny, of assessing reasoning, evidence, and internal consistency, is what separates skepticism from evasion. I'm fully open to critique of any specific claims I’ve presented, and I welcome it. But rejecting them without examination doesn’t protect standards—it prevents discourse.

Much of the conversation here appears focused on discrediting LLM-generated content rather than evaluating the ideas themselves. The thread title, “Can LLM’s theories be banned?”, reflects that tendency.
I just want to tell you that you already talk like an LLM. My beloved phrase is "A rather than B." You don't need to state anything firmly anymore; you can always give a preference rather than a firm statement by using the "A rather than B" construction. I used it ironically.
 
I agree the source is irrelevant. Didn't the benzene ring come from a dream of a snake biting its own tail? Similarly, AI can point us in a direction we can then search. But we cannot use AI as a source; it cannot be quoted, and there is no way to go check the source. The problem we are seeing in our forum is people using AI and not telling us. They are treating it as an original source. Can't do that.
 
I am not really hip to all these AI methods. But have you seen the latest? AI-enhanced UFO imaging: now we can see what those alien spaceships really look like.

They have AI enhanced the UFO videos now.

It leaves no doubt of their existence. For all to see. Aliens are real.

And action must be taken.

In two weeks, ha ha.
 
We understand AI is capable of brilliance and silliness. The problem is sorting one from the other. No one seems to be able to do that.
It is still very new. Some of us are learning those lessons. There is also a great deal of deep confusion (people dismissing it out of hand, people who make no attempt to sort good from bad...)
 
I am not really hip to all these AI methods. But have you seen the latest? AI-enhanced UFO imaging: now we can see what those alien spaceships really look like.

They have AI enhanced the UFO videos now.

It leaves no doubt of their existence. For all to see. Aliens are real.

And action must be taken.

In two weeks, ha ha.
Eh? No not seen. Have you a link?
 
I didn't see it on the internet. It was on NBC News, I believe. They had a couple of pro-UFO folks who were showing this and wanted some new congressional hearings.

AI makes your dreams come true. With great amounts of CO2. The truth was right here all along.
 

COLGeek

Cybernaut
Moderator
We are starting to see "duelling LLMs" in discussions of various "theories". Hard to see the value in this when the underlying issues with LLMs, in general, seem to be ignored by so many.

Grains of salt need to be taken here, along with a sense of humor and decorum.
 
LLMs are valid provided the input is true. That cannot be the case with any LLM trained on the internet. There is no need to evaluate the output; it is false by definition. It cannot possibly be true, except by chance, for any LLM trained on the internet. Any output from any "thing" can be used to stimulate ideas, but those ideas must be validated. Given that internet-trained LLMs are trained on false data, ignoring them is an excellent use of time.
 

COLGeek

Cybernaut
Moderator
LLMs are valid provided the input is true. That cannot be the case with any LLM trained on the internet. There is no need to evaluate the output; it is false by definition. It cannot possibly be true, except by chance, for any LLM trained on the internet. Any output from any "thing" can be used to stimulate ideas, but those ideas must be validated. Given that internet-trained LLMs are trained on false data, ignoring them is an excellent use of time.
True. Something can certainly be true, yet misapplied. More than a little of that is in play.
 
True. Something can certainly be true, yet misapplied. More than a little of that is in play.
I never dismiss anything out of hand without recognizing the risk.
I once came across a guy alongside the road in NC, 500 miles from my home in GA; he was unwashed and unkempt. I made friends with him and found he knew of me. He had heard an old friend of mine singing my praises 21 years earlier in PA. I learned from this never to dismiss anyone out of hand without recognizing you might be making a mistake.
 
