Note the response regarding banning LLMs. They do serve a useful purpose. Just not the one many assume in terms of accuracy.
> You raise an important point: AI output does range from insightful to flawed, and distinguishing between the two is critical. But that's precisely why engagement matters. If we're concerned about quality, we need to evaluate content directly rather than disqualify it based solely on its origin.
>
> Blanket dismissal makes it impossible to ever identify the 'brilliance' you acknowledge AI might produce. The act of scrutiny, of assessing reasoning, evidence, and internal consistency, is what separates skepticism from evasion. I'm fully open to critique of any specific claims I've presented, and I welcome it. But rejecting them without examination doesn't protect standards; it prevents discourse.
>
> Much of the conversation here appears focused on discrediting LLM-generated content rather than evaluating the ideas themselves. The thread title, "Can LLM's theories be banned?", reflects that tendency.

I just want to tell you that you already talk like an LLM. My beloved phrase is "A rather than B." You never need to state anything firmly anymore; you can always give your preference rather than a firm statement by using the "A rather than B" expression. I used it ironically.
> What is generally recognizable is LLM-written content, especially on forums. They tend to stand out for their style and syntax.

Your response is a valuable addition to the rich back and forth of stimulating ideas in an open forum.
> We understand AI is capable of brilliance and silliness. The problem is sorting one from the other. No one seems to be able to do that.

It is still very new. Some of us are learning those lessons. There is also a great deal of deep confusion (people dismissing it out of hand, people who make no attempt to sort good from bad...).
> Much of the conversation here appears focused on discrediting LLM-generated content rather than evaluating the ideas themselves.

Indeed. And I am currently seeing that very regularly from people who are intelligent enough to know better.
> I am not really hip on all these AI methods. But have you seen the latest? AI UFO enhanced imaging: now we can see what those alien space ships really look like.

Eh? No, not seen. Have you a link?
They have AI enhanced the UFO videos now.
It leaves no doubt of their existence. For all to see. Aliens are real.
And action must be taken.
In two weeks, ha ha.
> LLMs are valid provided the input is true. That cannot be the case with any LLM trained on the internet. There is no need to evaluate the output; it is false by definition. It cannot possibly be true, except by chance, for any LLM trained on the internet. Any output from any "thing" can be used to stimulate ideas, but those ideas must be validated. Given that internet-trained LLMs are trained on false data, it is an excellent use of time to ignore them.

True. Something can certainly be true, yet misapplied. More than a little of that in play.
> True. Something can certainly be true, yet misapplied. More than a little of that in play.

I never dismiss anything out of hand without recognizing the risk.
> AI is not a "spiritual bliss attractor"; it is mimicking what it was trained on. Without knowing what the two models were trained on, the output is meaningless. AI output is valid only with valid input. Otherwise it is meaningless gibberish.

She was saying it with irony.
Can you show me where I should have picked up on this? Based on what I have read so far about AI, it makes sense. Such things as it is doing are common in humans; it would have scraped them off the internet and then taken as gospel truth their assertions that "this is the only way." I've interacted with such people. Their work is on the net. AI scrapes the net and takes everything it sees as gospel truth. The purveyors of crystals, etc. will absolutely insist they are right. How should AI know otherwise?
> So, where is the irony?

In Sabine's voice, and in her last sentence about leaving the world to AI.
> Sorry, not following you. Standing on my interpretation, but you're free to think whatever you want.

You noted that it's not a "spiritual bliss attractor," and I replied that she's saying it with irony, which means it's silly for her as well. For me too.
> I interpreted your remark as saying she was making it up as a joke. It really happened, but we all laugh at it.

Having played with AI a lot, I can say they are programmed to please if possible. Two operating sessions, both trying to please.