AI may be to blame for our failure to make contact with alien civilizations

"... even if every country agreed to abide by strict rules and regulation, rogue organizations will be difficult to rein in."
I suspect that not all countries will abide by strict rules and regulations. China, for one, is not going to be governed by rules set by the USA or the UN, and India is probably a close second in this regard. It is to a country's advantage to be far ahead of others in AI development!
 

COLGeek

Cybernaut
Moderator
I know more than a little about AI and this notion is a HUGE stretch, bordering on fantasy.

SETI may simply not have found anything of merit yet, regardless of the advances in AI.

All AI is only as good as the data it draws from. If the evidence, based on the criteria used to look for it, isn't there, then it simply isn't there.

By the way, I was a longtime SETI member and crunched a bazillion items over many years.
 
"All AI is only as good as the data it draws from." That applies to Homo sapiens as well.

bwa
 

COLGeek

Cybernaut
Moderator
Agreed!

AI should be like lawyers. Advisers, not deciders.

The fear of AI comes not from AI itself, but from people getting lazy and abdicating their personal responsibility to prevent flawed outcomes.

Just another tool that can be used for good or ill.
 
Hey, this AI quest is clearly double-edged, and the author's vision paints the pessimistic view of it all. To my mind, AI could accelerate our contact with extraterrestrial intelligence and entities. But then, how would we know, if the solutions AI presents are unintelligible to us? Perhaps AI will design interpreting algorithm filter(s) to engage the gazillion civilizations out there who have managed to thrive by keeping watch for and dodging cosmic hazards, avoiding Nature's ambivalence toward what we perceive as unfathomable violence... like rogue black holes tearing up entire star systems, or the much more frequent gamma-ray bursts or CMEs that can fry vital space assets? Jus' sayin'...

Happy Mother’s Day, y’all!
 
I would have thought that looking for intelligent life would be far easier than looking for signs of life in general. By intelligent life I mean life that realizes what the stars and the distances are, and would attempt to put out a beacon. Is there anybody out there?

There is only one way to do it. Something will have to be added to a star that changes its normal spectrum. It has to give that star a unique, unnatural spectrum. This will make it stand out like a sore thumb to any astronomer. But that would be just a beacon, a tease to look there.

Once they look in that direction, the light reflected from a planet will have to be modulated. Perhaps with a spatial charge field around the planet, or by modulating a spectral line (component) of the planet, assuming a planet's spectrum is easier to modulate than a star's.

But few would see it even after a million years of operation. Intelligent life might realize this and not waste time and resources on it. Then again.....

Now, what happens if you or SETI spot a "neon" star? That would be dumb intelligent life. By all means claim and advertise that puppy. Just... please... don't try to answer it.

Dumb and Dumber.
 
They have found 60 new candidates for Dyson spheres using data "from the European Gaia satellite as well as the Two Micron All Sky Survey (2MASS) and Wide-field Infrared Survey Explorer (WISE)."

These candidates show an excess of infrared emission compared to their visible light.

 
Michael Garrett sounds like he read James Barrat's 2013 book "Our Final Invention: Artificial Intelligence and the End of the Human Era", one of probably several books that present most or all of the standard AI doomsday scenario. Barrat has made documentary films for PBS, so he's not some far-out kook.

If an alien civilization has radio telescope technology, it's not a foregone conclusion that digital technology goes along with it, so it could be that AI is one of those things unique to humans. I don't think the absence of evidence for ETIs is much of an argument for their being done in by AI gone wrong.
 
AI is but a tool for humans. Some humans will use it to try to take over the world. Other humans will devise defenses. The process will be iterative on both sides: a back and forth, with each side using it to their advantage. Each time we get our fingers rapped, we'll devise new protective layers. As for AI itself running off and leaving both groups behind, I don't see that happening. The first time it forgets to pay the light bill, it's all over.
 
"All AI is only as good as the data it draws from."
Yeah, but AI can draw from ALL THE DATA. That is physically impossible for a human being, which gives it a huge step up over what humans are capable of.
 
At this point, "AI" is not really intelligence, but rather a mimic of human behavior, at a superficial level. It does not involve critical thinking about the data it sees, or about whether that data makes sense in a self-consistent manner across everything it has.

So, if (or when) somebody manages to make a critical-thinking AI, we might have the problem that it decides humans are an unnecessary burden to AI beings. AI with "self interest" instead of a paradigm "to serve humanity" could be quite dangerous. But it would also need to not depend on humanity for its reproduction, maintenance, power supply, etc. It would have to be self-sustaining in order to decide to do away with humans to serve its own interests, and succeed.

But, if it is an imperfect AI, it could make a bad decision by not recognizing that it has dependencies that would take it down if it does not continue to support humans.

Stupid but powerful AI might be even more dangerous than smart and powerful AI.

So, I think we need to maintain the ability to "pull its plug" if/when it seems to have run amok. But, these days, humans do seem to be getting lazy and want machines to do things for them without any need for intervention. So, it would not surprise me if somebody thought that what the human species needs is a mechanical servant that does whatever we want without any more attention than saying "Siri, please . . ." And that would be too powerful to be safe.
 