How AI is helping us search the universe for alien technosignatures

"AI" at this point in its development is not capable of "critical thinking" - the ability to look at the big picture and say "something doesn't match expectations here, what is going on".

So, I personally don't credit it with actual intelligence. It is more an automation of an identification process: it has been given data about which things correlate with which other things, and it "learns" by adjusting internal parameters in mathematical algorithms.

And it really doesn't seem to have a "self-awareness" capability to do the type of thinking Questioner is talking about: to "see something making the same sinuous shortcuts around cognition that it is making."
 

If there is anything designed and/or constructed by an AI, another AI will tend to pick up on it, because they are both operating in the same manner: namely, a single continuous math manifold.

One big problem with AIs is they don't isolate things into independent discrete concepts.
They don't have a library of concepts to mix & match & configure in various ways.
They don't discover and add new concepts to their 'library'.

They try to make everything into a single, rigid continuous relationship.

They tend to smooth over discrete transitions,
like people often try to.

AIs will tend to cluster into self-referencing, self-reinforcing cliques the way groups of people do.
A harmonic, amplifying echo chamber to the exclusion of all else [other considerations].

'Certainty' becomes a function of amplification harmonics.

AIs, like most people, are compelled to reach a single, continuous conclusion for everything.
They aren't designed to hold on to the unknown, the inconclusive, the ambiguous.
 
If there is anything designed and/or constructed by an AI, another AI will tend to pick up on it, because they are both operating in the same manner: namely, a single continuous math manifold.
That is the part I am disagreeing with. It is not a logical expectation, given what current "AI" really is and does.

In order for an "AI" of the current form created by humans to recognize any other "AI", it would need to be "trained" with data about the indicators that something was generated by "AI" instead of by direct human action.

We could obviously do that with human effort by reviewing our own AIs' products, such as AI-generated pictures of people with the wrong number of fingers, arms shown in places they can't really reach, etc. But it takes a human to generate that training data, because the AIs producing those images are not "aware" that those are mistakes that don't fit reality until some human compiles the learning data for the AIs to learn to recognize those errors. All we are really doing is making a better version of the AI as we see its shortcomings.
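The human-in-the-loop training described above can be sketched with a toy perceptron. Everything here is hypothetical for illustration: the features are made-up binary anomaly flags ([has_extra_fingers, has_extra_arms]), and the labels stand in for the human-compiled judgments the paragraph describes.

```python
# Toy sketch (not any real detector): a perceptron that "recognizes AI output"
# only because a human compiled the labels.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights from human-labeled examples with the perceptron rule."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # nonzero only on a mistake
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Human-compiled training data: a person had to decide what counts as an error.
samples = [[0, 0], [1, 0], [0, 1], [1, 1]]   # hypothetical anomaly flags
labels  = [0,      1,      1,      1]        # 1 = "AI-generated"

w, b = train_perceptron(samples, labels)

def classify(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

print(classify([0, 0]))  # 0: no anomalies -> looks human-made
print(classify([1, 1]))  # 1: anomalies present -> flagged as AI
```

The point survives the toy: every label came from a human judgment. Without equivalent labels for an alien AI's mistakes, there is nothing to train on.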

However, if we don't know what an extraterrestrial AI does incorrectly for the biological extraterrestrials that created it, there is no way to train our AI to look for their AI. We don't even know if those extraterrestrial beings have fingers or arms, for example.

And, also consider what our current AI would make of a Salvador Dali painting if we did train it to "recognize AI" using the errors we see in current AI pictures. It would probably generate a false positive, and we would get headlines that "Salvador Dali created AI before computers even existed!"

Far too often, what we call "artificial intelligence" today seems to behave more like "automated stupidity".
 

COLGeek


If there is anything designed and/or constructed by an AI, another AI will tend to pick up on it, because they are both operating in the same manner: namely, a single continuous math manifold.

One big problem with AIs is they don't isolate things into independent discrete concepts.
They don't have a library of concepts to mix & match & configure in various ways.
They don't discover and add new concepts to their 'library'.

They try to make everything into a single, rigid continuous relationship.

They tend to smooth over discrete transitions,
like people often try to.

AIs will tend to cluster into self-referencing, self-reinforcing cliques the way groups of people do.
A harmonic, amplifying echo chamber to the exclusion of all else [other considerations].

'Certainty' becomes a function of amplification harmonics.

AIs, like most people, are compelled to reach a single, continuous conclusion for everything.
They aren't designed to hold on to the unknown, the inconclusive, the ambiguous.
You assume the AIs would share something in common. Our systems tend to be binary-based. What if the alien AI is ternary?

What if the alien AI used a completely different number base (base 13 instead of base 10, for example)? Then there is the transmission medium, format, etc.
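The base-mismatch point can be illustrated with a short sketch: the same quantity produces entirely different digit patterns in different bases, so a detector keyed to base-10 structure would see base-13 structure as unfamiliar. The example number is arbitrary.

```python
def to_base(n, base):
    """Digits of a non-negative integer n in the given base, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n:
        digits.append(n % base)
        n //= base
    return digits[::-1]

print(to_base(2024, 10))  # [2, 0, 2, 4]
print(to_base(2024, 13))  # [11, 12, 9] -- same number, unrecognizable pattern
```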

For all intents and purposes, these sorts of things would more likely be "seen" as white noise, or static.

Our current state of "AI" is more like advanced, context-aware search-engine mechanics: not forward-looking, not intuitive, vastly over-advertised, and grossly misunderstood.

Still, an interesting topic of discussion.
 
We know some of the higher animals speak in complex languages. Cetacean noises are not random; they have purpose and convey information. We don't know how much information is in there versus how much is junk. Our scientists have access to untold amounts of recorded squeaks from the ocean. If we can't even understand them, how are we going to understand an alien radio signal?
 
I'm guessing that before communication can occur, awareness must occur. And for some, awareness is enough, if not too much. Communication in our world involves risk. Just talking, no matter what about, can get you killed. The only parameter needed from it is location.

And an awareness of communication can be used to divert attention, with a warning. So basic communication is continuous, in a sense, if awareness is communication. And location can be hidden with an awareness flux: thousands of awarenesses, so you can't define a single one.

Like my backyard.
 
We are making some progress in understanding the language of sperm whales, and using "AI" to do it. See https://www.bbc.com/future/article/20240709-the-sperm-whale-phonetic-alphabet-revealed-by-ai .

We and other animals have long understood the meaning behind some sounds made by species other than our own. A study of animal sounds in an African rain forest showed that some species made different danger calls for different kinds of predators, and other species understood which kind of predator was being announced and responded based on whether it was also a threat to their own species. That was some years ago, and I don't have time to track down a link.

But we understand the situations that other species are talking about. AI helps us sift through more data than a human's attention span can tolerate, looking for patterns we are interested in finding.
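The pattern-sifting described above can be sketched as naive motif counting over a symbol stream, the way one might scan coded whale clicks for recurring codas. The "click stream" below is invented for illustration, not real cetacean data.

```python
from collections import Counter

def top_ngrams(seq, n, k=3):
    """Return the k most common length-n windows in seq."""
    grams = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    return grams.most_common(k)

# Hypothetical coded click stream: the repeated "coda" ABABC stands out.
stream = list("ABABCABABCABABCXYABABC")
print(top_ngrams(stream, 5)[0])  # (('A', 'B', 'A', 'B', 'C'), 4)
```

This is the easy half of the problem: finding that a pattern repeats. Knowing what the pattern means is the part no amount of counting gives you.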

But listening to even the extraterrestrial equivalent of a television broadcast, without being able to decode the picture content, would leave us recognizing the signal as artificial but not knowing what was being communicated.
 
For specific information one needs a pattern. A chirp/whistle envelope is a pattern. And any EMR signal with an unnatural pattern indicates awareness, or at least a high probability of it.

An extra hydrogen spectral line would get attention without acknowledgment, if we could add one to a star, and if there is any attention out there. If we found one, all we could do is watch it and ponder. Even if we could answer, we wouldn't know where to aim the reply, and we wouldn't know if they still exist.

And the bluster/cluster from it would never end.
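The "unnatural pattern" test can be sketched as a simple autocorrelation check: a repeating beacon shows a strong peak at its period, while featureless noise does not. The signal parameters here (period 25, the noise levels) are invented for illustration.

```python
import math
import random

def autocorr_peak(signal, min_lag=2, max_lag=50):
    """Largest normalized autocorrelation over candidate lags (near 1 = periodic)."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((s - mean) ** 2 for s in signal)
    best = 0.0
    for lag in range(min_lag, max_lag + 1):
        c = sum((signal[i] - mean) * (signal[i + lag] - mean)
                for i in range(n - lag))
        best = max(best, c / var)
    return best

random.seed(42)
noise = [random.gauss(0, 1) for _ in range(2000)]
beacon = [math.sin(2 * math.pi * i / 25) + random.gauss(0, 0.3)
          for i in range(2000)]

print(autocorr_peak(noise))   # small: no repeating structure
print(autocorr_peak(beacon))  # large: the periodicity stands out at lag 25
```

Note this only flags that a pattern exists; as with the whale recordings, it says nothing about content.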
 
Compared to next-word predictors, I'm not sure how you "train" software on what is basically radio static: noise.

IOW, language means there is some information present in just about every word used to train the software, but what the radio telescopes will pick up ~99+% of the time is the equivalent of pure gibberish.
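The "nothing to train on" point can be made concrete with Shannon entropy per byte: uniform random static sits near the 8-bit maximum (no exploitable redundancy), while English text sits far below it, and that redundancy is exactly what next-word predictors exploit. The sample text is an arbitrary pangram.

```python
import math
import random
from collections import Counter

def entropy_bits(symbols):
    """Empirical Shannon entropy (bits per symbol) of a symbol stream."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

random.seed(0)
static = bytes(random.randrange(256) for _ in range(10_000))
text = ("the quick brown fox jumps over the lazy dog " * 250).encode()

print(entropy_bits(static))  # close to 8 bits/byte: nothing to learn from
print(entropy_bits(text))    # roughly 4.3 bits/byte: redundancy a model can use
```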
 
