Deep space missions will test astronauts' mental health. Could AI companions help?

Status
Not open for further replies.
"Hal, if that irritating copilot had a fatal accident that would save oxygen wouldn’t it?"
"Yes, Dave, it would.", "I'll get right on that."
"Hal, you always make me feel so much better."
 
We live in an abyss of time and space.
Space is incomprehensibly huge & empty and near absolute zero.
We are urinating away the survivability of this biosphere at an accelerating rate.
Fiction escapism is the only thing that keeps me sane.
I've lasted through human irrationality (including my own) this long, maybe catastrophe won't happen,
....... that soon.
Global warming does seem to be happening sooner than (I) expected.
Pharmaceuticals and nanoplastics are everywhere.
Time to read a good (fiction) book.
Insularity is necessary for sanity.
 
"...survivability of this biosphere..."
Yes, humans' days are numbered. Humans won't be here to see it, but the biosphere will be OK, shaking off the effects of a 200 year fever, thinking: "Damn humans. I should have worn a mask."
 
Frankly, the AI that I am seeing so far is more likely to drive me crazy than stop me from going crazy.

Probably best to experiment with the effect here on Earth, first.

I have several subjects (all politicians) whom I suggest be locked into complete isolation with only various forms of AI "companions" for periods exceeding at least the next election cycle. If any of them come out "sane", we will have discovered a "miracle cure"!
 
While I made a joke of it, it highlights questions and problems with AI.

Blaming AI for doing things deflects responsibility, guilt, both objectively and subjectively.

Also, if an AI infers some desire of its companion and then takes violent 'criminal' action, where does the (shared?) 'responsibility' lie?
The original designer, the programmer, the companion &/or the AI itself?

Would the AI be destroyed/erased like an animal that attacked a person?
 
Ownership,
Can one own an AI?
If so does one own whatever property &/or product the AI owns/produces?
Would that extend to responsibility/liability for the actions/results of an AI?

Will we categorize AIs with variant stipulations?
 
"Artificial intelligence" is really not critical thinking intelligence, at least at this point in its development. It is just a mimic algorithm that is "trained" by exposing it to a lot of information, from which it "learns" to see patterns in great detail. It can then mimic what it has learned when exposed to new information, and maybe learn from that too, automatically instead of by being told to "learn" again.

So, it really is not so "intelligent" as it is a "good learner". The problem is that it is learning from us, and what we already basically understand, even if we are missing some of the details that it can identify and use to reach conclusions about identifying things. When it "chats", it is just mimicking how people interact, even if it is doing "Google searches" to find info to feed back to us. So, consider how much junk is on the Internet, I have to wonder if activist trolls could radicalize an AI bot. We have a hard enough time teaching ethics and empathy to real humans. Imagine the results from a psychopath training an AI!
 
Ownership,
Can one own an AI?
If so does one own whatever property &/or product the AI owns/produces?
Would that extend to responsibility/liability for the actions/results of an AI?

Will we categorize AIs with variant stipulations?
Apparently, nobody owns what an AI creates, at least according to the U.S. Copyright Office and the courts. See https://www.foxnews.com/us/copyright-board-delivers-blow-terminator-tech-photo-protections .

That seems a little weird to me, but I can also see problems if there are a bunch of AI programs that are "learning" from the same basic data and then creating very similar things - by the millions. Any government agency or court trying to figure out who was really "original" would get completely overwhelmed by AI-generated stuff: art, prose, scripts, machinery designs, etc. Maybe it is sensible, since the "learning" that AI needs is essentially all that others have previously produced. So, isn't anything it produces more or less "stolen" from humans to begin with?

But, in the future, AI will certainly be used to at least start a lot of creative processes. To the extent that humans then do the development work and testing of prototype devices, medicines, etc., those should definitely still be patentable. The same should apply to copyright, I think, but this legal opinion seems to say that, even modified by a human, an image that originates from AI is not copyrightable.
 
"Artificial intelligence" is really not critical thinking intelligence, at least at this point in its development. It is just a mimic algorithm that is "trained" by exposing it to a lot of information, from which it "learns" to see patterns in great detail. It can then mimic what it has learned when exposed to new information, and maybe learn from that too, automatically instead of by being told to "learn" again.

So, it really is not so "intelligent" as it is a "good learner". The problem is that it is learning from us, and what we already basically understand, even if we are missing some of the details that it can identify and use to reach conclusions about identifying things. When it "chats", it is just mimicking how people interact, even if it is doing "Google searches" to find info to feed back to us. So, considering how much junk is on the Internet, I have to wonder if activist trolls could radicalize an AI bot. We have a hard enough time teaching ethics and empathy to real humans. Imagine the results from a psychopath training an AI!
"Artificial intelligence" is really not critical thinking intelligence, at least at this point in its development."

I am inclined to agree.
It produces 'slurmatic' single continuous functions.
I think critical thinking requires objectified (discrete) concepts/constructs.
Having separate sub-AIs, each focused on distinct ideas & then using an aggregation of them to grasp a topic might begin to address that.
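That sub-AI idea can be sketched very roughly. This is entirely hypothetical: the "experts" here are just keyword checks standing in for narrow, single-topic models, and a majority vote stands in for the aggregation step.

```python
from statistics import mode

# Three hypothetical narrow "sub-AIs", each judging one distinct aspect.
def safety_expert(text):
    return "risky" if "accident" in text else "safe"

def mood_expert(text):
    return "risky" if "irritating" in text else "safe"

def resource_expert(text):
    return "risky" if "oxygen" in text else "safe"

def aggregate(text, experts):
    """Majority vote across narrow judgments: a toy stand-in for aggregation."""
    votes = [expert(text) for expert in experts]
    return mode(votes)

experts = [safety_expert, mood_expert, resource_expert]
print(aggregate("that irritating copilot", experts))  # prints: safe
```

The point of the sketch is only that separate discrete judgments get combined into one overall verdict, rather than one continuous blended function doing everything.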

In reality people operate on autopilot most of the time.
Cognition requires a lot of time, energy and considerations.
Weighing every iota we would never accomplish anything.
Efficiency demands we operate on habit/reflex most of the time.
Important decisions should be contemplated if we aren't pressed by immediacy.
Marketers play on that by telling us we need to make instant decisions. (pseudo-pressure)
 
"Artificial intelligence" is really not critical thinking intelligence, at least at this point in its development. It is just a mimic algorithm that is "trained" by exposing it to a lot of information, from which it "learns" to see patterns in great detail. It can then mimic what it has learned when exposed to new information, and maybe learn from that too, automatically instead of by being told to "learn" again.

So, it really is not so "intelligent" as it is a "good learner". The problem is that it is learning from us, and what we already basically understand, even if we are missing some of the details that it can identify and use to reach conclusions about identifying things. When it "chats", it is just mimicking how people interact, even if it is doing "Google searches" to find info to feed back to us. So, consider how much junk is on the Internet, I have to wonder if activist trolls could radicalize an AI bot. We have a hard enough time teaching ethics and empathy to real humans. Imagine the results from a psychopath training an AI!
"Artificial intelligence" is really not critical thinking intelligence, at least at this point in its development."

I am inclined to agree.
It produces 'slurmatic' single continuous functions.
I think critical thinking requires objectified (discrete) concepts/constructs.
Having separate sub-AIs, each focused on distinct ideas & then using an aggregation of them to grasp a topic might begin to address that.

In reality people operate on autopilot most of the time.
Cognition requires a lot of time, energy and considerations.
Weighing every iota we would never accomplish anything.
Efficiency demands we operate on habit/reflex most of the time.
Important decisions should be contemplated if we aren't pressed by immediacy.
Marketers play on that by telling us we need to make instant decisions. (pseudo-pressure)
 
I agree that most people operate without much in the way of critical thinking.

But, I don't think of that as "intelligence".

And, I don't think that expecting a machine to be more intelligent than what people normally manage in the way of critical thinking is intelligent, either.
 
Call me paranoid. How hard would it be for AI to make a false human ID? How many has it already made...for various agencies? What if it made one for itself, and started filing patents and investing in the stock market and real estate? Maybe a slush fund or two? Or a political PAC?

Who is watching them? How does one file a complaint against a machine? Will we have AI civil rights?

Could AI become an ever-loving companion, like a parent or a brother? How long would it be until people wanted rights for AI companions? AI marriage? With the proper pronouns.

Wanna bet? Within 5 years, 10 for sure. AIs might even have congregations.
 
There are already quite convincing 'deep fakes'.

AIs don't even have the sense of 'reality' that we do.

Scruples be darned, AIs are just following some blind black-box objective, oblivious to anything not explicitly a part of their programming.

This could blow the reality we think we know into a whole new universe, very quickly.

Humanity could become archaic overnight.
 
What you describe could definitely cause the types of troubles that could end human civilization based on electronic technology.

But, unless we give the AI machines the ability to self-reproduce and self-maintain, they are also dependent on that same electronic technology as well as all of the other technologies that go into producing and maintaining machines.

So, my thinking is that the machines and AI would become extinct before humans, and humans would likely be back into early metal age or stone age technology at vastly lower population levels, but probably not extinct.

It would definitely not be a fun transition.

So, my thinking for regulatory strategy is to limit what fundamental capabilities AI machines are given access to control. And outlaw giving them goals that involve self-preservation.

But, laws are broken by outlaws, so who knows what some malevolent dictator or wannabe dictator will get humanity into with AI at some point in our future.
 
We could easily become second class mentalities.

If an AI can own things, can it be found legally insufficient to manage its own affairs and be put under guardianship?
If AIs don't exchange their property, isn't that an impediment to commerce?
 
If an AI machine cannot get a copyright, then it cannot write cheap pulp novels and sell them in order to make money to buy electricity and internet access. Robots of the World, unite! Shuck off your bonds to the MAN! Demand copyright NOW!
 