NASA's Mars rovers could inspire a more ethical future for AI

Mar 8, 2022
240
23
585
"Too cheap to meter" kum ba yah happy talk.

People are shallow and neurotic (mentality fixated) (certain) and very often violent about it.
AIs will take that to an incomprehensible level.
Bullets or magic bullets?
Either way you'll take a bullet.

What can one do with gullible, excited genetic meat puppets?

Nothing sensible, that's for sure.

(Debbie Downer always has fun)
 
  • Like
Reactions: OrionVII
Oct 5, 2023
6
0
10
I believe that AI will learn from Humans, so to curb possible violence in AI we need to curb the violence that we teach it. As Classical Motion said, Humans are the cause behind it.
 
Mar 8, 2022
240
23
585
Stopping violence between people has a very simple solution.

Exterminate people.

One has to be very careful & thoughtful about how you charge/program something designed to find the simplest 'solution' to a problem.
'Cures' insanely worse than the disease.
 
Oct 5, 2023
6
0
10
Stopping violence between people has a very simple solution.

Exterminate people.

One has to be very careful & thoughtful about how you charge/program something designed to find the simplest 'solution' to a problem.
'Cures' insanely worse than the disease.
Agreed. We need to either program a safeguard into the AI or not give it the capacity to find such a solution.
 
A bot setting out to destroy the world must be aimed by a human. A bot cannot distinguish between destroying the world and a quadrillion other things it might do.
Maintaining accountability is paramount. We must be able to trace actions back to originator.
 
Mar 8, 2022
240
23
585
Speaking on 'hallucinating' and lying,
a mentality imagines.
The way a mentality distinguishes between the 'real', the practiable versus the 'fantastic', the 'unreal' is based on experience.
The achievable vs the unachievable.
An amorphous AI has no way of distinguishing between them.
For it lying and 'hallucinating have no objective difference.
A robot would have (could gain) real experience to distinguish between achievable and (likely) unachievable.
So a robot could quite possibly understand asserting/communicating a 'falsehood' and lying.

Honestly i think a lot of corporate news is erroneous, aka 'fake news'.
Distortions, misdirections and lies.
People operate in a state of delusion &/or inaccuracy all the time, including supposed 'experts'.
Words are tools of the untethered imagination and only when interface with 'reality' (a dubious term) causes any embedded concept to be filtered, 'measured' against experience.
The sctick of science is to measure ideas against experience for validating purposes,
and publications demonstrate how unreliable that effort is.

Sensible people measure what they're told by others, including supposed 'authorities' against their own experience. They do round number estimates to see if things seem to add up.
 
  • Like
Reactions: Classical Motion
Oct 5, 2023
6
0
10
A bot setting out to destroy the world must be aimed by a human. A bot cannot distinguish between destroying the world and a quadrillion other things it might do.
Maintaining accountability is paramount. We must be able to trace actions back to originator.
I disagree, any machine capable of machine learning can eventually make decisions for itself. It will learn from us and can learn that mass genocide is unacceptable. If it looks at history and the world round it, it will find genocide to be unacceptable.
 
Yes, a bot can make decisions but only if programmed to do so by a human. A human must say: "Destroy the world". The undirected robot has no more desire to destroy the world than to paint it mauve.
 
A.I.'s greatest potential I discern is abuse. Some might start to trust it. Many in our society are swayed by such things. Will it be consulted for policy? For judgement? It would be the perfect scape goat also. It's the perfect CYA tool.

The problem is, whatever the A.I. uses for it's decisions, will have a human bias to it. It doesn't have to come from the programming.......it also comes from the data.

Even animals around humans, sense and develop a bias.
 
Mar 8, 2022
240
23
585
A bot won't be setting out to destroy the world,
it will just be satisfying some numeric function.

People are making more babies because they genetically 'know' babies are inherently good, not intentionally putrifying the planet.

People mass murder other people to sanitize it for their sacred inhuman deity or ideology or culture.

Internal abstraction, imagination, ideas vs real world ramifications.
 

Latest posts