If we want to settle on other planets, we’ll have to use genome editing to alter human DNA

Nov 25, 2019
The only reason to think that humans are even needed in space is that you are comparing humans to the robots we have at present. Wait 100 years and the robots will be MUCH better. So if we assume very advanced robots and AI, the only reason for sending people to space is tourism.

If there are very good robots available, living in space could never be economically justified. Even if there were a way to make enough money in space to pay for the cost of living there, robots could do it cheaper. That said, tourism might be huge if it could be made affordable.

I suspect the Moon would be like Antarctica: a place where a few scientists go for a year or so and a greater number of tourists go for a few days.

Again, you have to stop thinking of robots as being like the current Mars rover. Think instead of a powerful machine that can do some thinking on its own and work under human supervision.

Yes, it is obvious that humans would have to change if they were to live in space. The lack of gravity would eventually kill most people, and child development might not even work in low gravity. So it is a "race": which science fiction will come true first, bio-re-engineered humans with modified DNA or intelligent robots? Neither will happen in our lifetime.
 
I think you are mostly right about sending robots to places other than the Moon and Mars.

But, humans still seem to have the advantage of being more mentally complex than robots, so that when something that is not planned for is observed, and critical thinking is applied, progress is faster than having to design a new robot and send it to where the old robot was insufficient. And, that assumes that the robot recognized the unexpected to begin with.

I believe that thought will drive humans to go explore places that we have been thinking about exploring for decades. Which is mostly the Moon and Mars. I don't see people dreaming of going to Venus or Mercury, or even the moons of the gas giants. Obviously, we aren't going to land on Jupiter, Saturn, Uranus or Neptune, and why bother with Pluto?
 
But, humans still seem to have the advantage of being more mentally complex than robots,
This is true in 2024. But will it still be true in 2124? Likely not.

And why do we care about mental complexity? Is it a race? Even a dim-witted robot can say, "I see a green rock that is unexpected." Then the scientist on Earth asks the robot to try some experiments and send back the results.

But it will likely not be like that. The robot will know more than any one human possibly can, and will be able to sort through facts faster. It may lack imagination, but it only needs to be a smart technician, a good observer, and a good communicator.

The discussion about living in space is not for this century. We need to imagine 22nd-century technology.
 
I am not so sure that building robots that are smarter than humans is a smart idea. That could have more "unintended consequences" than editing our own genome.

I'm not sure I would even know who to root for when the AI robots get into a war with the mutant humans.
 
Back when I was a kid your age, we all lived in caves and ate raw dinosaur meat. Then one day, Thagg showed us this new thing he called "fire". We all saw it as a novelty and wondered what to use it for, but we quickly found that we liked roasted dinosaur better than raw, we liked being warm, and we liked having light after dark.

The tribal elders saw this "fire" as a danger and said it would be the end of us all: fire can burn down the whole forest and destroy our food supply. They said, "We must regulate fire and make rules about when it can be used and by whom." But it was too late. By that time even women could make their own fires, and fire technology spread to other tribes.

We have the same situation with AI. Some say "it must be regulated", but the reality is that even undergraduate computer science majors can read journal articles and build their own AI, and thousands of them can and do. The technology is now in the wild.

The technology will evolve as it will. I suspect AI will be very smart but also not much like we are. But we will evolve too. Not by the natural Darwinian process, but by self-modification. Today we are "human-plus-cellphone", and we are different from the people who lived 50 years ago because we can answer ANY random factual question in seconds (What is the mass of Saturn's moon Titan, and what is the pressure at its surface?) and we can talk with each other over global distances. Whatever technology is used to make robots can be used to make the next version of humans. They will become "human-plus-robotech" and be able to do things we can't, just as we can do things our grandparents couldn't.

The robots will be in a race with humans, but humans are a moving target: we change.

The elders were not so good at stopping the use of fire, and the same story has repeated itself hundreds of times, whenever a technology spills out into the wild.

300,000 years ago there were multiple human species on Earth. I'd argue the Earth was more interesting then, a little like Tolkien's Middle Earth. With gene editing and AI, perhaps the solar system will host multiple intelligent species once again. Some will be 100% biological, some robots, and some a mix of the two. Some will live in the real universe and some will prefer their simulated virtual worlds. In 1,000 years it will be a place even Tolkien could not imagine.
 
Your rather fanciful description of history is missing some pieces, particularly the crashes of some societies.

And, it misses the parameters affecting the vulnerabilities of societies. Crashes tend to occur when the population is so dependent on something that it cannot continue to exist without it.

In the "old days" of maybe 1000 years ago, rainfall, or lack thereof, was what did in agrarian societies.

Today's vulnerability is electric power. Without that, what do you think will happen to the people who don't know how to coordinate or actually produce anything without their cell phones?

So, the question is really whether we are headed for better times with new technologies, or headed for a crash.

Looking back at the histories of the populations of other animals, exponential population increases such as humans are currently experiencing are typically followed by major population crashes.

So, the real question is whether humans are actually so much smarter than the other animals of Planet Earth that we can avoid crashing our own population.

I am not seeing convincing evidence that we really are that smart.
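The overshoot-and-crash pattern described above can be sketched with a toy model. This is a discrete logistic map, not a demographic forecast; the growth rates and carrying capacity are illustrative numbers only:

```python
# Toy illustration, not a demographic model: a discrete logistic map.
#   N[t+1] = N[t] + r * N[t] * (1 - N[t] / K)
# When per-step growth r is small, the population eases up to the carrying
# capacity K; when r is large, it overshoots K and then crashes back,
# oscillating around the limit instead of settling smoothly.
def simulate(r, k=1.0, n0=0.05, steps=40):
    n = n0
    history = [n]
    for _ in range(steps):
        n = n + r * n * (1.0 - n / k)
        history.append(n)
    return history

slow = simulate(r=0.5)  # eases up to K, never exceeds it
fast = simulate(r=2.5)  # overshoots K, then swings above and below it

print(f"peak (r=0.5): {max(slow):.3f}")  # stays below K = 1.0
print(f"peak (r=2.5): {max(fast):.3f}")  # exceeds K before crashing back
```

The point of the sketch is only that the same growth rule produces either a smooth approach or an overshoot-and-crash, depending on how fast growth is relative to the resource limit.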
 
McKee is a philosopher and Rees is an astronomer; neither has a say on the use of biotechnology, and it is telling that McKee has to reference a criminal like Jiankui. Besides the medical, biological, and legal ethics of human gene editing, there is the problem that it is currently unfeasible in individuals.

But it would be useful if GMO became more accepted, since we could, for example, cut down on antibiotics (resistance problems) and pesticides (which are spread by insects and kill outside the fields).
 
A study looked at robotic exploration but found that humans do it faster and cheaper, assuming you invest enough money in crewed exploration. Perhaps future generations of technology will change that, but we'll have to wait and see.

Historically, humans explore and settle wherever we can, which will likely happen with the rest of the system too. And if we get to the Oort cloud we will go interstellar and perhaps never bother with the energetically costly and risky gravity wells earlier generations called "planets". Then we will speciate; meanwhile, it takes only one crossbreeding per generation on average to prevent such splits, so it is unlikely to ever happen in the Solar System.

The authors are probably correct that somebody somewhere will do it, even if illegally. It has become too easy to be controlled effectively.
It is costly but fairly easy to try; however, current techniques have too many unintended insertions to work in individuals or safely on fertilized eggs. That may never change: chemistry is basically random, and crowded cell chemistry even more so (stochastic metabolism, cancer, mutations).

300,000 years ago there were multiple human species on Earth. I'd argue the Earth was more interesting then, a little like Tolkien's Middle Earth. With gene editing and AI, perhaps the solar system will host multiple intelligent species once again. Some will be 100% biological, some robots, and some a mix of the two. Some will live in the real universe and some will prefer their simulated virtual worlds. In 1,000 years it will be a place even Tolkien could not imagine.
Unless you develop technology that reproduces, there will only be one human population, due to the population genetics I described above: one crossbreeding per generation on average suffices.
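The "one crossbreeding per generation" figure comes from classic population genetics: in Sewall Wright's island model, the equilibrium fixation index between subpopulations is roughly F_ST ≈ 1/(4Nm + 1), where Nm is the effective number of migrants per generation. A quick sketch (the formula is Wright's; the sample Nm values are just illustrative):

```python
# Wright's island-model approximation for equilibrium genetic divergence:
#   F_ST ~ 1 / (4*Nm + 1)
# where Nm is the effective number of migrants (here: crossbreedings)
# exchanged per generation between subpopulations.
def f_st(migrants_per_generation):
    return 1.0 / (4.0 * migrants_per_generation + 1.0)

for nm in (0.1, 1.0, 10.0):
    print(f"Nm = {nm:>4}: F_ST ~ {f_st(nm):.2f}")

# Even a single migrant per generation (Nm = 1) holds F_ST at ~0.2,
# far short of the near-total isolation needed for speciation.
```

So even very sparse contact between settlements keeps the populations genetically one species.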

Looking back at the histories of the populations of other animals, exponential population increases such as humans are currently experiencing are typically followed by major population crashes.

So, the real question is whether humans are actually so much smarter than the other animals of Planet Earth that we can avoid crashing our own population.
Population projections suggest we are heading for a crash from 10 billion people to about 1 billion sometime after 2100. Not primarily because of technology (though birth control has made it a possibility) but for social reasons. Cf. China's shrinking population: having children cuts into your personal time as well as your finances.

I am not so sure that building robots that are smarter than humans is a smart idea.
That ship has sailed: current LLMs have made better medical diagnoses, and behaved better toward patients, than the average doctor of above-average intelligence.

Like ChrisA says, they are different, though. Currently they have limited introspection, say, for adjusting themselves at the parameter level during basic training.

We have the same situation with AI. Some say "it must be regulated", but the reality is that even undergraduate computer science majors can read journal articles and build their own AI, and thousands of them can and do. The technology is now in the wild.
AI is regulated. Like GMO technology, it isn't feasible for hackers. It takes access to vast amounts of data to make LLMs, and costly cloud computing power to train their transformer cores, even if you only want to make some limited AI such as cat-image recognition (say). 🐈
 
Regarding regulatory possibilities:

We have already had a Chinese doctor create some gene-edited children. Yes, it takes some significant equipment, but that equipment is already not supervised sufficiently to prevent "illegal" uses. Add in a motivated multi-billionaire, and all sorts of things could be attempted.

And, AI is just in its beginning stages. There are already multi-billionaires working on multiple versions. Yes, it currently requires massive amounts of computing infrastructure to train and use it. But, with nanotechnology and established algorithms, I am not seeing any theoretical limit on how small an AI "being" could be made, even mass-produced. We can already make electrical junction circuits smaller than the synapses in human brains, so I don't see any reason we could not eventually make an AI brain of similar function in a similar volume. I think in AI we are still in the "vacuum tube computer" stage of development, and getting to something analogous to today's computers using integrated circuits on multi-layer chips will make AI more mobile and available to more users. The question is, what will those users "train" their AIs to do: "serve humanity", or "take over the world" for some rogue user?
 
