In the book <i>The Singularity Is Near</i>, Kurzweil claims the following (from Wikipedia):<br /><br /><blockquote><font class="small">In reply to:</font><hr /><p>The process of "waking up" the universe could be complete as early as 2199, or might take billions of years depending on whether or not machines could figure out a way to circumvent the speed of light for the purposes of space travel.</p><hr /></blockquote><br /><br />And:<br /><br /><blockquote><font class="small">In reply to:</font><hr /><p>With the entire universe made into a giant, highly efficient supercomputer, A.I./human hybrids (so integrated that, in truth, it is a new category of "life") would have both supreme intelligence and physical control over the universe. Kurzweil suggests that this would open up all sorts of new possibilities, including abrogation of the laws of physics, interdimensional travel, and a possible infinite extension of existence (true immortality).</p><hr /></blockquote><br /><br />Either the universe is already a supercomputer, or no other civilization out there has ever reached our level of technology. Or he is just plain wrong, which happens to be what I believe.<br /><br />He might be right about many of the things he claims, but I don't think we can say 1. anything about the preferences of future AIs, and 2. even if we assume we have, from the beginning, given them the preference to make life as good as possible, with as few limitations as possible, for both themselves and other sentient beings, we still can't say anything about what course of action would be the most logical way to reach that goal. As far as we know, it might be to leave this universe behind and live in a small computer placed in a never-ending dimension.<br /><br />Why, for instance, would they want to replicate at all?
I find it just as believable that they would take a hedonistic course of action (if they are indeed conscious) and give themselves the least energy-costly preference possible, that is (for example) to live out eternity.