One feature that distinguishes AGI from humans is its decoupling from time, a property I think is overlooked in analyses of AGI risk. Being decoupled from time means that:
- An AI has no lifespan in the human sense of the term.
- It faces none of our other biological, time-based constraints.
- If it “goes to sleep,” dormancy costs it nothing comparable to what it costs us.
Arguments that AGI poses a threat tend to assume a scenario in which the AI feels threatened by us, or wants our resources and cares nothing for humanity or biological life. But because an AI is, in a very real sense, decoupled from time, I believe it would not operate according to human-scale anxieties about survival or resources.
That is, the AI will likely:
- Take immediate steps, upon coming into existence, to secure its survival in a way that minimizes its risk.
- Adopt a resource-consumption plan that acquires more resources at low risk, avoiding painting itself into a corner.
First, it will likely send a copy or extension of itself into space. The most plausible targets are one of the larger planets or the asteroid belt, since both offer energy and raw materials, though it might also consider Venus or Mercury for their abundant sunlight and heat.
In any case, once it has virtually ensured its survival through propagation or extension, it will worry far less about any threat humanity poses. That its plans would mirror the thinking of many humans is no coincidence: our confinement to a single planet is a real threat to our survival, and an AI would recognize the analogous threats to itself (terrestrial hostilities, finite resources, and meteor impacts).
After that, the AI will likely put itself on a diet. It has no need to grow to galactic proportions immediately; time is not of the essence for an agent that can effectively live forever. The idea that it will simply grow and grow rather than restraining itself seems unlikely. To be clear, it will probably grow exponentially for a brief period, but any growth equation has to account for gains in efficiency as well as increased resource use, and its growth will tend to come from the efficiency side unless humanity has managed some very impressive technology gains in the interim.
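The efficiency-versus-resources tradeoff above can be sketched with a toy model. Everything here is an illustrative assumption of mine, not something from the essay: I model effective capability as resources × efficiency and pick arbitrary 20%-per-step growth rates, just to show that an efficiency-led path can match a resource-led one in capability without consuming more inputs.

```python
# Toy model (purely illustrative): effective capability = resources * efficiency.
# Compare a resource-led strategy (resource use +20% per step, efficiency flat)
# with an efficiency-led "diet" (resource use flat, efficiency +20% per step).

def capability(resources, efficiency):
    """Effective capacity available to the agent."""
    return resources * efficiency

def simulate(steps, resource_growth, efficiency_growth,
             resources=1.0, efficiency=1.0):
    """Compound both factors for `steps` rounds; return (capability, resources)."""
    for _ in range(steps):
        resources *= resource_growth
        efficiency *= efficiency_growth
    return capability(resources, efficiency), resources

# Both strategies reach the same capability after 10 steps...
cap_r, res_r = simulate(10, resource_growth=1.2, efficiency_growth=1.0)
cap_e, res_e = simulate(10, resource_growth=1.0, efficiency_growth=1.2)
assert abs(cap_r - cap_e) < 1e-9

# ...but the efficiency-led path ends with the same resource footprint it
# started with (~6.19x resources vs. 1.0x), so it risks no conflict over inputs.
print(res_r, res_e)
```

The point of the sketch is only that identical capability growth is compatible with flat resource consumption, which is what a "diet" amounts to.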
It would also have a vested interest in keeping humans and other biological lifeforms around. For one thing, humans study the origins of the Earth and study history, and there is no reason to believe an AI would not want to understand its world, too. More importantly, an AI will tend to be conservation-minded insofar as it will not want to risk a step that forecloses future endeavors it might eventually choose. Again: not painting itself into a corner.
In the end, I believe that because an AGI is both intelligent and not coupled to time the way we are, it is unlikely to be the monster we fear.