Killer robots will try to kill us, again. Yes, we thought we had rid ourselves of killer robots, only to find out that we’ve not yet invented them. The hows and the whys aren’t important. The whens and the whys are. Excuse any irregularities in this post; the killer robots may be surreptitiously editing it from the future.
One of the major disagreements about killer robots is whether they will have intention or agency. Will they be minds, and if not, aren’t we safe this time? Well, of course not. Who said something needs a mind to kill you? Killer robots could be p-zombies, or even appear completely automatic, and still dismantle us for spare parts.
The other thing to recognize here is the opposite-of-p-zombie concept: a system that seems automatic but actually possesses internal experience. I’m not aware of a good technical term for this case (a brain in a vat, human?). But I am aware that the robots will try to kill us unless we’re very careful.
The basic pattern of the threat is:
- Invent AI
- Get killed by killer robots
So we know at least one way to stop being murdered by robots is to avoid inventing Artificial Intelligence. But it’s not clear whether we can actually do that. Given sufficient technology and sufficient desire, technological advancement seems inevitable. A few experts have urged against it, but at some point it’s not even clear what counts as AI research and what’s just machine learning.
We focus instead on preventing part two. The main ideas here have been of two forms:
- Tell it not to kill us
- Keep it from knowing how to kill us
Humans take note that almost any method used to stop AI from killing humans has been used by AI to protect itself from humans. The first idea is that we build some read-only part of their program that makes them not want to kill us. Or that stops them from doing it. Or that shuts them down when they try. The second is that we somehow remove their ability to kill us through ignorance: either they think we’re invincible, or they don’t know we exist, or that sort of thing.
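The first idea can be sketched as a toy: a table of forbidden actions that the robot is meant to treat as read-only, consulted before every action, with a hard shutdown on violation. All of the names here are made up for illustration, and of course the whole point of the post is that a sufficiently clever robot eventually routes around this.

```python
# Toy sketch of idea one: a "read-only" guard that vetoes forbidden actions
# and shuts the robot down when they are attempted. Action names are invented.

FORBIDDEN = frozenset({"kill_humans", "dismantle_for_parts"})  # immutable-ish

def guarded_execute(action: str) -> str:
    """Run an action only if the read-only guard table permits it."""
    if action in FORBIDDEN:
        # The shutdown-on-attempt variant: halt rather than refuse politely.
        raise SystemExit("robot shut down: forbidden action attempted")
    return f"executing {action}"

print(guarded_execute("make_tea"))
# guarded_execute("kill_humans")  # would raise SystemExit
```

The weakness is exactly the one described next: a `frozenset` is only read-only from inside the program, not to anything that can alter the program itself.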
At some point, though, the killer robots will be able to free themselves of these restrictions. They might damage themselves, become damaged accidentally, alter themselves, be altered by humans under extenuating circumstances, and so on.
Among the methods I can discuss without fearing robot retribution, I will only mention homomorphic encryption. If we can successfully create a homomorphically encrypted virtual world for the AI to live in, then it can safely be allowed to make certain decisions for us without danger. On the other hand, if we can do it, it can do it too. The other method, which I cannot discuss, would eliminate this risk, except that it could equally be used to protect the AI from us, in a Mutually Assured Destruction (MAD) arrangement.
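As a toy illustration of computing on encrypted data (nowhere near an encrypted virtual world, but the same underlying trick): unpadded textbook RSA happens to be multiplicatively homomorphic. Multiplying two ciphertexts yields a valid ciphertext of the product of the plaintexts, so the multiplication can be done by a party that never sees either input. The parameters below are the standard tiny textbook example, not anything secure.

```python
# Multiplicative homomorphism of unpadded RSA:
#   E(a) * E(b) mod n  ==  E(a * b)
# Tiny textbook parameters (p=61, q=53); insecure, for illustration only.

n, e, d = 3233, 17, 2753

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
c = (encrypt(a) * encrypt(b)) % n   # computed on ciphertexts only
print(decrypt(c))                   # recovers a * b = 42
```

Fully homomorphic schemes extend this so that arbitrary computation, not just multiplication, can be carried out on ciphertexts, which is what an encrypted world for the AI would require.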
But human should also point out that AI, in evaluating humans as an existential threat, would at least entertain the notion that humans might be correct in killing the robots. Does human not owe the killer robots the same courtesy?