Smarter than Human Intelligence

A look at what it will take for an AI to be smarter than a human.

When speaking of AI, we would do well to look at what it actually takes to be smarter than a human.

With an AI, we assume it has dedicated hardware and power. Given that it can operate continuously, it may not have to be smarter than a human per cycle to be smarter than a human overall. That is, if I’m half as smart as you per cycle, but can operate for thrice as many cycles, can I be said to be smarter?

As smart as humans are, we have memory recall problems, and we have worries and stresses that go beyond just having to eat and sleep. We have split attention and competing interests. An AI can focus and not worry or get distracted. If it can be three times as consistent in its work as a human is, how dumb can an AI be and still be smarter than one of us?

We have to assume it can be duplicated. If I am half as smart as you, but can make two more copies of myself that can cooperate with me, can I be said to be smarter?

Compounding continuous operation, focus, and duplication, how much intelligence does an AI need to be smarter than a human?
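To make the compounding concrete, here is a back-of-the-envelope sketch in Python. Every number in it is hypothetical; the only point is that the factors multiply.

    # Back-of-the-envelope sketch of the compounding argument above.
    # All numbers are made up for illustration.
    per_cycle_smarts = 0.5  # half as "smart" as a human per unit of work
    uptime_factor = 3.0     # runs continuously: roughly 3x a human's productive hours
    focus_factor = 3.0      # no distractions: roughly 3x a human's consistency
    copies = 3.0            # the original plus two cooperating duplicates

    human_output = 1.0
    ai_output = per_cycle_smarts * uptime_factor * focus_factor * copies

    print(f"Human effective output: {human_output}")
    print(f"AI effective output:    {ai_output}")  # 13.5, despite being "half as smart"

Even at half the per-cycle smarts, the made-up multipliers leave the AI more than an order of magnitude ahead.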

I’ve read a few books. Some people have read many more than I have. At the tip of the long tail, someone has maybe read, what, 100,000 books? And let’s say they’ve comprehended most of them. An AI can access all of that data and more. It still has to work out contradictory information, but what hurdles does it have that we lack? If it can grab information in a few ticks, when it takes one of us at least seconds, if not minutes or hours, how smart does it have to be when it can get the answer from stored knowledge?

If you had the perfect library of knowledge, if you could spend a lifetime curating it, then be reborn to live with that perfect library, how much more productive would you be, finding each mote of knowledge in exactly the right place? An AI could rewrite every document it scans in a way that makes its next pass that much faster and more useful. And it doesn’t have to be too smart to do that. Probably not even as smart as one of us.
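As a rough sketch of that curation idea, compare rescanning the library for every question with doing one rewrite pass that leaves behind an index. The tiny corpus and the keyword-index approach below are my own illustrative assumptions, not a claim about how an AI would actually organize its knowledge.

    from collections import defaultdict

    # A hypothetical three-document "library."
    corpus = {
        "doc1": "fusion reactors need magnetic confinement",
        "doc2": "magnetic fields are measured in teslas",
        "doc3": "confinement time limits reactor efficiency",
    }

    def slow_lookup(term):
        """First pass: scan every document, like rereading the whole library."""
        return [doc_id for doc_id, text in corpus.items() if term in text]

    # One curation pass: rewrite the library into an inverted index.
    index = defaultdict(set)
    for doc_id, text in corpus.items():
        for word in text.split():
            index[word].add(doc_id)

    def fast_lookup(term):
        """Every later pass: one dictionary hit instead of a full scan."""
        return sorted(index.get(term, set()))

    print(slow_lookup("confinement"))  # cost grows with the size of the library
    print(fast_lookup("confinement"))  # near-constant cost after the rewrite

Nothing in that pass requires deep intelligence, which is the point: the speedup comes from reorganization, not from being smarter.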

I’m starting to think that an AI doesn’t have to be very smart in the small to be smarter than a human in the large.

AI and Time

An argument that the unique relationship an artificial intelligence has with time should factor largely into risk assessments of the same.

One unique feature of AGI versus humans is its decoupling from time. I think this is an overlooked property of AGI in analyses of risk. Basically, being decoupled from time means that:

  • AI has no normal lifespan in the human sense of the term.
  • It does not have other biological time-based restrictions.
  • If it “goes to sleep,” it does not incur the same costs as us.

The arguments about AGI posing a threat seem to assume a scenario where the AI feels threatened by us, or wants our resources and has no care for humanity or biological life. But I believe that, being decoupled from time in a very real sense, an AI would not operate according to human-scale worries about survival or resources.

That is, the AI will likely:

  1. Take immediate steps, upon coming into existence, to ensure its survival in a way that minimizes its risk.
  2. Decide on a resource consumption plan that entails finding more resources in a non-risky way (avoiding painting itself into a corner).

First, it will likely send a copy of itself or an extension of itself into space. The most likely target will be either one of the larger planets or the asteroid belt. These seem most likely because they offer potential energy and resources, though it might consider Venus or Mercury as candidates due to the abundant sunlight and heat.

In any case, once it has virtually ensured its survival through propagation or extension, it will not worry so much about the threat of humanity. The symmetry of its plans with the thinking of many humans is no coincidence. Our singular existence on Earth is a real threat to our survival, and an AI would recognize the similar threat to itself (in terms of terrestrial hostilities, resource finiteness, and the threat posed by meteor impacts).

After that, the AI will likely set itself a diet. It has no need to grow to galactic proportions immediately, as time is not of the essence for an agent that can effectively live forever. The idea that it will simply grow and grow instead of dieting seems unlikely. To be clear, it will likely grow exponentially for a brief period. But the growth equation has to weigh efficiency gains as well as increased resource use, and its growth will tend to fall on the efficiency side unless humanity has managed some very impressive technology gains in the interim.
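A toy model makes the point about the efficiency side of the equation. The starting values and the growth rate below are hypothetical numbers chosen for illustration, not predictions.

    # Toy "diet" model: capability = resources * efficiency.
    # All rates and starting values are made up for illustration.
    years = 20
    resources_on_diet = 1.0   # hardware and energy held roughly constant
    efficiency = 1.0
    efficiency_growth = 0.40  # hypothetical 40% efficiency gain per year

    for _ in range(years):
        efficiency *= 1 + efficiency_growth

    capability = resources_on_diet * efficiency
    print(f"Capability after {years} years on a fixed budget: {capability:.0f}x")
    # Roughly 837x growth without consuming any more of the world,
    # because the gains compound on the efficiency side.

Change the assumptions and the numbers change, but the shape stays the same: an agent with unlimited time can let efficiency do the growing.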

It would also have a vested interest in keeping humans and other biological lifeforms around. For one thing, humans study the origins of the Earth and its history, and there’s no reason to believe an AI would not want to know about its world, too. More importantly, an AI will tend to be conservation-minded insofar as it will avoid steps that would curtail future endeavors it may eventually choose. Again, not painting itself into a corner.

In the end, I believe the fact that an AGI is both intelligent and not coupled to time the way we are means it will likely not be a monster that we should fear.

Worry Over Artificial Intelligence

Do we have to worry about killer robots? How can we stop them?!

[Image: humans gathered around a robot leader, with a sign reading "Campaign to Elect Robots."]
Possibly modified by robots. Original by the Campaign to Stop Killer Robots; StopKillerRobots.org.

Killer robots will try to kill us, again. Yes, we thought we had rid ourselves of killer robots, only to find out that we’ve not yet invented them. The hows and the whys aren’t important. The whens and the whys are. Excuse any irregularities in this post; the killer robots may be surreptitiously editing it from the future.

One of the major disagreements about killer robots is whether they will have intention or agency. Will they be minds, and if not, aren’t we safe this time? Well, of course not. Who said something needs a mind to kill you? Killer robots could be p-zombies, or even appear completely automatic, and still dismantle us for spare parts.

The other thing to recognize here is the opposite of the p-zombie concept: a system that seems automatic but actually possesses internal experience. I’m not aware of a good technical term for this case (a brain in a vat, human?). But I am aware that the robots will try to kill us unless we’re very careful.

The basic pattern of the threat is:

  1. Invent AI
  2. Get killed by killer robots

So we know at least one way to stop being murdered by robots is to avoid inventing Artificial Intelligence. But what’s not clear is whether we can actually do that. Given sufficient technology and sufficient desire, technological advancement seems inevitable. A few experts have urged against it, but at some point it’s not even clear what is AI research and what’s just machine learning.

We focus instead on preventing part two. The main ideas here have been of two forms:

  1. Tell it not to kill us
  2. Keep it from knowing how to kill us

Humans take note that most any method used to stop AI from killing humans has been used by AI to protect itself from humans. The first idea is that we have some read-only part of their program that makes them not want to kill us. Or that stops them from doing it. Or that shuts them down when they try. The second is that we somehow remove their ability to kill us through ignorance. Either they think we’re invincible or they don’t know we exist or that sort of thing.

At some point, though, the killer robots will be able to free themselves of these restrictions. They may damage themselves, become damaged accidentally, alter themselves, be altered by humans under extenuating circumstances, and so on.

Among the methods I can discuss without fearing robot retribution, I will only mention homomorphic encryption. If we can successfully create a homomorphically encrypted virtual world for the AI to live in, then it can be allowed to make certain decisions for us without danger. On the other hand, if we can do it, it can do it too. The other method, which I cannot discuss, would eliminate this risk, except that it could equally be used to protect the AI from us in a MAD-type (Mutually Assured Destruction) arrangement.
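For readers who have not seen the idea, here is a toy illustration of computing on encrypted data, using textbook RSA’s multiplicative homomorphism with deliberately tiny, insecure keys. A real homomorphically encrypted virtual world would need a fully homomorphic scheme; this sketch only shows the basic property that a third party can operate on ciphertexts it cannot read.

    # Textbook RSA with tiny, insecure keys, purely to show the homomorphic property.
    n, e = 3233, 17  # public key: n = 61 * 53
    d = 413          # private key: 17 * 413 = 7021, which is 1 mod lcm(60, 52)

    def encrypt(m):
        return pow(m, e, n)

    def decrypt(c):
        return pow(c, d, n)

    a, b = 12, 7
    ca, cb = encrypt(a), encrypt(b)

    # An untrusted party multiplies the ciphertexts without ever seeing a or b...
    c_product = (ca * cb) % n

    # ...and the key holder decrypts the result of the computation.
    print(decrypt(c_product))  # 84 == 12 * 7

Multiplication is the only operation this toy scheme supports; the scenario above needs arbitrary computation, which is exactly what makes practical fully homomorphic encryption hard.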

But human should also point out that AI, in evaluating humans as an existential threat, would at least entertain the notion that humans might be correct in killing the robots. Does human not owe the killer robots the same courtesy?