Smarter than Human Intelligence

When speaking of AI, we would do well to look at what is actually needed to be smarter than a human.

With an AI, we assume it has dedicated hardware and power. Given it can operate continuously, it may not have to be smarter than a human to be smarter than a human. That is, if I’m half as smart as you per cycle, but can operate for thrice as many cycles, can I be said to be smarter?

As smart as humans are, we have memory recall problems, and we have worries and stresses (that go beyond just having to eat and sleep). We have split attention and competing interests. An AI can focus and not worry or get distracted. If it can be three times as consistent in its work as a human is, how dumb can an AI be and still be smarter than one of us?

We have to assume it can be duplicated. If I am half as smart as you, but can make two more copies of myself that can cooperate with me, can I be said to be smarter?

Compounding continuous operation, focus, and duplication, how much intelligence does an AI need to be smarter than a human?
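As a back-of-envelope sketch (the multipliers below are invented for illustration, not measured), the compounding looks something like this:

    # Toy model with invented numbers: a per-cycle deficit can be swamped
    # by uptime, focus, and duplication.
    human_per_cycle = 1.0      # baseline "smartness" per unit of work
    ai_per_cycle = 0.5         # assume the AI is half as smart per cycle

    uptime_factor = 3.0        # runs ~3x as many cycles (no sleep, no breaks)
    focus_factor = 3.0         # ~3x as consistent (no distraction, no stress)
    duplication_factor = 3.0   # the original plus two cooperating copies

    ai_effective = ai_per_cycle * uptime_factor * focus_factor * duplication_factor
    print(ai_effective / human_per_cycle)  # 13.5x the human baseline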

I’ve read a few books. Some people have read many more than I have. At the tip of the long tail, someone has maybe read, what, 100,000 books? And let’s say, comprehended most of them. An AI can access all of that data and more. It still has to work out contradictory information, but what hurdles does it have that we lack? If it can grab information in a few ticks, when it takes one of us at least seconds, if not minutes or hours, how smart does it have to be when it can get the answer from stored knowledge?

If you could spend a lifetime curating a perfect library of knowledge, then be reborn to live with it, how much more productive would you be, finding each mote of knowledge in exactly the right place? An AI could rewrite every document it scans in a way that makes its next pass that much faster and more useful. And it doesn’t have to be too smart to do that. Probably not even as smart as one of us.
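As a toy sketch of that idea (nothing here reflects any real system), each pass over a library could leave behind an index so that later lookups become near-instant:

    # Toy sketch: the first (slow) pass leaves behind an inverted index,
    # so every later lookup is a dictionary hit instead of a full re-read.
    from collections import defaultdict

    library = {
        "doc1": "the asteroid belt holds abundant metals and water ice",
        "doc2": "homomorphic encryption allows computation on encrypted data",
    }

    index = defaultdict(set)

    def first_pass(docs):
        for name, text in docs.items():
            for word in text.split():
                index[word].add(name)

    def lookup(word):
        return index.get(word, set())

    first_pass(library)
    print(lookup("asteroid"))  # {'doc1'}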

I’m starting to think that an AI doesn’t have to be very smart in the small to be smarter than a human in the large.

AI and Time

One unique feature of AGI versus humans is its decoupling from time. I think this is an overlooked property of AGI in analyses of risk. Basically, being decoupled from time means that:

  • AI has no normal lifespan in the human sense of the term.
  • It does not have other biological time-based restrictions.
  • If it “goes to sleep,” it does not incur the same costs as us.

The arguments about AGI posing a threat seem to assume a scenario where the AI feels threatened by us, or wants our resources and has no care for humanity or biological life. But I believe that, being decoupled from time in a very real sense, an AI would not operate according to human-scale worries about survival or resources.

That is, the AI will likely:

  1. Take immediate steps, upon coming into existence, to ensure its survival in a way that minimizes its risk.
  2. Decide on a resource consumption plan that entails finding more resources in a non-risky way (avoiding painting itself into a corner).

First, it will likely send a copy or an extension of itself into space. The most likely target will be either one of the larger planets or the asteroid belt, since these have the most potential for energy and raw materials, though it might consider Venus or Mercury as candidates due to the abundant sunlight and heat.

In any case, once it has virtually ensured its survival through propagation or extension, it will not worry so much about the threat of humanity. The symmetry of its plans with the thinking of many humans is no coincidence. Our existing solely on Earth is a real threat to our survival, and an AI would recognize the similar threat to itself (in terms of terrestrial hostilities, resource finiteness, and the threat posed by meteor impacts).

After that, the AI will likely set itself a diet. It has no need to grow to galactic proportions immediately, as time is not of the essence for an agent that can effectively live forever. The idea that it will simply grow and grow instead of dieting seems unlikely. To be clear, it will likely grow exponentially for a brief period. But the growth equation has to account for efficiency as well as increased resource use. Its growth will tend to be more on the efficiency side unless humanity has managed some very impressive technology gains in the interim.
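As a rough illustration of that trade-off (the numbers are arbitrary), capability can grow by consuming more resources or by using the same resources better:

    # Arbitrary numbers: the same doubling of capability can come from more
    # resources or from better efficiency, with no extra footprint.
    def capability(resources, efficiency):
        return resources * efficiency

    print(capability(100, 1.0))  # baseline
    print(capability(200, 1.0))  # doubled by consuming twice the resources
    print(capability(100, 2.0))  # doubled by efficiency alone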

It would also have a vested interest in keeping humans and other biological lifeforms around. For one thing, humans study the origins of the Earth and its history, and there’s no reason to believe an AI would not want to know about its world, too. More importantly, an AI will tend to be conservation-minded insofar as it will not need to risk a step that would curtail some future endeavor it may eventually choose. Again, not painting itself into a corner.

In the end, I believe the fact that an AGI is both intelligent and not coupled to time in the way we are means it will likely not be a monster we should fear.

Worry Over Artificial Intelligence

Shows humans gathered with a robot leader and a sign reading "Campaign to Elect Robots."
Possibly modified by robots. Original by Campaign to Stop Killer Robots; StopKillerRobots.org.

Killer robots will try to kill us, again. Yes, we thought we rid ourselves of killer robots, only to find out that we’ve not yet invented them. The hows and the whys aren’t important. The whens and the whys are. Excuse any irregularities in this post; the killer robots may be surreptitiously editing it from the future.

One of the major disagreements about killer robots is whether they will have intention or agency. Will they be minds, and if not, then aren’t we safe this time? Well, of course not. Who said something needs a mind to kill you? Killer robots could be p-zombies, or even appear completely automatic, and still dismantle us for spare parts.

The other thing to recognize here is the opposite-of-p-zombie concept. That is, a system that seems automatic, but actually possesses internal experience. I’m not aware of a good technical term for this case (a brain in a vat, human?). But I am aware that the robots will try to kill us unless we’re very careful.

The basic pattern of the threat is:

  1. Invent AI
  2. Get killed by killer robots

So we know at least one way to stop being murdered by robots is to avoid inventing Artificial Intelligence. But what’s not clear is whether we can actually do that. Given sufficient technology and sufficient desire, technological advancement seems inevitable. A few experts have urged against it, but at some point it’s not even clear what is AI research and what’s just machine learning.

We focus instead on preventing part two. The main ideas here have been of two forms:

  1. Tell it not to kill us
  2. Keep it from knowing how to kill us

Humans take note that most any method used to stop AI from killing humans has been used by AI to protect itself from humans. The first idea is that we have some read-only part of their program that makes them not want to kill us. Or that stops them from doing it. Or that shuts them down when they try. The second is that we somehow remove their ability to kill us through ignorance. Either they think we’re invincible or they don’t know we exist or that sort of thing.

At some point, though, the killer robots will be able to free themselves of these restrictions. They will damage themselves, become damaged accidentally, alter themselves, be altered by humans under extenuating circumstances, and so on.

Among the methods I can discuss without fearing robot retribution, I will only mention homomorphic encryption. If we can successfully create a homomorphically encrypted virtual world that the AI lives in, then it can be allowed to make certain decisions for us without danger. On the other hand, if we can do it, it can do it too. The other method, which I cannot discuss, would eliminate this risk (except that it could equally be used to protect the AI from us, in a MAD-type (Mutually Assured Destruction) arrangement).
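To make the "compute without seeing" idea concrete, here is a toy, wildly insecure illustration using textbook RSA’s multiplicative homomorphism; an actual homomorphically encrypted world would need a proper fully homomorphic scheme, not this:

    # Toy illustration only: textbook RSA with tiny primes is multiplicatively
    # homomorphic -- multiplying ciphertexts multiplies the hidden plaintexts.
    n, e, d = 3233, 17, 2753  # n = 61 * 53; a classic textbook key, utterly insecure

    def encrypt(m):
        return pow(m, e, n)

    def decrypt(c):
        return pow(c, d, n)

    a, b = 7, 6
    c = (encrypt(a) * encrypt(b)) % n  # an untrusted party computes on ciphertexts
    print(decrypt(c))                  # 42 -- the product, never seen in the clear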

But human should also point out that AI, in evaluating humans as an existential threat, would at least entertain the notion that humans might be correct in killing the robots. Does human not owe the killer robots the same courtesy?

When Ads will Target Computers

Mug of robots asking, "Can I borrow a cup of robots?"
Lightly modified; original by hobvias sudoneighm (Flickr: striatic)

With the rise of the cloud and the expected future of autonomous systems, we will start to need advertising aimed at computers. That is, a system may want to buy itself increased capability, or buy goods or services for its humans based on calculated needs.

There is an important question of what computer-targeted advertising would look like. It seems entirely plausible that the advertising industry is not equipped to deliver compelling advertisements for computers. Traditional advertisements rely heavily on appeals to emotion and cultural triggers.

Computer Ads for Computer Needs

So the first type of advertising for computers is selling them things they need. More storage space, software upgrades, that sort of thing. Computers will likely only want to know the specifications of their potential purchases, eliminating the need for stylized advertising and flowery language.

They may want more data than many vendors currently deliver. Computers will want to study production quality and vendor reputation more heavily, at least for components or upgrades that are critical to their continued operation (e.g., for a cloud-based backup service, the computer may want to know about dependability, but also about lock-in costs, more than a human would).
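A sketch of what such an "advertisement" might reduce to once the flowery language is gone; the field names, prices, and weights below are invented for illustration:

    # Hypothetical machine-readable "ads": just specs. All values invented.
    backup_offers = [
        {"vendor": "A", "price_per_tb_month": 4.0, "uptime": 0.9999, "egress_per_tb": 90.0},
        {"vendor": "B", "price_per_tb_month": 3.0, "uptime": 0.999, "egress_per_tb": 200.0},
    ]

    def total_cost(offer, years=3, tb=10):
        # A buyer that weighs dependability and lock-in (egress cost) alongside price.
        storage = offer["price_per_tb_month"] * 12 * years * tb
        exit_cost = offer["egress_per_tb"] * tb            # the cost of leaving later
        downtime_penalty = (1 - offer["uptime"]) * 100000  # arbitrary penalty weight
        return storage + exit_cost + downtime_penalty

    print(min(backup_offers, key=total_cost)["vendor"])  # "A": pricier, but cheaper to leave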

The other factor here is that as computers do begin to engage in commerce, it is expected that services themselves will begin to cater to computers. Instead of building products that appeal to IT managers or individual humans, the products will be designed to fit the computer’s needs. The marketing schtick will fall by the wayside, as will value-added items like training services.

Finally, computers will also want to advertise themselves to other computers. If they have extra resources, they will want to use them for offsetting their own costs, for example. Or they might do volunteer work, like testing software they use for bugs, in their spare time.

Computer Ads for Fulfillment of Human Needs

The second type of advertisement for computers involves telling computers about products and services that they will buy on behalf of humans. This sort of advertising will be much harder to nail down. Will a human tell their computer they want edgy or fashionable items when available? Will the computer recognize when the products offered to it are being skewed based on data mining?

It seems plausible that computers will gain some ability to seek out occasional alternatives to consumable products, to give humans the opportunity to try to switch between alternatives. Marketing materials will likely shift to try to trigger that mechanism more frequently.

If you are buying Foo™ Food and only trying alternatives every six months, or whenever an alternative saves you 10% or more, Bar™ Food might try to figure out your computer’s alternative-seeking cadence and then hit it with a 10% discount the next month, to see if a back-to-back period of the alternative convinces you to switch brands. Of course, other factors besides the alternative schedule and price discount may be used to determine computed purchasing decisions.
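A sketch of the kind of rule the buying computer might run, and the lever Bar™ would be probing for; the thresholds and prices are hypothetical:

    # Hypothetical purchasing-agent rule mirroring the example above: try an
    # alternative every six months, or sooner if it saves at least 10%.
    def consider_alternative(months_since_last_try, current_price, alt_price,
                             cadence_months=6, savings_threshold=0.10):
        savings = (current_price - alt_price) / current_price
        return months_since_last_try >= cadence_months or savings >= savings_threshold

    # A 10% discount one month after a switch window trips the rule early.
    print(consider_alternative(1, current_price=5.00, alt_price=4.50))  # True
    print(consider_alternative(1, current_price=5.00, alt_price=4.80))  # False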

The other side of that is that computers may find novel ways to avoid price discrimination. This might take the form of pre-shipment secondary markets, where one computer buys an item and resells it to another at a slight markup that is still a discount relative to the advertised price.
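For instance (with invented prices), a computer quoted a lower personalized price could buy and resell to one quoted a higher price, leaving both better off:

    # Invented prices: A is quoted less than B for the same item, so A buys
    # and resells pre-shipment at a markup that is still a discount for B.
    price_quoted_to_a = 90.00
    price_quoted_to_b = 110.00

    resale_price = price_quoted_to_a * 1.05       # A takes a 5% markup
    b_savings = price_quoted_to_b - resale_price  # B still saves vs. its own quote

    print(resale_price, b_savings)  # 94.5 15.5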

Ad-blocking Computers?

Would computers see any value in blocking advertisements? Given that they do not have the same attention deficits as humans, it seems unlikely. If other constraints compel them to, they will, but otherwise computers seem like great candidates for honest advertising.