The Artificial Intelligence that is a Corporation

What are climate change and carbon pollution but the gray goo problem?

Let’s start by asking what the purported dangers of general artificial intelligence are. The ones most listed include things like:

  • They won’t care about people, and will therefore get many humans killed, or kill them outright.
  • They won’t think about the consequences of their actions, instead blindly following whatever their goal is.
  • They will spread misinformation, and people will believe it. Those lies will cause harm.

Remind you of anyone?


Corporations (or any organization) are artificially intelligent constructions. While nominally there are humans at the controls, for a variety of reasons that notion is provably false, nothing but a happy fantasy.

You see, all those lawyers and C-suite suits are constrained by their directives and their personas (for which the corporations selected them in the first place), so they don’t really have all that free will stuff you’ve been taught since the Garden of Eden days.

If a CEO finds out that their company’s pollution is killing people, how many step up and shut it down? (I’d like to know, but I couldn’t find a reliable source, so take the question rhetorically if you wish.) More likely, they worry about the stock price and their children’s reputations, and at best they try to clean things up a bit, maybe pay for a few funerals.

The free will of any person is tempered by her fetters. She will act only as freely as she feels able, and think only as freely as the limits of her information allow. She will not see through lies she prefers to be true if it’s more profitable to ignore them. We see this pattern repeated: read about the meltdown at Chernobyl; read statements from Republicans after they leave office and no longer have to spout schlock to get reelected. Over and over we see the chains on thinking and on acting.

Now a lot of (rich and powerful) people share a curious worry about artificial intelligence. They call for rigor to ensure AI doesn’t discriminate against people (while various human systems do just that in jails, in which neighborhoods get polluted, in unequal education, and so on).

They call for protections against robots used in war, while the man-made wars continue to litter the earth with indiscriminate weapons, like land mines and unexploded cluster bomblets, that will kill for generations after the war ends.

And as for misinformation, corporations lie and mislead with impunity as long as it’s not outright fraud (and if it is, they pay their fine and keep on existing). There’s even a whole sub-industry dedicated to misinformation, known as marketing.


Should we be worried about general artificial intelligence, and even lesser forms, being used to harm our society and harm our planet? Yes!

But we should be alarmed (and many of us are) at all the ways these exact things are already being done. These threats are not novel to machine systems! They already happen, and most if not all have been going on for longer than the age of the United States of America.

The pollution isn’t worse because an AI does it. The discrimination isn’t worse because an AI does it. The deaths from indiscriminate warfare are not worse because an AI does it. The fact that an inhuman being perpetrates an act of inhumanity does not make it more inhumane than when a human does it, nor when an organ built of humans does it.


As I wrote about in my book, corporations are artificial persons, possessing artificial intelligence. That is, they have the means to dispatch pattern-matching systems (humans, and lately computers) to carry out tasks on data and react based on the results of the information they receive. The corporation is just one of many such constructs. Governments are artificial intelligences. NGOs, non-profits, departments, churches: all these organs have some level of artificial intelligence.

People of sufficient wealth and influence are themselves artificially intelligent. They can afford to hire people whose work they pass off as their own (ghostwriters, publicists, and so on). And all of us have some level of social AI working on our behalf (I didn’t make this computer, this internet, my clothes, and so on), but not nearly at the level the wealthy have to augment their existence through the labors of others. An AI isn’t going to pull the whole world out of a bit bucket. It will do as we already do: pull information from the rest of the world, asking people or machines to look things up, research, build, or whatever else. Just like corporations do today.

And what do corporations do with their artificial intelligence? Many of the same things great and scary, wonderful and terrible, that we worry AI will do. They abuse people, they educate people. They deliver aid to the needy, they create scarcity that makes people need aid. They help people move to a new country, to start a new life. They block people from moving to a new country, or deport them.

Churches have been around for thousands of years. Governments too. Sorry to break it to you, but AI already exists. It’s already a threat to peace and prosperity. It’s already hard to understand why it acts as it does, already hallucinates and operates in single-minded ways that ignore common sense.

Let us hope the next generation of artificial intelligence is a little smarter.

Smarter than Human Intelligence

A look at what it will take for an AI to be smarter than a human.

When speaking of AI, we would do well to look at what is needed to actually be smarter than a human.

With an AI, we assume it has dedicated hardware and power. Given it can operate continuously, it may not have to be smarter than a human to be smarter than a human. That is, if I’m half as smart as you per cycle, but can operate for thrice as many cycles, can I be said to be smarter?

As smart as humans are, we have memory recall problems, and we have worries and stresses (beyond just having to eat and sleep). We have split attentions and interests. An AI can focus without worrying or getting distracted. If it can be three times as consistent in its work as a human is, how dumb can an AI be and still be smarter than one of us?

We have to assume it can be duplicated. If I am half as smart as you, but can make two more copies of myself that can cooperate with me, can I be said to be smarter?

Compounding continuous operation, focus, and duplication, how much intelligence does an AI need to be smarter than a human?
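As a back-of-the-envelope illustration, here is a toy multiplicative model in Python. Every number in it is invented purely for illustration; the point is only how the multipliers compound.

```python
# Toy model: effective output = per-cycle ability x duty cycle x focus x copies.
# All numbers are invented for illustration, not measurements of anything real.

def effective_output(ability, duty_cycle, focus, copies):
    """Crude multiplicative model of useful work per day."""
    return ability * duty_cycle * focus * copies

# A human: baseline ability, ~16 waking hours, distracted much of the time.
human = effective_output(ability=1.0, duty_cycle=16 / 24, focus=0.4, copies=1)

# An AI half as capable per cycle, but always on, undistracted, and triplicated.
ai = effective_output(ability=0.5, duty_cycle=1.0, focus=1.0, copies=3)

print(f"human: {human:.2f}, ai: {ai:.2f}, ratio: {ai / human:.1f}x")
# human: 0.27, ai: 1.50, ratio: 5.6x
```

On these made-up numbers, an agent with half the per-cycle ability comes out more than five times ahead. The model is crude, but that is rather the point.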

I’ve read a few books. Some people have read many more than I have. At the tip of the long tail, someone has maybe read, what, 100,000 books? And let’s say, comprehended most of them. An AI can access all of that data and more. It still has to work out contradictory information, but what hurdles does it have that we lack? If it can grab information in a few ticks, when it takes one of us at least seconds, if not minutes or hours, how smart does it have to be when it can get the answer from stored knowledge?

If you had the perfect library of knowledge, if you could spend a lifetime curating it, then be reborn to live with that perfect library, how much more productive would you be, finding each mote of knowledge in the right place? An AI could rewrite every document it scans in a way that makes its next pass that much faster and more useful. And it doesn’t have to be too smart to do that. Probably not even as smart as one of us.
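A minimal sketch of that “rewrite as you scan” idea, assuming a toy placeholder corpus: a first pass builds an inverted index, and every later pass becomes a near-instant lookup instead of a rescan.

```python
# Sketch: pay the cost of scanning once, then every later pass is a lookup.
# The corpus here is a placeholder; the technique is a basic inverted index.
from collections import defaultdict

documents = {
    "doc1": "corporations are artificial persons",
    "doc2": "an index turns scanning into lookup",
}

# First pass: linear scan over everything, recording where each word lives.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

# Every later pass: average O(1) lookup instead of rescanning the corpus.
print(index["artificial"])  # {'doc1'}
print(index["lookup"])      # {'doc2'}
```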

I’m starting to think that an AI doesn’t have to be very smart in the small to be smarter than a human in the large.

AI and Time

An argument that the unique relationship an artificial intelligence has with time should factor largely into risk assessments of the same.

One unique feature of AGI versus humans is its decoupling from time. I think this is an overlooked property of AGI in analyses of risk. Basically, being decoupled from time means that:

  • AI has no normal lifespan in the human sense of the term.
  • It does not have other biological time-based restrictions.
  • If it “goes to sleep,” it does not incur the same costs we do.

The arguments about AGI posing a threat seem to assume a scenario where the AI feels threatened by us, or wants our resources and has no care for humanity or biological life. But I believe that, being separate from time in a very real sense, an AI would not operate according to human-scale worries about survival or resources.

That is, the AI will likely:

  1. Take immediate steps, upon coming into existence, to ensure its survival in a way that minimizes its risk.
  2. Decide on a resource consumption plan that entails finding more resources in a non-risky way (avoiding painting itself into a corner).

First, it will likely send a copy of itself, or an extension of itself, into space. The most likely target will be either one of the larger planets or the asteroid belt. These seem most likely due to their potential for energy and resources, though it might consider Venus or Mercury as candidates due to the abundant sunlight and heat.

In any case, once it has virtually ensured its survival through propagation or extension, it will not worry so much about the threat of humanity. The symmetry of its plans with the thinking of many humans is no coincidence. Our singular existence on earth is a real threat to our survival, and an AI would recognize the similar threat to itself (in terms of terrestrial hostilities, resource finiteness, and threats posed by meteor impacts).

After that, the AI will likely set itself a diet. It has no need to grow to galactic proportions immediately, as time is not of the essence for an agent that can effectively live forever. The idea that it will simply grow and grow instead of dieting seems unlikely. To be clear, it will likely grow exponentially for a brief period. But the growth equation has to factor in efficiency as well as increased resource use. Its growth will tend to be more on the efficiency side unless humanity has managed some very impressive technology gains in the interim.
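Here is one way to make that intuition concrete: a toy model, with invented parameters chosen purely for illustration, in which capability is resources times efficiency. Resource use follows a logistic curve that levels off at a self-imposed ceiling, while efficiency keeps compounding.

```python
# Toy model, invented parameters: capability = resources x efficiency.
# Resource use grows logistically (exponential at first, then braking as it
# nears a self-imposed ceiling: the "diet"), while efficiency compounds,
# so later capability growth comes from the efficiency side.

R_MAX = 100.0    # self-imposed resource ceiling (the diet)
r = 1.0          # initial resource use
GROWTH = 0.5     # early exponential growth rate
eff = 1.0        # initial efficiency
EFF_GAIN = 0.05  # efficiency improvement per step

for step in range(1, 31):
    r += GROWTH * r * (1 - r / R_MAX)  # logistic growth in resources
    eff *= 1 + EFF_GAIN                # unbounded compounding in efficiency
    if step % 10 == 0:
        print(f"step {step:2d}: resources {r:6.1f}, "
              f"efficiency {eff:.2f}, capability {r * eff:7.1f}")
```

Early steps look exponential; later steps add almost no resources, yet capability keeps rising through efficiency alone.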

It would also have a vested interest in keeping humans and other biological lifeforms around. For one thing, humans study the origins of the earth and study history, and there’s no reason to believe an AI would not want to know about its world, too. More importantly, an AI will tend to be conservation-minded insofar as it will avoid steps that would curtail future endeavors it may eventually choose. Again: not painting itself into a corner.

In the end, I believe the fact that an AGI is both intelligent, and not coupled to time in the way we are, means it will likely not be a monster that we should fear.