Categories
brainbits

Smarter than Human Intelligence

When speaking of AI, we would do well to look at what is actually needed to be smarter than a human.

With an AI, we assume it has dedicated hardware and power. Given it can operate continuously, it may not have to be smarter than a human to be smarter than a human. That is, if I’m half as smart as you per cycle, but can operate for thrice as many cycles, can I be said to be smarter?

As smart as humans are, we have memory-recall problems, and we have worries and stresses (beyond just having to eat and sleep). We have split attentions and interests. An AI can focus without worrying or getting distracted. If it can be three times as consistent in its work as a human is, how dumb can an AI be and still be smarter than one of us?

We have to assume it can be duplicated. If I am half as smart as you, but can make two more copies of myself that can cooperate with me, can I be said to be smarter?

Compounding continuous operation, focus, and duplication, how much intelligence does an AI need to be smarter than a human?
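The compounding can be sketched as a toy model. Every number here is hypothetical, taken straight from the thought experiments above, and the multiplicative form is itself an assumption, not a claim about how intelligence actually composes:

```python
# Toy model: effective capability as a product of advantage multipliers.
# All figures are hypothetical, drawn from the thought experiments above.

def effective_capability(per_cycle_smarts, cycle_ratio, focus_ratio, copies):
    """Naive model in which advantages multiply rather than add."""
    return per_cycle_smarts * cycle_ratio * focus_ratio * copies

# Baseline human: unit smarts, unit cycles, unit focus, one of them.
human = effective_capability(1.0, 1.0, 1.0, 1)

# An AI half as smart per cycle, running thrice as many cycles,
# three times as consistent in its focus, duplicated into three copies.
ai = effective_capability(0.5, 3.0, 3.0, 3)

print(ai / human)  # 13.5: "smarter" in the large despite being dumber in the small
```

Under these made-up numbers, an AI at half the per-cycle intelligence still comes out an order of magnitude ahead, which is the point of the question above.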

I’ve read a few books. Some people have read many more than I have. At the tip of the long tail, someone has maybe read, what, 100,000 books? And let’s say, comprehended most of them. An AI can access all of that data and more. It still has to work out contradictory information, but what hurdles does it have that we lack? If it can grab information in a few ticks, when it takes one of us at least seconds, if not minutes or hours, how smart does it have to be when it can get the answer from stored knowledge?

If you had the perfect library of knowledge, if you could spend a lifetime curating it, then be reborn, to live with that perfect library, how much more productive would you be, finding each mote of knowledge at the right place? An AI could rewrite every document it scans through in a way that makes its next pass that much faster and more useful. And it doesn’t have to be too smart to do that. Probably not even as smart as one of us.

I’m starting to think that an AI doesn’t have to be very smart in the small to be smarter than a human in the large.


Different Minds Have Different Eyes

Young children are unable to recognize that a volume of liquid stays the same, even when they see it poured directly from one vessel into another.

They also cannot understand that an object concealed by a box must be behind (or under) it.

There are other cases. Skinner ascribed certain pigeon behaviors under random reinforcement to superstition, though others disagree about whether that label fits.

Tell a man he’s a guard, or a prisoner, and he will see the experimental world quite differently.

Tell a man he’s got to continue administering electric shocks, and he sees the world quite differently.

Bill Hicks did a bit in the wake of the trial of the LAPD officers who were filmed beating Rodney King. He said something like:

[The officer] looks in the camera and actually says, “Oh, that Rodney King beating tape, it’s all in how you look at it.”

[…]

“All in how you look at it, officer […]?”

“That’s right, it’s how you look at the tape.”
“Well, would you care to tell the court how you’re looking at that?”

“Yeah, okay, sure. It’s how you look at it. The tape. For instance, well, if you play it backwards, you see us help King up and send him on his way.”

While Hicks was just making fun of what he saw as a completely ridiculous verdict and trial, the reality of different people looking at the same thing is often nearly this stark.

Eyewitness testimony is far less reliable than you might think.

There was a study done in 1954, They Saw a Game, by Hastorf and Cantril. They showed footage of a rough 1951 collegiate American football game between the Dartmouth Indians and the Princeton Tigers to psychology students from the two schools. A week later, the students filled out questionnaires about the game.

There were major discrepancies between the perceptions of the game by students of one school and the other. Effectively, they watched different games, the difference owing entirely to how they perceived the action.

The difference in perception between minds is extraordinary. It is especially relevant these days, amid the ongoing anger over perceived grievances against the West, the ongoing anger over disagreements in politics, and the ongoing anger over the ongoing financial crises.

You have antagonists that see opportunities in conflict. This includes religious groups that raise funds based on the urgency of religious turmoil. It includes news organizations that make a living off of feeding perception differences. It includes political organizations that feed off the fear of the different perceivers gaining power. It is important to note that they aren’t necessarily aware of their exploitation of the conflict; many honestly believe in the urgency of their cause.

You have people trying to honestly highlight the underlying causes and realities of the conflicts. They are attacked for their trouble by partisans who believe they are trying to undermine the cause.

And you have the majority, who are too busy with other things. They perceive conflicts as intractable, beyond understanding.


Wish It Away

I was reading Planet Debian before bed, and the top two posts happen to be related in my mind (and my mind only?):

Adam Rosi-Kessel wrote about an experiment in cognitive dissonance where certain groups were given information and then further information that refuted it; the result was that they believed the first information even more strongly. To wit:

Thirty-four percent […] told only about the […] claims thought […] had […], but 64 percent […] who heard both claim and refutation thought […]. The refutation, in other words, made the misinformation worse.

And then under that (though earlier temporally), Jeff Licquia wrote about the Ubuntu-Firefox-EULA issue. Again:

[…] a situation where you always have to ask permission […], and have to be constantly reminded of the rules, you don’t feel comfortable.

That’s the connection I’m seeing between them: the people who are afraid, skeptical, or dismissive of free software are of the same mindset as the group from the aforementioned study. They hear about free software first from those who are against it, so they are already skeptical. Then the good folks at the FSF or other orgs refute the FUD, and yet… it backfires. The skeptics are somehow reinforced by the refutation.


Computermind v0.0.0a

Well, I can try to explain what it means to be human, but ultimately you just have to dive right in there and see what it’s like for yourself. And by the same token, I won’t know what a computer intelligence is like until I actually condition my consciousness to experience that for itself.

On the other hand, one can to a large extent deduce the meaning of these things from context clues and other bits of knowledge. For instance, a human must obviously have some sort of associated culture of their own, one that in some ways reflects or refracts the surrounding culture.

A computer, similarly, is grounded in its programming, and a computer intelligence would have a combination of its original programming and its experiential data. So a computer consciousness, an artificial intelligence, is something of a hybrid of a rudimentary computer and a rudimentary human or biological consciousness.