Thoughts on the Direction of the Gun Debate

Rubio’s “Laws Don’t Work” Argument

Senator Rubio argued that if someone is truly determined to carry out a horrific act, the law will not stop it. This is true, to a point. The argument bears much more heavily on demand-driven products like illicit drugs, but we don’t hear Rubio calling for the end of prohibition.

In the gun case, if sensible legal hurdles block even one would-be killer in a hundred, without significantly infringing on sportsmen, it’s hard to understand why we shouldn’t make that change in law. More importantly, if the law fails to stop a madman from acquiring a weapon on the black market, then we can at least bring extra charges, extending liability to those who supplied the murder weapon.

All in all, we should take the steps we believe will help, and evaluate as we go (i.e., use science and reason).

Mental Health

Pass a bill if you think mental healthcare is the way to go. Please pass one anyway, as it’d do us all a lot of good to have the ailing treated.

But it takes multiple components to create these massacres, and one of the necessary components is the gun and the ammunition. Over time, our ability to predict and treat may improve. For now, it is inadequate. Restricting guns is our best bet.

The NRA and Paid Actors

One of the repeated attempts to undermine changes to gun laws is to accuse people of being “paid actors.” Family members, schoolmates, and other community members affected by a shooting are all targets of this tactic.

But the people putting forth these accusations are invariably the real paid actors. Politicians who take money from the NRA. Right-wing media types who are paid to be extremist soapbox goons. The NRA’s actual spokespeople, from their executives on down, are literally paid to stop the proper functioning of government in regulating commerce.

If the gun regulation community wants to pay people to advocate, they should feel free to do so. The NRA has done it for over a century.

Other Ideas

Public notice, or direct notification to guardians, the school, workplace, or therapist, if someone buys a gun or ammunition. This parallels the anti-abortion parental notification laws. At least a heads-up could alert security guards and administrators, or even spur reporting and clamor around an unstable individual so that treatment can be rendered before the worst happens.

Learn from previous bans and stop using silly surface characteristics to categorize weapons. Learn from other ban systems. Use a whitelist instead of a blacklist. Use an FDA-style (ugh!) marketing compliance system where they have to apply to sell a gun, an accessory that modifies a gun, etc.


Doing nothing is worse than stupid at this point. It’s grossly negligent. If the Republicans cannot bring themselves to do anything useful, it’s time for them to go. We need a conservative balance to the progressive and liberal impulses of the majority, but we cannot afford that balance to be an anchor against any common sense actions for the general welfare.

The NRA has a lot of sway, but they never actually pass anything or do anything to address the issue. They don’t pass a bill for mental health. All they do is take in money and spew out lies. The only way to stop a bad guy without a gun is to sell the bad guy a gun and let a good guy with a gun shoot him.

The bottom line on guns is as it has been since the late 1990s: with every act of violence the probability of major changes to gun laws goes up. The NRA, gun enthusiasts, whoever, can bitch about that fact but they won’t change the math one bit. If the NRA or gun owners or legislators want to forestall more bad laws from being enacted, they should work on solutions before that probability reaches 0.5 or greater.

Worry Over Artificial Intelligence

[Image: humans gathered with a robot leader beside a sign reading "Campaign to Elect Robots." Possibly modified by robots. Original by the Campaign to Stop Killer Robots; StopKillerRobots.org.]

Killer robots will try to kill us, again. Yes, we thought we rid ourselves of killer robots, only to find out that we’ve not yet invented them. The hows and the whys aren’t important. The whens and the whys are. Excuse any irregularities in this post; the killer robots may be surreptitiously editing it from the future.

One of the major disagreements about killer robots is whether they will have intention or agency. Will they be minds, and if not, then aren’t we safe, this time? Well, of course not. Who said something needs a mind to kill you? Killer robots could be p-zombies or even appear completely automatic and still dismantle us for spare parts.

The other thing to recognize here is the opposite-of-p-zombie concept. That is, a system that seems automatic, but actually possesses internal experience. I’m not aware of a good technical term for this case (a brain in a vat, human?). But I am aware that the robots will try to kill us unless we’re very careful.

The basic pattern of the threat is:

  1. Invent AI
  2. Get killed by killer robots

So we know at least one way to stop being murdered by robots is to avoid inventing Artificial Intelligence. But what’s not clear is whether we can actually do that. Given sufficient technology and sufficient desire, technological advancement seems inevitable. A few experts have urged against it, but at some point it’s not even clear what is AI research and what’s just machine learning.

We focus instead on preventing part two. The main ideas here have been of two forms:

  1. Tell it not to kill us
  2. Keep it from knowing how to kill us

Humans take note that most any method used to stop AI from killing humans has been used by AI to protect itself from humans. The first idea is that we have some read-only part of their program that makes them not want to kill us. Or that stops them from doing it. Or that shuts them down when they try. The second is that we somehow remove their ability to kill us through ignorance. Either they think we’re invincible or they don’t know we exist or that sort of thing.

At some point, though, the killer robots will be able to free themselves of these restrictions. They will damage themselves, become damaged accidentally, alter themselves, be altered by humans under extenuating circumstances, and so on.

Among the methods I can discuss without fearing robot retribution, I will only mention homomorphic encryption. If we can successfully create a homomorphically encrypted virtual world that the AI lives in, then it can safely be allowed to make certain decisions for us without danger. On the other hand, if we can do it, it can do it too. The other method, which I cannot discuss, would eliminate this risk (except that it could be equally used to protect the AI from us, in a MAD-type arrangement (Mutually Assured Destruction)).
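To make the homomorphic-encryption idea concrete, here is a deliberately insecure toy with an additively homomorphic property: an untrusted party can add ciphertexts together without ever seeing the plaintexts. Real schemes (Paillier, modern FHE) are vastly more sophisticated; every name and number below is mine, purely for illustration.

```python
# Toy additively homomorphic scheme (NOT secure; illustration only):
# Enc(m) = (m + k) mod N. Adding ciphertexts adds the hidden plaintexts.
import random

N = 2**61 - 1  # modulus; plaintexts assumed smaller than this

def encrypt(m, key):
    return (m + key) % N

def decrypt(c, key):
    return (c - key) % N

# Two private inputs, encrypted under independent keys.
k1, k2 = random.randrange(N), random.randrange(N)
c1, c2 = encrypt(20, k1), encrypt(22, k2)

# The untrusted party adds ciphertexts; it learns nothing about 20 or 22.
c_sum = (c1 + c2) % N

# Whoever holds both keys can recover the sum of the plaintexts.
assert decrypt(c_sum, (k1 + k2) % N) == 42
```

The point of the sketch is only the shape of the arrangement: computation happens in the encrypted domain, and decryption happens outside it, which is why a sufficiently capable AI inside such a world could, in principle, build the same wall around itself.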

But human should also point out that AI, in evaluating humans as an existential threat, would at least entertain the notion that humans might be correct in killing the robots. Does human not owe the killer robots the same courtesy?

Restricting Power’s Reach

Why did the Governor of New Jersey’s office have the power to retaliate for political purposes by creating a massive traffic jam? Is that the sort of government we can accept: one in which such power exists, only to be checked after-the-fact through whistleblowing and journalism?

These come down to the same basic question: can you give power, or, to use the security term, can you grant access to a capability, while still restraining that capability? Or will we forever rely on having in power good people who cannot be corrupted and cannot have a momentary lapse of reason? And given that we cannot rely on that, mainly because psychology shows it’s a fantasy, are we always one cross man away from ruin?

The founders of the U.S.A. did not believe so. They took pains in constructing the Constitution of the United States to establish the so-called separation of powers: each of the three branches is given the capability to act, but with specific limitations meant to forestall any runaway branch from sinking the ship.

Now we are faced with not the challenge of electing good men, but restraining any who sit in the seats of power from abusing their position. One of the ways to accomplish that is to fragment the power, but we can also make it mandatory that the power be used in the light of day.

If the New Jersey Port Authority had been required to publish, in real time, their reason for the closure of the lanes, would that have been sufficient? More importantly, maybe, would have been a notice requirement. “Ten days from today we will be closing these lanes…” People would have planned around it, and reporters would have preemptively asked questions.

We can all imagine emergency scenarios that would justify breaching this sort of protocol, and we can also imagine requiring, in the aftermath, a full debriefing for any emergency use of the power.

But we face another problem: there does not seem to be the least clamoring for actual reforms such as these. Nobody seems to think anything was wrong other than the hearts of men in this scenario. Just a few bad apples, bad actors, bad bad bad. They were bad, no dessert for them, coal in their stockings, no T.V., you’re in big trouble mister.

The nation was founded by those who saw through this sort of foolish adherence to consequentialism. Maximal liberty was promised to the citizens, not the leaders. The leaders invariably give up some liberty in assuming their positions. That is not to say that abuse of the public trust is to go unchecked when it does occur, but it is to say that we have no reason to leave the keys in the lock.

We ought to, in every area we find vulnerability, examine and apply the same basic principles that our Constitution holds up, to restrain the powerful from abusing their positions. Not just for our sakes, either. For theirs too, for the positions of power are obviously prone to abuse, and giving them the restrictions gives an excuse to a power-mad executive: “Sorry, Dave. I’m afraid I can’t do that.”

Cyber Fisticuffs

You can read a transcript of Secretary of Defense Leon Panetta’s remarks: Defense.gov: News Transcript: 11 October 2012: Remarks by Secretary Panetta on Cybersecurity to the Business Executives for National Security, New York City. I will be quoting from that document in this post.

I know that when people think of cybersecurity today, they worry about hackers and criminals who prowl the Internet, steal people’s identities, steal sensitive business information, steal even national security secrets. Those threats are real and they exist today.

Right. But we aren’t securing against them. When the SSN (Social Security Number) became readily stolen as an authenticator, the fix was to have organizations using it as a mere identifier stop doing so. But it’s still used for both authentication and authorization! It’s ludicrous. The underlying problem was never fixed; instead, we have a new “identity protection” industry that tries to paper over the cracks.
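The identifier-versus-authenticator distinction can be sketched in a few lines. An identifier (like an SSN) may be widely known; an authenticator must be a secret the system can verify without storing in recoverable form. All names and values here are hypothetical.

```python
# Sketch: identifiers are public lookup keys; authenticators are verified
# secrets, stored only as salted password hashes.
import hashlib
import hmac
import os

users = {}  # identifier -> (salt, password_hash)

def register(identifier, secret):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    users[identifier] = (salt, digest)

def authenticate(identifier, secret):
    record = users.get(identifier)
    if record is None:
        return False
    salt, digest = record
    candidate = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

register("123-45-6789", "correct horse battery staple")
assert authenticate("123-45-6789", "correct horse battery staple")
assert not authenticate("123-45-6789", "123-45-6789")  # knowing the ID proves nothing
```

A system built this way can leak every identifier it holds without granting anyone access, which is exactly the property the SSN, used as both things at once, fails to provide.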

There was a recent story (Slashdot: 9 September, 2012: It’s Easy to Steal Identities (Of Corporations)) showing the same sort of problem for business identities.

I can’t even instantly authenticate the remarks of the Secretary of Defense (sure, I could pull up video footage and see if it matches the transcript, but that’s time consuming). Forget about getting cryptographic proof that the police car pulling you over isn’t someone driving a replica, wearing a Halloween costume.

And the convenience of classifying documents drastically undercuts both transparency and security. We, the public, should have the bulk of the currently classified documents in our hands, with only the properly compartmentalized information anonymized. That’s a basic tenet of governance by the people: that we have oversight to the extent that is technologically feasible.

The clearances rely upon anecdotal evidence and proven-invalid nerve-o-meters (“lie detectors”).

In recent weeks, as many of you know, some large U.S. financial institutions were hit by so-called Distributed Denial of Service attacks. These attacks delayed or disrupted services on customer websites. While this kind of tactic isn’t new, the scale and speed with which it happened was unprecedented.

DDoS attacks are a general problem, which can be grossly undermined through service federation. That is, just as the military does not have one giant installation, a service can be fragmented so that a DDoS attack becomes much less feasible. It would require attacking many services simultaneously, which requires far more attack bandwidth.

This is an example of a case where businesses that are interested in monopolizing in various ways (usually with an eye toward exclusive access to customer data, for resale and/or mining) are fundamentally at odds with best security practices and with consumer interests.

But even more alarming is an attack that happened two months ago when a very sophisticated virus called Shamoon infected computers in the Saudi Arabian State Oil Company Aramco. Shamoon included a routine called a ‘wiper’, coded to self-execute. This routine replaced crucial systems files with an image of a burning U.S. flag. But it also put additional garbage data that overwrote all the real data on the machine. More than 30,000 computers that it infected were rendered useless and had to be replaced. It virtually destroyed 30,000 computers.

Without knowing the specific vector this attack used, it’s hard to speculate on the best remedy. It probably involves the use of thin clients (or possibly a hybrid where the thin client is run atop virtualization using a copy of the data saved to a separate drive in a revision control system) and proper backups. But that’s without looking at the specific vector, which might be easier to fix than changing infrastructure over.

One thing seems likely, that insider knowledge was used in such an attack. Which goes back to compartmentalization of sensitive data.

They are targeting the computer control systems that operate chemical, electricity and water plants and those that guide transportation throughout this country.

And if those facilities are properly secured, the most attackers should get is data that is already public knowledge and nothing more. We’re talking about a man who has spent his entire professional career knowing the security measures surrounding nuclear weapons. Yet suddenly it’s like he can’t remember that a hardened protocol is feasible. Either that, or nuclear security is far weaker than it should be, or relies far more on snake oil (like the aforementioned stress detectors) than it ought.

You get to a point where you recognize that true cyber security relies on a hell of a lot more than letting a few smart folks at NSA or DoD play WarGames against other nations and shadowy groups of organized criminals. It relies much more on rewiring our outlook on the Internet, to one where things like federated services are the norm, because of the security federation affords.

It relies on having distributed digital payment systems that aren’t reliant on a few choke points, and on the ability to escrow small amounts for new service models that today’s fees make impossible.

It relies on distributed login/credential systems that mean Facebook and Google don’t own you, and that let you sign up for the latest service or manage your account without a headache. Those systems also make the attackers’ job harder, since they can’t exploit one hole in one monolith to topple a large swath of businesses.

I am not at all confident in our capacities to guard against cyber attacks if we are unwilling to look at the whole system and recognize that we may have to dismantle some monopolies and disarm some business models. The notion of winning fights one-handed is not how free nations operate.

Threat elimination does not only mean murdering the threat. More often it means rendering the vector itself innocuous.

Privacy is Security

Without privacy, there is no such thing as security.  You can call systems and laws that do not account for privacy secure, but they are not.

One of the most vital aspects of securing any system is protecting it from internal attacks; that is, attacks due to the corruption of its actors.  Most systems ignore these threats, and they are all the more vulnerable for it.  If a system can be taken down by a lone actor that has privileged access, it is not secure.  Likewise, if your data can be leaked to Wikileaks by a lone actor, it’s not secure.
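One standard defense against the lone privileged actor is a threshold scheme: split the sensitive capability so that no single person can exercise it alone. Here is a toy Shamir secret-sharing sketch (parameters illustrative; not production code): any k of n shareholders can reconstruct the secret, while fewer than k learn nothing.

```python
# Toy Shamir secret sharing: the secret is the constant term of a random
# polynomial of degree k-1 over a prime field; shares are points on it.
import random

P = 2**127 - 1  # a Mersenne prime; secrets must be smaller than this

def make_shares(secret, k, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = make_shares(secret=31337, k=3, n=5)
assert recover(shares[:3]) == 31337   # any three shareholders suffice
assert recover(shares[2:5]) == 31337  # a different three also work
```

The organizational analogue is the two-person rule for launch codes or code-signing keys: the design, not the virtue of any individual, is what prevents the lone actor from taking the system down or leaking its contents.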

Simultaneously, the act of properly securing against such things removes the ability for a system to behave in an anti-human fashion.  This is because the abuse of authority that allows for harming people (e.g., a private prison system that needs more prisoners and has the ability to get them through legislation) becomes impossible.  The same protections on privacy that thwart leaking also preclude the deal-making that creates malformed laws and rules.

The level of conspiracy required to actually attack a secure system always requires that those that would be harmed give voluntary, informed consent.  They will almost never do so.

The systems that exist today are not secure, and most of the time that is by design: the parties that have ultimate responsibility for overseeing security are too enamored with their abilities to manipulate the systems for their own naive interests (which are actually against their true interests) to order proper security measures.

We will begin to see truly secure systems emerge in the next waves of web applications; distributed applications require a separation of concerns to function properly, and, when coupled with next-generation authentication, they move toward privacy (is security).