
Future Software

A look at some ideas for better computers.

Occasionally you read interesting ideas for the future of software. Here are a few to ponder.

docopt

docopt is essentially a domain-specific language, and that language is the standard Usage output of a command-line program. By simply documenting a program's usage, it aims to handle parsing of the program's invocation for you.

This is a great step forward in reducing duplicated effort in an elegant way. Previously, you might programmatically build or modify an argument handler, and that handler might generate the Usage text. But it was still tedious, because it meant keeping what was inherently a messy bit of code as clean as you could. Messy, because you had to do a lot of thinking about the basic form of your command-line API and then translate it into code.

Instead, with docopt you should be able to purely focus on the form of the Usage.
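
As a minimal sketch of how that looks in practice with the Python docopt package (the toy program, its commands, and its options are my own invented example):

    """Usage:
      hello.py greet <name> [--shout]
      hello.py (-h | --help)

    Options:
      -h --help  Show this help.
      --shout    Print the greeting in upper case.
    """
    from docopt import docopt

    if __name__ == "__main__":
        # docopt parses sys.argv against the Usage text above and returns a
        # dict like {'greet': True, '<name>': 'world', '--shout': False}.
        arguments = docopt(__doc__)
        greeting = "hello, " + arguments["<name>"]
        print(greeting.upper() if arguments["--shout"] else greeting)

The Usage text is the only specification; there is no separate parser setup to keep in sync with it.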

I think this kind of approach, describing intent and letting the tooling do the wiring, will be a common theme in future computing, without relying on more complex systems like SOAP.

Why do we need modules at all?

Erlang.org: Erlang-Questions: 24 May 2011: Joe Armstrong: Why do we need modules at all? was an interesting post I remember reading a few years back. It questions the basic assumption of building software hierarchically and statically.

This is akin to the age-old (at least since the birth of UNIX) desire to have programming be a set of pipes thrown together at will, with the elegance and wisdom of a seasoned hacker behind it.

What it really seems to suggest, though, is a move toward letting computers program themselves. The desire to let evolution play a key and direct role in how a computing system operates. If some given function is replaced by a more efficient one, all software depending on it gets faster. If you have some chain of functions A->B->...->ZZ, and someone writes a better chain, the whole chain can be replaced.

We already get some of this with packages/modules and dynamic linking, so the question (short-/mid-term) is really whether a kind of function-level alternative would replace modules/packages, or work alongside them.
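
As a hypothetical sketch of what function-level publication and replacement could look like, here is a toy registry in Python; the registry, its metadata fields, and the function names are all invented for illustration, not an existing system:

    # Toy registry: functions are published under a contract name with
    # metadata, and callers resolve the contract instead of importing a
    # module. A better implementation of the same contract replaces the
    # old one without touching any caller.
    registry = {}

    def publish(contract, **metadata):
        """Register a function under a contract, keeping the best one seen."""
        def decorator(fn):
            current = registry.get(contract)
            # Prefer the implementation with the lower claimed cost rank.
            if current is None or metadata.get("rank", 99) < current[1].get("rank", 99):
                registry[contract] = (fn, metadata)
            return fn
        return decorator

    def resolve(contract):
        """Fetch the current best implementation of a contract."""
        return registry[contract][0]

    @publish("sort-list", stable=True, rank=2)
    def insertion_sort(xs):
        out = []
        for x in xs:
            i = len(out)
            while i > 0 and out[i - 1] > x:
                i -= 1
            out.insert(i, x)
        return out

    @publish("sort-list", stable=True, rank=1)
    def builtin_sort(xs):
        return sorted(xs)  # quietly replaces insertion_sort for every caller

    print(resolve("sort-list")([3, 1, 2]))  # -> [1, 2, 3]

In that world, the unit of reuse is the contract plus its metadata, rather than the module that happens to contain the code.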

Why do we need GUIs at all?

This is a spin on the previous. It’s the basic question of whether larger elements of the GUI can be easily reused. At present the basic unit of the GUI is the widget. The widget is some semi-small piece of the GUI, like a button or a text input.

If you want a more complex GUI, it’s a matter of assembling widgets together. This is equally true for a native application as it is for a webpage or web application.

Reuse is dependent upon some human brain recognizing the opportunity, and wiring the whole thing up for reuse. That’s opposed to something like the hypothetical functions database above, which in theory would have enough metadata to be able to say, “takes in a list, outputs a sorted list, stable, memory use, complexity.”

But if you can do it with functions (which seems likely enough), you can do it with larger pieces, including GUIs. So you can have a GUI that’s designed separately from the consuming code, and it can say, “presents a list, allows reordering, sorting, adding, deleting,” and you might specify some constraints on the adding/editing portion, so it could select another GUI component (“unique (name, number) pair”, or “unique name, non-unique number pair”, or whatever).
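
As a hypothetical sketch of what that selection could look like, here is a toy matcher in Python; the descriptor fields, capability names, and component names are all made up for illustration:

    # Toy capability descriptors for GUI components. Consuming code states
    # what it needs ("presents a list, allows adding, deleting, ...") and a
    # matcher picks a component whose declared capabilities cover it.
    COMPONENTS = [
        {
            "name": "SimpleListView",
            "capabilities": {"present-list", "sort"},
            "constraints": {},
        },
        {
            "name": "EditableListView",
            "capabilities": {"present-list", "sort", "reorder", "add", "delete"},
            "constraints": {"unique-key": ("name", "number")},
        },
    ]

    def select_component(needed, constraints=None):
        """Return the first component whose capabilities cover the request."""
        for component in COMPONENTS:
            if needed <= component["capabilities"]:
                if not constraints or constraints.items() <= component["constraints"].items():
                    return component["name"]
        return None

    print(select_component({"present-list", "add", "delete"},
                           {"unique-key": ("name", "number")}))
    # -> EditableListView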

The initial versions of this could probably be built on top of docopt, where by the same magic of parsing Usage a GUI could be created for any command-line API. This is a project I've been meaning to work on, but haven't gotten to yet.
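
As a toy sketch of that idea, the snippet below generates a bare-bones tkinter form from a Usage string; the usage text and the deliberately naive option scraping are my own illustration, not part of docopt or any existing tool:

    import re
    import tkinter as tk

    USAGE = """Usage:
      greet.py --name=<name> --count=<count>
    """

    def options_from_usage(usage):
        """Naively pull --option=<value> pairs out of a Usage string."""
        return re.findall(r"--(\w+)=<\w+>", usage)

    def build_form(usage):
        """Create one labeled text entry per option found in the Usage."""
        root = tk.Tk()
        root.title("Generated from Usage")
        entries = {}
        for row, option in enumerate(options_from_usage(usage)):
            tk.Label(root, text=option).grid(row=row, column=0, sticky="w")
            entry = tk.Entry(root)
            entry.grid(row=row, column=1)
            entries[option] = entry
        return root, entries

    if __name__ == "__main__":
        root, entries = build_form(USAGE)
        root.mainloop()

A real version would lean on docopt's own parsing rather than a regular expression, and would map positional arguments, flags, and repeated options to appropriate widgets.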

Common theme

The common idea is reducing duplication of work, increasing productivity, removing friction to create. Making the computer figure out more of the work. Automating more. Instead of having to spend so much time doing analysis of tradeoffs and pondering why something broke when it shouldn’t, we should be building more.

That leads to the last one: moving away from pure-text programming. That doesn’t mean we write it out to binary; the text layer can persist. But it does mean that to some extent we stop doing everything as text. We can already extract interaction diagrams from text, but we currently do the editing in text and only get the diagrams when we want a conceptual view.

It will take some time before we see these last sorts of changes take place. But it’s likely that they will, if for no other reason than the benefits are there. I’d rather write the Usage once and know everything was wired up for me, just like I’d rather not muck about with every last bit of CSS and markup every time I want to add something to an application.

And as fun as it is to hunt down the extra comma and discover that I wrote everything correctly except for that one typo, I'd rather that footgun go away entirely.

Steam on Linux: Half-Life

It’s been 15 years, but native Linux Half-Life is now here.

It was the mid-to-late 1990s. Computers were becoming more popular, and computer games with them. In the early 1990s there was Myst. It was about the story, something like Zork but first-person. In 1996, there was Quake. It was about battling baddies, like its predecessors, Doom and Wolfenstein.

Around that time, Valve Software licensed the Quake engine (the underlying software that created the Quake world on your computer). In 1998 they released Half-Life. It was in many ways a closer marriage of the first-person shooter with the story and puzzle games that came before, maybe a two-to-one ratio of action to story. Lots of action, but a bit more story than before. You had Non-Player Characters (NPCs), characters you don't kill, either neutral or allied with you. You had movable boxes.

It’s a game that paved the way for a lot of the modern games.

Every discussion I saw prior to the announcement came to the conclusion that we probably wouldn’t see this happen. Most of those centered around Counter-Strike 1.6, which uses the same GoldSrc Engine as Half-Life. The feeling was that Valve would focus on their newest titles first, and worry about these oldest games later, if ever.

A few years back, Valve began opening up to the Apple Macintosh systems, and most of their new games made their way over. But never the old ones. With this release, those systems now have these oldest games too.

One wonders why. When the first news of Steam coming to Linux arrived, the reports said their title Left 4 Dead 2 would be the vehicle of experimentation.

When the beta began, it was instead Team Fortress 2. That made enough sense, in that it's free-to-play: they didn't have to give away a game that beta testers might otherwise have bought. Giving the game to a few people in a small, closed beta wouldn't have been costly, but opening it up at large, to a largely untested audience, risks some loss.

Valve is very committed to the Linux platform, especially with the announcement of the forthcoming Steam boxes, basically set-top computers. They want to be as catalog-complete as possible to help drive adoption. They also had the opportunity to hit two platforms at once, which wasn't there when it was only the Apple Macintosh.

Finally, with their flagship game sequels coming, they want people to be able to play the originals. There is a certain aspect of human psychology that values completeness. People want to have read every book, seen every episode. They want every achievement, to have left no stone unturned.

The question now is when we will see the rest of the Valve catalog on Linux. My guess is by summer. They probably don't have as much work to do on the newer games, which have all been ported to the Apple Macintosh. There is some work, yes, but a lot of it will simply replicate previous work. They are likely targeting those releases for when the Steam platform leaves beta.

Other tasks will take longer, including their plans to release their SDKs for Linux. That will mean porting work that hasn’t been done for the Apple Macintosh systems. These will be very welcome, as they will mean both new blood into the mod/mapping/development community and faster compilation of assets.

Firearms, Violence, and Society

Instead of debating guns, we should be discussing society. A short post about that.

Guns make money. According to Statistic Brain: Firearm Industry Statistics, the industry takes in annual revenues of $11 billion. Moreover, prominent media events (including the election of Democrats and acts of violence) drive impulse buying of weapons, due to the threat of new regulations.

Violence makes money, too. We spent over $600 billion in 2010 (Wikipedia: Military budget of the United States), and we have spent over $3 trillion on the actions in Iraq and Afghanistan.

When you add in the money spent on police and private protection, prisons, and the legal system, the numbers grow even further. Add the opportunity costs of all these things, and you're talking about vast amounts of human capital and funding that could propel society far into the future.

It costs us all something, to have these overgrown industries. And in the wake of tragedy our instinct is that it’s not enough. We need more guns, we need more police, we need more security. We need to double down on violence. It’s a loser’s bet, though.

What we need to double down on is science. On societal transformation beyond simply barring or allowing the presence of weapons. We need to recognize that we can and will move past violence (or the world will move past us). It’s only a question of when and how.

We need to have a serious discussion about… guns? Really? We need to have a million serious discussions about society. But it’s always a bait-and-switch. Nobody can be bothered to reimagine society writ large. It’s always, “what can we do about these damn guns but keep everything else the way it is?” Or, “how can the government pay its bills without decreasing services or raising taxes?”

What we call that in Computer Science is an overconstrained problem. Professors like to cite the Kobayashi Maru (Wikipedia: Kobayashi Maru) from the original series of Star Trek. It was a fictional test at Starfleet Academy, a rock-and-a-hard-place proposition: either attempt to rescue the crippled Kobayashi Maru and risk provoking war, or leave it to certain destruction.

On his third attempt, James T. Kirk reprogrammed the simulation to allow a successful outcome. The point being, you shouldn't always accept the initial constraints; don't take a perceived mountain as truly immovable.

And we shouldn’t do that with our society, particularly the leaders. They have aides and colleagues telling them what won’t work, leaving them with a very narrow path to take. They look like utter schmucks, or at least untrained mimes, trying to walk a tightrope down a wide path. They never attempt to engage the people beyond some short-sighted resolution to avenge the deaths of the innocent. Never attempting to avenge the lives of the innocent, who currently want and need a real, functional government.

That is, the people of the Kobayashi Maru, who can still be saved.

It's our choice: succumb to the test constraints, deciding either not to risk saving them or to risk it and face certain death; or take the third option, toss out the constraints, and find some other way. It's plain which path I think is best. What about you?