Categories
design

False Futures in Technology

One of the big selling points for certain companies isn’t the actual product, but the cachet it carries. Some of that cachet is, and will remain, social exclusivity or style. But some products have a futuristic quality.

The problem is that society tends to gorge itself on false futures and avoid any real social progress. Futures where the only thing that changes is that our shoes buckle themselves, or we colonize other planets with our white picket fences.

It can be harder to see the real future, in part because it’s so easy to forget the past. By the time I came around, and probably you too, segregation was gone. That’s a hard thing to know: that if I aged backwards through time instead of forwards, society would have developed segregation by now.

There is still progress on social justice, though, and it doesn’t come with a wall charger.

False futures include digital media with severely restricted markets. An electronic book reader would be a great thing, if the market were there for it: if the publishers competed on price, quality, anything but exclusivity and DRM.

The real future arrives first where industry beckons it, because industry does not feel impeded, as many individuals do, by laws made to prefer industry over people.

Drones are being touted as revolutionizing search and rescue operations. I have no doubt they will. Overall I do see drones and robotics as positives for the future, not only for safety, but for efficiency and versatility as well.

But we need the laws and systems to avoid abuse, and we need to ensure that their use does not numb us to risks we shouldn’t take.

The same can be said for object printers or three-dimensional printers. As they grow larger, and we begin structural printing, we need to empower people with the tools to make sure the structures they build remain safe.

Taken together, within a few decades we may have the ability to build a new city in a matter of weeks without a person on the ground there.

But will that new city be subject to the same limitations we feel in society today? And will its newly relocated denizens still find the future in their mobile device or screen? In their ability to teleconference or have a robot vacuum their carpets?

I hope not. That sort of kick is false future to me. Technology that makes you feel like you’re in the future, instead of fading into the background, seems like a waste.

Literature and music can give you the same kick, without the high price tag. And they do a better job, because a song never crashes. A good story can take you places we will never go, places you are glad to visit but wouldn’t want to live in.

But as long as the device makers and technology companies have limited visions for the future, they will try to sell you a device to make you feel like you’re holding a piece of the future. All you’re really holding is a ticket to a false future.

Categories
software

Future Software

Occasionally you read interesting ideas for the future of software. Here are a few to ponder.

docopt

docopt is built around a domain-specific language: the standard Usage output of a command-line program. Simply by documenting a program’s usage, you get parsing of the program’s invocation for free.

This is a great step forward in reducing duplicated effort in an elegant way. Before, you might programmatically build or modify the argument handler, and that might generate the Usage. But it was still tedious, as it required keeping an inherently messy bit of code as clean as you could: messy because you had to do a lot of thinking about the basic form of your command-line API, and then translate it to code.

Instead, with docopt you should be able to purely focus on the form of the Usage.
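To make the idea concrete, here is a minimal, hypothetical sketch in the spirit of docopt. It is not the real docopt library (which handles a much richer grammar); it only derives handling for one positional argument and one optional flag from the Usage text itself:

```python
import re

def parse_invocation(usage, argv):
    """Hypothetical sketch of the docopt idea: derive argument parsing
    from the Usage text itself. The real docopt library supports far
    more grammar than this toy does."""
    # Grab the tokens after the program name on the Usage: line.
    pattern = usage.split("Usage:", 1)[1].split()[1:]
    args, positionals = {}, []
    for token in pattern:
        if token.startswith("[--"):          # optional flag, e.g. [--shout]
            args[token.strip("[]")] = False
        elif re.fullmatch(r"<\w+>", token):  # positional, e.g. <name>
            positionals.append(token)
            args[token] = None
    for value in argv:
        if value.startswith("--"):
            if value not in args:
                raise SystemExit(usage)      # unknown flag: print Usage and exit
            args[value] = True
        else:
            args[positionals.pop(0)] = value
    return args

usage = "Usage: greet <name> [--shout]"
print(parse_invocation(usage, ["World", "--shout"]))
# {'<name>': 'World', '--shout': True}
```

The point is that the Usage string is the single source of truth: there is no separate parser configuration to keep in sync with the documentation.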

I think this kind of declarative specification will be a common theme for future computing, without relying on more complex systems like SOAP.

Why do we need modules at all?

Erlang.org: Erlang-Questions: 24 May 2011: Joe Armstrong: Why do we need modules at all? was an interesting post I remember reading a few years back. It questions the basic assumption of building software hierarchically and statically.

This is akin to the age-old (at least since the birth of UNIX) desire to have programming be a set of pipes thrown together at will, with the elegance and wisdom of a seasoned hacker behind it.

What it really seems to suggest, though, is a move toward a way to let computers program themselves. The desire to let evolution play a key and direct role in how a computing system operates. If some given function is replaced by a more efficient one, all software depending on it gets faster. If you have some chain of functions A->B->...->ZZ, and someone writes a better chain, the whole thing can be replaced.

We already get some of this with packages/modules and dynamic linking, so the question (short-/mid-term) is really whether a kind of function-level alternative would replace modules/packages, or work alongside them.
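A hypothetical sketch of that function-level alternative: callers look functions up in a shared registry rather than importing them from a module, so publishing a better implementation under the same name speeds up every dependent at once. The names `publish` and `call` are inventions for illustration, not any real system's API:

```python
# Hypothetical function-level registry: callers resolve functions by
# name at call time, so a better implementation published under the
# same name transparently replaces the old one for all dependents.
registry = {}

def publish(name, fn):
    registry[name] = fn

def call(name, *args):
    return registry[name](*args)

def slow_sort(xs):
    # Deliberately naive: repeatedly extract the minimum.
    xs, out = list(xs), []
    while xs:
        out.append(xs.pop(xs.index(min(xs))))
    return out

publish("sort", slow_sort)
print(call("sort", [3, 1, 2]))   # [1, 2, 3]

# Someone publishes a better "sort"; every caller gets it, unchanged.
publish("sort", sorted)
print(call("sort", [3, 1, 2]))   # [1, 2, 3]
```

A real version would need the metadata described below (inputs, outputs, guarantees like stability) so the system could verify that a replacement is actually compatible before swapping it in.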

Why do we need GUIs at all?

This is a spin on the previous. It’s the basic question of whether larger elements of the GUI can be easily reused. At present the basic unit of the GUI is the widget. The widget is some semi-small piece of the GUI, like a button or a text input.

If you want a more complex GUI, it’s a matter of assembling widgets together. This is equally true for a native application as it is for a webpage or web application.

Reuse is dependent upon some human brain recognizing the opportunity, and wiring the whole thing up for reuse. That’s opposed to something like the hypothetical functions database above, which in theory would have enough metadata to be able to say, “takes in a list, outputs a sorted list, stable, memory use, complexity.”

But if you can do it with functions (which seems likely enough), you can do it with larger pieces, including GUIs. So you can have a GUI that’s designed separately from the consuming code, and it can say, “presents a list, allows reordering, sorting, adding, deleting,” and you might specify some constraints on the adding/editing portion, so it could select another GUI component (“unique (name, number) pair”, or “unique name, non-unique number pair”, or whatever).
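A hypothetical sketch of that selection step: each GUI component declares its capabilities as metadata, and the consuming code states constraints rather than wiring up a specific widget. The component names and capability strings here are invented for illustration:

```python
# Hypothetical component registry: GUI pieces described by capability
# metadata, selected by the constraints a consumer declares rather
# than hand-wired by a human.
components = [
    {"name": "PlainListView",
     "capabilities": {"presents-list"}},
    {"name": "EditableListView",
     "capabilities": {"presents-list", "add", "delete", "reorder", "sort"}},
]

def select_component(required):
    """Return the first component whose capabilities cover the request."""
    for comp in components:
        if required <= comp["capabilities"]:
            return comp["name"]
    raise LookupError("no component satisfies %r" % sorted(required))

print(select_component({"presents-list", "reorder", "sort"}))
# EditableListView
```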

The initial versions of this could probably be built on docopt, where by the same magic of parsing Usage a GUI could be created for any command-line API. This is a project I’ve been meaning to work on, but haven’t yet.
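As a hypothetical sketch of that Usage-to-GUI step (assuming the same toy grammar as before, nothing docopt actually ships): map each positional argument to a text field and each optional flag to a checkbox, yielding a form description a GUI toolkit could render.

```python
import re

def gui_from_usage(usage):
    """Hypothetical sketch: derive a simple form description from a
    Usage line. Positionals become text fields; optional flags become
    checkboxes."""
    fields = []
    for token in usage.split("Usage:", 1)[1].split()[1:]:
        if re.fullmatch(r"<\w+>", token):
            fields.append({"widget": "text", "label": token.strip("<>")})
        elif token.startswith("[--"):
            fields.append({"widget": "checkbox",
                           "label": token.strip("[]").lstrip("-")})
    return fields

print(gui_from_usage("Usage: greet <name> [--shout]"))
# [{'widget': 'text', 'label': 'name'}, {'widget': 'checkbox', 'label': 'shout'}]
```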

Common theme

The common idea is reducing duplication of work, increasing productivity, removing friction to create. Making the computer figure out more of the work. Automating more. Instead of having to spend so much time doing analysis of tradeoffs and pondering why something broke when it shouldn’t, we should be building more.

That leads to the last one: moving away from pure-text programming. That doesn’t mean we write it out to binary; the text layer can persist. But it does mean that to some extent we stop doing everything as text. We can already extract interaction diagrams from text, but we currently do the editing in text and only get the diagrams when we want a conceptual view.

It will take some time before we see these last sorts of changes take place. But it’s likely that they will, if for no other reason than the benefits are there. I’d rather write the Usage once and know everything was wired up for me, just like I’d rather not muck about with every last bit of CSS and markup every time I want to add something to an application.

And as fun as it is to hunt down the extra comma and discover that I wrote everything correctly except for that typo, I’d rather that footgun go away entirely.

Categories
science

2020: When Computers Will Look “Over There”

Wherever you are right now, go somewhere else in your head. If you’re in your home, think about a different room. If you’re not at home, think about a room in your home. You can probably look around the room a bit, remembering all the parts of it. Now you can probably go to some shelf or drawer and look around there.

You don’t have a perfect replica, but it’s good enough that you can remember, right now, where some remote object is. And if you went there right now, you would find that object where your brain said it would be.

This is a highly developed skill of the brain. So developed, in fact, that quite a few people use a mnemonic device called the Memory Palace (Wikipedia: Method of loci) to memorize information rapidly and recall it with ease. But our computers currently rely upon textual bits to demarcate things like files and folders.

If we’re forced to give the location for something, we tend to speak relatively: “down the hall, third door on the left, the big bookshelf in the corner with the taco bookends, second shelf from the top.” We don’t say, “the room called var, the piece of furniture called games, the shelf called board games, the object called dominoes.” (Okay, we do say roughly the last part.)

That’s all going to change. Let’s say you have a next-generation, non-invasive Brain-computer interface (Wikipedia: Brain-computer interface). Suddenly the computer can listen for you to “say” something like, “open that thing over there.” It can store a mapping for what “over there” means, and it can use your reference to it to trigger the mapping and get the data you want, without you needing to remember the location in the computer’s terms.

This will allow the computer to manage more of the problems that are currently shared between computer and user. And it will make computers easier to use.

But it will do some other things, too. It has the potential to overturn education, by having the computer help in your learning in a way that only the best teachers currently do.

Take, for example Kickstarter: Zombie-Based Learning: Geography taught in Zombie Apocalypse by David Hunter. This looks to be a great example of traditional teaching. The teacher uses their creativity to generate a compelling narrative for the material, setting the proper pacing, activities, etc. so that the kids all learn and retain the knowledge.

With computers, and a BCI (Brain-computer interface), the computer can help people to store memories in ways that will maximize their recall. This will initially happen in some rudimentary ways, like flashing pictures that are composed from a variety of images thought to be uncanny enough to help in memory. For example, a clown in a fish bowl, next to a fish in scuba gear.

But that may give way to better schemes, where the computer has image representations of some of the places it knows you remember things, and it could suggest you store new memories in those places. It could also quiz you by flashing a location and asking you to recall what you stored there.

Passwords might consist of a challenge/response, where the computer flashes an image and you have to recall another image at some place in the same sequence.

Brain-computer interfaces represent a major leap forward in what computers will be able to do for humanity. They are on the horizon, and they will undoubtedly mark an epoch. Just as there is the world before the Internet and after, there will be the world before widespread BCI and after.