
2020: When Computers Will Look “Over There”

Wherever you are right now, go somewhere else in your head. In some near-tomorrow you will be able to do the same with your computer.

Wherever you are right now, go somewhere else in your head. If you’re in your home, think about a different room. If you’re not at home, think about a room in your home. You can probably look around the room a bit, remembering all the parts of it. Now you can probably go to some shelf or drawer and look around there.

You don’t have a perfect replica, but it’s good enough that you can remember, right now, where some remote object is. And if you went there right now, you would find that object where your brain said it would be.

This is a highly developed skill of the brain. So developed, in fact, that quite a few people use a mnemonic device called the Memory Palace (Wikipedia: Method of loci) to memorize information rapidly and recall it with ease. But our computers currently rely on textual labels to demarcate things like files and folders.

If we’re forced to give the location for something, we tend to speak relatively: “down the hall, third door on the left, the big bookshelf in the corner with the taco bookends, second shelf from the top.” We don’t say, “the room called var, the piece of furniture called games, the shelf called board games, the object called dominoes.” (Okay, we do say roughly the last part.)

That’s all going to change. Let’s say you have a next-generation, non-invasive Brain-computer interface (Wikipedia: Brain-computer interface). Suddenly the computer can listen for you to “say” something like, “open that thing over there.” It can store a mapping for what “over there” means, and it can use your reference to it to trigger the mapping and get the data you want, without you needing to remember the location in the computer’s terms.
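The mapping described above can be sketched in ordinary code. This is a hypothetical illustration, not a real BCI API: the string cues below stand in for whatever recognized spatial signal the interface would actually produce, and all names are invented.

```python
# Hypothetical sketch: a spatial-reference store a BCI front end might
# consult. A string cue stands in for the recognized "over there" signal.

class SpatialStore:
    """Map a remembered place to a location in the computer's terms."""

    def __init__(self):
        self._places = {}

    def remember(self, cue, path):
        # The computer stores the mapping when you first put something
        # "over there"; you never see the path again unless you ask.
        self._places[cue] = path

    def resolve(self, cue):
        # "Open that thing over there" -> the location the computer stored.
        return self._places.get(cue)


store = SpatialStore()
store.remember("shelf-by-the-window", "/var/games/board-games/dominoes")
print(store.resolve("shelf-by-the-window"))
```

The point of the sketch is that the textual location still exists, but only the computer has to remember it; you keep thinking in terms of rooms and shelves.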

This will allow the computer to manage more of the problems that are currently shared between computer and user. And it will make computers easier to use.

But it will do some other things, too. It has the potential to overturn education by having the computer help you learn in a way that only the best teachers currently do.

Take, for example, Kickstarter: Zombie-Based Learning: Geography taught in Zombie Apocalypse by David Hunter. This looks to be a great example of traditional teaching: the teacher uses their creativity to generate a compelling narrative for the material, setting the proper pacing, activities, etc., so that the kids all learn and retain the knowledge.

With computers, and a BCI (Brain-computer interface), the computer can help people to store memories in ways that will maximize their recall. This will initially happen in some rudimentary ways, like flashing pictures that are composed from a variety of images thought to be uncanny enough to help in memory. For example, a clown in a fish bowl, next to a fish in scuba gear.

But that may give way to better schemes where the computer has image representations of some of the places it knows you remember things, and it could suggest you add the memories in those places. It could also quiz you by flashing a location and asking you to recall what you stored there.

Passwords might consist of a challenge/response, where the computer flashes an image and you have to recall another image at some place in the same sequence.
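One minimal version of that challenge/response idea: the computer flashes one image from a sequence you have memorized, and you must answer with the image that follows it. This is a speculative sketch, with invented image names standing in for the flashed pictures.

```python
# Hypothetical image challenge/response: accept only if `response` is
# the image that comes right after `challenge` in the memorized sequence.

SEQUENCE = ["clown-in-fishbowl", "fish-in-scuba-gear", "taco-bookends"]

def challenge_response(sequence, challenge, response):
    if challenge not in sequence:
        return False
    i = sequence.index(challenge)
    # The challenge must not be the last image, and the response must
    # be its immediate successor.
    return i + 1 < len(sequence) and sequence[i + 1] == response


print(challenge_response(SEQUENCE, "clown-in-fishbowl", "fish-in-scuba-gear"))
```

A real scheme would flash the images rather than their names and vary which position is challenged, but the shape of the check is the same.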

Brain-computer interfaces represent a major leap forward in what computers will be able to do for humanity. They are on the horizon, and they will undoubtedly mark an epoch. Just as there is the world before the Internet and after, there will be the world before widespread BCI and after.

Web Monsters and Configuration

Web Applications roll their own interactions. And they don’t necessarily do it in a way that matches the platform or browser norms. The result is an uncanny valley in Web Apps, where they look and act close enough to either web pages or regular applications, but are a sufficiently mashed version of both as to offend.

It’s worth talking about Web Applications. They are the latest greatest grand vision for the Internet Web, and have been for some time now. But they have problems, too.

One of the biggest problems with Web Applications becomes clear if you install a browser add-on that operates just fine on a normal page but fails on these special, modern “application” pages. For example, SourceForge: Pentadactyl is such an add-on. It lets you drive Mozilla-type browsers with interactions familiar to users of the vi and vim editors (it’s actually a fork of the add-on called Vimperator).

It works pretty well, and I was well on my way to adapting my use to it (i.e., relearning browsing using those idioms). Then I tried using it with a few web applications (particularly Google Reader) and had computer-collision déjà vu. This is a common phenomenon for people who use computers enough: you run into the same wall you’ve run into before.

Namely, Web Applications roll their own interactions. And they don’t necessarily do it in a way that matches the platform or browser norms. The result is an uncanny valley in Web Apps, where they look and act close enough to either web pages or regular applications, but are a sufficiently mashed version of both as to offend.

This is especially true when they roll their own widgets, which is unfortunately going to become more common. We have long seen that problem in Flash applications, where authors regularly broke expectations.

It’s not merely a problem with insensitive authors, though. It’s a general problem.

But when you take a step back, it’s not even specific to the web. It’s specific to Human-Computer Interaction.

One of the big gripes about the GNOME desktop is how it limits configurability. You get the same kind of problems with browsers offering different levels of configurability. And anyone who’s tried to use an Apple Macintosh alongside a computer running Microsoft Windows, GNU/Linux, or another OS knows the pain of switching between radically different keymappings.

The problem comes down to being able to interact with a system in an expected way. A lot of that comes down to being able to configure a system to use what you’re accustomed to. And a lot of the problems come from breaking the expectations of that possibility.

Take, for example, the sites that impose a copy notice into anything you copy from their pages. When you select and copy text, they have a script that appends or prepends some text about where you copied it from. That totally breaks the expectation that what you copied will be verbatim. And they don’t offer any simple way to turn it off, meaning you must resort to either manual removal of the addendum or block loading of the script.
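The effect of those copy-notice scripts can be modeled outside the browser. The sketch below is an illustration of the behavior, not any site’s actual script: the page intercepts the copy event and hands the clipboard something other than a verbatim copy of your selection.

```python
# Model of what a copy-notice script does to your clipboard: the text
# you get is the selection plus an unrequested addendum.

def copy_with_notice(selection, source_url):
    # The site appends attribution text, so the clipboard no longer
    # matches what you selected.
    return selection + "\n\nRead more at: " + source_url


copied = copy_with_notice("the text you selected", "http://example.com/article")
# The expectation was `copied == "the text you selected"`; it isn't.
```

Without a switch to turn that behavior off, restoring the verbatim-copy expectation means manually deleting the addendum or blocking the script entirely.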

The answer to this problem (disregarding the fact that a company may desire not to give you the power to use its service or device as you please) is abstracting the interaction and configuration so that you can plug in your preferred environment.

It’s a very difficult problem to manage, of course. We have long seen the error of the web in this regard: pages put on so much makeup you’d think they were running for president. The original web was one of serving documents. The original web grew to allow some minimal styling, to allow images. That grew to the web of today, where you can make a webpage that uses WebGL to run a full-on videogame.

It does add to the versatility of the tool, but at a certain point you start feeling like what was a great knife is now one of those Swiss Army demonstration pieces with 100-odd tools. The glory of computing is that you can still eschew the majority of them and stick with a knife without the unwieldiness. But that’s if you’re building the page or site.

It’s trickier to disentangle the sites you visit from their webs of scripts, extraneous content, etc.

Sure, you can write a configuration conversion tool so that your interaction preferences can be synchronized between devices, but it’s harder to get the developers to agree that there’s an abstract set of preferences that two or more applications can feed from without rolling their own.

Ask any gamer who doesn’t like the default bindings and interactions of games: how many times have they inverted the pitch controls? How many times have they remapped their keys? In every single game.

Steam, the Valve game distribution platform, might actually be a prime ground for building a configuration abstraction. You download a game, it asks Steam what your preferences are, and you don’t have to make modifications except where the game has options that are not in the set that Steam provides.
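The shape of that abstraction can be sketched simply. This is a hypothetical model, not Steam’s actual API: a platform-level store of abstract preferences that each game reads at startup, overriding only the options the shared set doesn’t cover.

```python
# Hypothetical platform-level preference abstraction: games start from
# their own defaults, then take the shared preferences they understand.

PLATFORM_PREFS = {"invert_pitch": True, "key_forward": "w"}

def game_settings(platform_prefs, game_defaults):
    settings = dict(game_defaults)
    # Shared preferences win wherever the game has a matching option;
    # options outside the shared set keep the game's own defaults.
    for key, value in platform_prefs.items():
        if key in settings:
            settings[key] = value
    return settings


defaults = {"invert_pitch": False, "key_forward": "e", "gore_level": "low"}
print(game_settings(PLATFORM_PREFS, defaults))
```

With this in place, the gamer inverts pitch once, at the platform level, and every game that understands the shared set picks it up.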

Anyway, at least some areas are seeing progress. Mozilla’s BrowserID project and OpenID, among other open projects, are attempting to fix one of the most common duplicated configurations: the basic service signup requiring a password for every site.

But that’s just one piece of the larger puzzle, which will eventually be recognized.

There’s another side of the coin, of course. Sometimes learning a new system is good. If every piece of software adapts to you, how can you ever learn a better system? The answer to that objection lies in the very concept of abstraction: you can simply change your interaction, like you can change your clothes.

And it works better when every piece of technology you touch obeys. If you want to learn the metric temperature system, it’s far easier if every source of temperature data gives you the metric once you check or uncheck a box.
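A trivial version of that single-preference idea, as a sketch: one flag decides whether every temperature readout is reported in Celsius, using the standard conversion C = (F − 32) × 5⁄9.

```python
# One preference governs every readout: flip METRIC and all sources of
# temperature data report Celsius instead of Fahrenheit.

METRIC = True

def show_temperature(fahrenheit, metric=METRIC):
    if metric:
        celsius = (fahrenheit - 32) * 5 / 9
        return f"{celsius:.1f} °C"
    return f"{fahrenheit:.1f} °F"


print(show_temperature(212.0))
```

The interesting part is not the arithmetic but where the flag lives: in an abstracted configuration, every application would read the same METRIC preference instead of each keeping its own.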