Categories
analogies

Analogies for Tech: The Web as Houses

One of the historical patterns we see is that a specific field or part of life moves from being the domain of experts to being common knowledge.  That’s been true of reading over time, for example.  It’s been true of automobiles in some countries.  There was a time when only women had babies… okay, you got me there.

But over time there’s an expectation in the computer industry that average people will learn technology to a greater degree, even if not to the same depth as a computer scientist or computer engineer.

I am examining potential analogies for explaining technology of various sorts to laypersons, in the hope that they will grasp the workings of the technology they use every day.  I’ve already given part of the explanation of why, but here’s the other part:

Until you see the cracks in the walls with the sunlight slicing the darkness, and until you see the bubbles rising to the edge of the universe and ask what if it isn’t the edge at all, you have very little reason to jump out of the water or break into the day.

With that I hope to, from time to time, examine potential analogies for bits of technology.

The Web

The metaphor for the web is moving into a house you are building.  In this metaphor, HTML is a set of special boxes.  Some of them, like title, are meant for very particular contents.  You don’t put your china in a box with your hammers.

You have other boxes like html itself, which are there to hold everything you put in them.  You put your china in one box, and your hammers in another, but both of those boxes can fit in a third, bigger box.  That bigger box is actually the truck, in this case, but you might have pallets that hold many smaller boxes, as with something like div.

Then you have CSS, which is a set of tags you attach to the boxes to tell the movers how everything should look and where it should go.  “This is a very dark brown room.”  Or, “all of the windows should be blue, but after you have looked through one, it is purple.”

If that last bit sounds familiar, it’s the style applied to links by the default styles of most web browsers.

That’s right. There are default styles that come with the browser. They are there so that if you don’t specify, there’s a good base to work from.

Now, additional styles let you override those defaults, but there is also some styling implied in the way you pack your boxes.

If you pack several pieces of text together in one box, they will end up together in the house unless the styles applied are very explicit.

You also have peculiar boxes like script, which tell the builders that they contain fixtures or robots that will respond to visitors to the house in some way.  They might be faucets that will, when turned on, create or delete whole rooms.  They might be spy cameras that watch the visitors and tell the owners of the house what they did there.
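To make the boxes concrete, here is a minimal page using the pieces described above.  The text, class-free structure, and specific style rules are made up for illustration; only the element names (html, title, div, script) come from the metaphor itself:

```html
<!DOCTYPE html>
<html> <!-- the truck: everything rides inside it -->
  <head>
    <title>Our New House</title> <!-- a special-purpose box -->
    <style>
      /* tags for the movers: windows are blue, purple once looked through */
      a { color: blue; }
      a:visited { color: purple; }
    </style>
  </head>
  <body>
    <div> <!-- a pallet holding smaller boxes -->
      <p>Some text packed together in one box.</p>
      <a href="https://example.com/">A window to look through</a>
    </div>
    <script>
      // a small robot that responds to visitors
      document.querySelector("a").addEventListener("click", function () {
        console.log("a visitor looked through the window");
      });
    </script>
  </body>
</html>
```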

Extending the metaphor, the creator of the document packs everything up in their boxes with their blueprints and sends them up to a server.  Then you visit the server and it spits out the boxes with the blueprints, which your builder, the browser, assembles.

Some of the documents aren’t made that way.  Increasingly, the houses of the web are made in factories called applications.  Think about a service like Google Search.  They have thousands of computers working to find content all over the internet, and when you search, those computers shove that content into the right boxes with the blueprints and styles and deliver them to you.

Anyway, I guess that’s enough about the web for today. Did this analogy make it clear how the web works?

Categories
hyperweb

Comment on Comments

I’ve been following the Knight-Mozilla Challenges (Drumbeat: Knight Mozilla News Technology Partnership), and the second challenge is about improving online commenting and discussion.

Comments are Normal

In thinking about that challenge, the first thing that occurs to me is that the quality of comments will follow a normal distribution.  Most comments will be of average quality.  Some will suck.  Some will rock.

Many of the solutions that exist today focus on changing the output.  They basically seek to filter the whole set of comments so that the shape of the curve for the visible comments changes.

I question whether that’s the best approach.  Can and should sites seek to improve the quality of the average comment instead?  That is, if they could constrain the input in some way to elicit better comments, would that be better?  If so, what would that look like?

It might take the form of a series of explicit prompts for comments.  Instead of having (only) a general discussion, you would have some particular aspects that you could comment on and discuss.  The idea would be to frame particular discussions around more specific aspects, to avoid the drift that occurs in more general discussions.

You might give users a choice to create a directed discussion topic or participate in an existing one.  You would give only a small group of users the ability to create directed discussions, to avoid an overabundance of them that would simply become the roots of ordinary threads.

Non-comment Participation

If you seek to participate online, you are almost entirely restricted to two basic tasks.  One is commenting directly.  The other is curating the comments of others (via voting, rating, moderation, or linking).

It seems like that misses some opportunities.  My hunch is that there are other participatory means that would enhance commenting, but also some opportunities aside from commenting that could be added.  One possible idea in that vein is paraphrasing.

In visiting a story, I read and comprehend it, and any comment I make will reflect my own interpretation.  One opportunity that is separate from comments but related is paraphrasing or summarizing the original content.  That would give users alternative interpretations to read before commenting.  It might better inform the discussion, provided the summaries were strictly non-commentary.

As above, the idea would be to have only a small fraction of the users provide summaries.

Commonality in These Ideas

Both of the ideas above express a common theme of moving away from general discussion threads.  They focus on subtly shifting the discussion toward less self-directed content.  This isn’t to say that people can’t self-direct their comments, but it’s an extraneous step in many cases.

By moving a small number of users out of the comment pool and into a different layer of the discussion/participation system, you provide an opportunity for different behaviors to emerge.  That harkens back to the initial point I made: shifting the quality of the comments rather than filtering the whole pool.  There are too many users stuck in the same behaviors of commenting, and the web offers an opportunity to focus on diversity of activity.

I’m encouraged that the Knight Foundation and Mozilla are working on enhancing the role of journalism on the web, and I am definitely looking forward to finding out what sort of innovations come from it.

Categories
Firefox

Welcome to… Firefox 6?!

There’s a new Firefox development process (PMO: Mozilla Firefox Development Process (draft)).  Firefox 4 will come out next week, but Mozilla is looking forward to several more releases this year.  A lot of people are whining about browser version number hyperinflation, but that’s not what this is about.  It’s about a better browser.

The reasons for the change are several:

Web Growth

The web is growing as fast as ever.  New technologies are rolling out, and the landscape keeps changing.  This calls for more active browser development than before.  It’s not just about enhancing the workflow you’re used to, but about making the browser fit with the changes to your browsing habits as the web changes.

Better UX

More releases means more refinement of existing browser design and more new design.  As GNOME and Ubuntu prepare to release reimagined desktops, one of the big results is going to be the fallout from the shock of major changes.  Users adjust better to gradual changes in their software, as gradual change moderates the learning curve and increases their opportunities for moments of discovery.

Improved Consistency

One of the major issues with long release cycles is that some features just aren’t ready in time, but the organization is so invested that it wants to hold the release for them.  Other features are ready, but they gather dust for a year or two waiting to ship.  Faster releases mean more features, because there’s less pressure to hold everything for the one big feature that isn’t ready.

Happy Developers

One of the best arguments for faster releases is that community contributors are happier.  The patch they just landed can actually see daylight in a reasonable timeframe.  That means they get more reinforcement from contributing and will do so more often.

Less Work

And the final reason to push for tighter loops is that it’s less work per loop.  It’s the difference between driving under an overpass and through a tunnel.  When you enter a tunnel it feels like time has stopped.  You’re just seeing the same thing over and over.  There’s no feedback.  It can even cause a feeling of despair and remorse for having ever entered the damn thing.  Will it ever end?  This tunnel… it’s eating my soul.  When you have a shorter tunnel, you can see daylight before you’re even halfway in.  It feels good.

I’m very glad to see this change in Mozilla.  It has every sign of making their browser much better, and the whole web will benefit.

Categories
software

Ideas: Packages, Locales

This year I’m going to try to get back to writing a bit more.  One of my ideas of how to do that is to dump random ideas I have from time to time.

Today here are two ideas that seem good in theory but will probably require a bit of work to make real, and both could use a bit more design and polish before I’d consider moving forward with them.

Package System Hooks

This idea comes from my experience with both Ruby and Python on Debian systems, but could certainly apply to other languages as well (Perl comes to mind as an obvious one, but there are likely others).

Background

Debian (and most GNU/Linux and other free operating systems) has a package manager (apt in the case of Debian), and it works great.  Most users can and should use their package manager as the gatekeeper to their system.  Even servers get a lot of benefit here, and most system administrators should be rolling their own packages for the meat of their servers just to make their lives easier.

The disconnect comes when you use high-level languages like Ruby and Python, which have their own repository/package systems (e.g., gems in the case of Ruby).  Some of the libraries available via the languages’ systems are also available as OS packages, but many are not.

The Idea

The idea is that the operating system’s package system can provide a way to include the information from the other package systems.  This would make it much easier to manage all the packages from one source, and it would ease the pain when there’s a system package for a language’s library.

Further, this could even be leveraged for web browser extensions, locales for specific applications, etc.

In the case of the browser, the OS package system would ideally have additional functionality for managing per-user packages (so that extensions could still be registered even if they belong only to the current user).

Having this additional functionality available would probably lead to even better interoperability.  For example, a Chromium extension could be registered, and if the user opened Firefox, Firefox could talk to the package manager and then prompt the user to install the same extension for Firefox.

Additionally, data could eventually see the same sort of management available.  Bookmarks could be considered a “data package” that could be decoupled from the browser.
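As a toy sketch of the unified view this would enable, the program below merges package records from two in-memory tables that stand in for what apt and a language package system would report.  The package names, versions, and the queryPackage function are all illustrative; nothing here talks to a real package manager:

```javascript
// Toy sketch of a unified package view. The records below stand in for
// what apt and a language package system (e.g., RubyGems) would report;
// the data is hard-coded for illustration.
const osPackages = new Map([
  ["ruby", { version: "1.9.2", source: "apt" }],
  ["libxml2", { version: "2.7.8", source: "apt" }],
]);

const gemPackages = new Map([
  ["rails", { version: "3.0.5", source: "gem" }],
  ["nokogiri", { version: "1.4.4", source: "gem" }],
]);

// One query interface over both systems is the heart of the idea:
// the OS package manager answers for everything, regardless of origin.
function queryPackage(name) {
  return osPackages.get(name) || gemPackages.get(name) || null;
}

console.log(queryPackage("rails"));   // found via the hooked-in gem data
console.log(queryPackage("libxml2")); // found via the OS packages
```

The interesting design question is which side owns conflicts, e.g., when a library exists both as a gem and as a system package; a real hook would need a precedence rule there.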

Locales like Google Does Translation

This idea comes from my experience with reading websites with translation tools and localization with my Firefox extension.

Background

One of the triumphs of free software is the ubiquity of localization.  Community-driven software means that users can readily assist in getting high quality translations to their own language out to the rest of the world.

The web itself follows a different model, where third parties like Google offer translation tools.  One of the benefits of the web’s approach is that translation services can provide JavaScript that lets users give translation feedback.  If you visit a page translated by Google, you can click on a translated sentence and offer an improved translation.

The software model pulls the strings out of the code, and these are translated into other languages.  This approach usually yields higher quality than the web translations, but it requires volunteers, and it can be cumbersome for people who don’t want to spend too much time.  (My assumption is that the easier you make it for others to help you, the more likely they are to do so.)

The Idea

The idea is to let the applications themselves have the same kind of “click and write” translation facilities that Google and other translation services inject into the web.  It’s a bit trickier in applications, as the user would have to more explicitly select and translate, but I believe it is technically feasible.  It’s most definitely feasible for Gecko-based applications, and GTK+ applications can probably handle it as well.

The process would have two parts: suggesting translations, which get spooled by a server, and validation by trusted reviewers.  Validation would only require reading two strings and affirming, so it should be easier to enlist helpers there; and because translators would be drawn from the users without any need to volunteer or sign up, suggestions would be more likely to come in.
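The two parts can be sketched roughly as below.  The function names are made up, and the spool is a plain array here where a real system would persist suggestions on a server:

```javascript
// Sketch of the two-part flow: users spool translation suggestions,
// and trusted reviewers later affirm or reject them.
const spool = [];

function suggestTranslation(original, translated, locale) {
  spool.push({ original, translated, locale, validated: false });
}

// A reviewer only has to read two strings and affirm, which is what
// should make it easy to enlist helpers for this step.
function validate(index, approved) {
  if (approved) {
    spool[index].validated = true;
  } else {
    spool.splice(index, 1); // rejected suggestions drop out of the spool
  }
}

suggestTranslation("Open File", "Ouvrir un fichier", "fr");
validate(0, true); // a trusted reviewer affirms the suggestion
```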

I really like this idea, because it seems (to my mind anyway) to be based on a strong separation of duties that will elegantly allow a solution to a problem.

Bonus Idea

This is an offshoot of the same idea, though it actually occurred to me first: transcripts for videos, based on the same sort of behavior.  A user is given a short clip of video and types what they hear, and later other users are given the same clip to review.

Prior Art

Obviously these ideas have some precedents in the real world.  I pointed to the Google translation services, but ReCAPTCHA is another example, as is Wikipedia.  Over time I expect to see more examples emerge that use this same pattern, and it will eventually penetrate the governmental models, with better separation of powers and improved checks and balances.

Thanks for taking time to look at these ideas, and I’ll be happy to hear any criticisms you may have.

Categories
hyperweb

web time

The W3C has various specifications for dates and times, but there seems to be a lack of use and implementation.

There’s just no good excuse for anyone to see “5:00 PST” or the like, given that a browser should recognize time values when present and should be aware of the locale information of the operating system and user.

There’s no reason that today’s lunar eclipse times posted on the Wikipedia entry should include a table of various timezones.

Okay, that statement goes a little too far.  There are exceptions: if you are planning to view the eclipse from a timezone other than your own, or to relay the information to someone in another timezone.  But even then, I believe you should be responsible for the conversion.

So what’s the alternative, everything in UTC/GMT? No.

The alternative is responsible implementations that allow aware browsers to display ALL time values converted to your local time.

In other words you should always expect a time value to be local to your current time locale.

So how does that work?  It’s dead simple and requires only one change: well-formed time values with accompanying tags or markup that designate them as time values.

Given some string which is marked as a time, the browser makes a best-effort parse to understand that string, and then displays in its place whatever format you, the user, prefer for time display.
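That best-effort step can be sketched in JavaScript.  The function name and the specific formatting choices here are illustrative, not a proposed standard:

```javascript
// Sketch: given a string that markup has designated as a time value,
// parse it and re-render it in the reader's preferred locale and zone.
// Returns null when the string can't be understood, so the browser
// would leave the original text alone.
function localizeTime(timeString, locale, timeZone) {
  const parsed = new Date(timeString);
  if (Number.isNaN(parsed.getTime())) {
    return null; // best effort only: unrecognized values stay as-is
  }
  return parsed.toLocaleString(locale, {
    timeZone,
    hour: "2-digit",
    minute: "2-digit",
  });
}

// The author writes one unambiguous value; every reader sees local time.
console.log(localizeTime("2011-03-19T17:00:00Z", "en-US", "America/Los_Angeles"));
```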

Many websites today approximate this by a few methods.  A majority of them probably use JavaScript, while some (if you are registered) have a setting and use the server’s time plus or minus your setting’s offset.

Both of these are hacks.  No one should need to have JavaScript enabled or to log in just to have times displayed “correctly,” and even then the sites display them in the form they want, not in a user-specified, browser-profile format.

It is trivial to do this correctly, yet it’s not done.