
The Less You Code, the Better

Post about reducing code complexity for the good of the web and its developers.

It’s a well-known fact that the less code one has to maintain independently of other projects, the better the project, all things being equal. In some fields, such as cryptography, it’s even considered a liability to roll your own code. But on the web there is a major tendency for every site to have its own front-end, back-end, CSS, and so on.

Some of that goes away with new versions of the HTML specification, as when new widgets like <input type="number"/> obviate the need for different platforms and libraries to create their own versions. But a lot of it doesn’t, because the bigger picture isn’t tackled.

This creates problems, such as the well-documented use of CSS vendor prefixes breaking the mobile web. In this famous case, a ton of mobile sites use WebKit-specific CSS rules, and non-WebKit mobile browsers don’t render those sites as nicely.

There are varied proposals for how to fix that situation. My favorite is for vendor prefixes to have a built-in expiry date after which they do not function in any browser. But even the best proposals are working around the larger problem of websites having too much of their own code.
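To make the expiring-prefix idea concrete, here is a minimal sketch of how a browser’s CSS engine might implement it. Everything here is hypothetical: the sunset dates, the property names, and the function are invented for illustration, not taken from any real proposal text or browser.

```python
from datetime import date
from typing import Optional

# Hypothetical sunset dates for prefixed properties. Once the date passes,
# every browser would drop the prefixed declaration, pushing sites toward
# the standard, unprefixed property.
PREFIX_SUNSET = {
    "-webkit-border-radius": date(2012, 1, 1),
    "-moz-border-radius": date(2012, 1, 1),
}

def effective_property(prop: str, today: date) -> Optional[str]:
    """Return the property a browser should honor, or None once the prefix expires."""
    sunset = PREFIX_SUNSET.get(prop)
    if sunset is None:
        return prop    # unprefixed (or unlisted) property: honor as usual
    if today >= sunset:
        return None    # past the sunset date: the prefixed rule is ignored
    return prop        # still inside its experimental window

print(effective_property("-webkit-border-radius", date(2011, 6, 1)))  # -webkit-border-radius
print(effective_property("-webkit-border-radius", date(2013, 6, 1)))  # None
```

The point of the design is that a prefix stops being a permanent compatibility crutch: sites that never clean up their WebKit-only rules simply break everywhere at once, instead of breaking only in non-WebKit browsers forever.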

The more code, the more problems. The harder it is to create the web, the harder it is to maintain it, and the harder to evolve it.

The more code, the more duplication of effort by countless developers the world over to do the same things.

According to studies, the average page weighs in at about a mebibyte (2^20 bytes, i.e., 2^23 bits), with an average of nearly 100 objects (mostly graphics), and the growth continues.

It’s clear that this entire paradigm for creating programs and sites will have to change at some point. We’re still in an era where humans and raw data touch way too much, rather than having managed data objects that humans can handle more effectively.

Take, for example, writing a style sheet. Oops, you made a typographical error. Your page doesn’t work right, but why? Is it a failure of understanding, a bug in the browser, or some unseen error in your style sheet, your JavaScript, or your markup? Maybe it’s a caching problem.

There are some IDE-type editors that might give you suggestions as you type a selector, but they don’t know exactly what your intention is, so they invariably defer to your mistakes.

A similar problem occurs when you want to recreate one piece of a page in another project. You dig out the markup, the JavaScript, and the styles separately. But then you have to update them all to fit the other project, and that requires omniscient knowledge of the other project’s namespace and behavior.

And this is ignoring the cargo-cult programming phenomenon, which is another by-product of too much code. Too much code in that case means that a novice who might otherwise understand the code will stare at the screen, give up, search for code, copy, and paste. It fails, so maybe they paste a bunch of other things. Finally it works, and they move on, leaving a horrible mess just waiting to fail in some new and interesting way.

The question is how to fix it. Syntactic sugar goes a long way. Python tends to be much better than other languages simply because of its focus on readability. But it can still be turned into a garbled mess with enough ignorance and/or effort and/or haste.

So, I say again, the real solution lies in making data into managed objects. Look at the filesystem abstraction. It’s very rare that someone corrupts their filesystem through normal interaction. Open a file. Copy a file. The user isn’t touching the data. They’re touching an interface that touches the data, just like nobody manipulates the voltages running into an arcade game directly. They touch the controls of the game, which handles the electrical signals to the game.

One good example of a tool that works toward this is Inkscape, the vector editor. You can manipulate the SVG (Scalable Vector Graphics) file directly as XML (eXtensible Markup Language), but you can also let a tool like Inkscape manage the data objects for you.
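The same managed-object idea can be shown in a few lines. Here is a minimal sketch (using Python’s standard-library XML module, with an invented toy SVG document) of the difference between string-munging an SVG file and letting a library own the structure, the way Inkscape does:

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)  # serialize without an ns0: prefix

# A toy SVG document, invented for illustration.
svg_source = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <rect x="10" y="10" width="30" height="30" fill="red"/>
</svg>"""

# The user never touches the raw text. The parser owns the structure,
# and edits go through its interface, so the XML stays well-formed.
root = ET.fromstring(svg_source)
rect = root.find(f"{{{SVG_NS}}}rect")
rect.set("fill", "blue")  # change the rectangle's fill via the interface

print(ET.tostring(root, encoding="unicode"))
```

Editing the file as raw text lets a stray quote or unclosed tag corrupt the document; editing it through the library makes that entire class of error impossible, just as the filesystem interface makes it hard to corrupt a disk by copying a file.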

But in the long term we need to be more willing to say that certain behaviors aren’t available from the default interface, and to create more lite versions of things that can predominate where the complexity isn’t needed and certainly isn’t adding anything.

My favorite example of that is for online banking. I really do not believe that online banking should happen in a browser or on the web. It makes much more sense to have a dedicated protocol with a dedicated client for that. Something that doesn’t expect to be loading all sorts of complex data in a complex manner.

That isn’t to say a bank shouldn’t have a website. Banks should have websites, but they shouldn’t be the means of account access. They can be used for informational and advertising/brand purposes, sure.

Welcome to… Firefox 6?!

Firefox 4 will come out next week, but Mozilla is looking forward to several more releases this year. A lot of people are whining about browser version number hyperinflation, but that’s not what this is about. It’s about a better browser.

There’s a new Firefox development process (PMO: Mozilla Firefox Development Process (draft)).

The reasons for the change are several:

Web Growth

The web is growing as fast as ever.  New technologies are rolling out, and the landscape keeps changing.  This calls for more active browser development than before.  It’s not just about enhancing the workflow you’re used to, but about making the browser fit with the changes to your browsing habits as the web changes.

Better UX

More releases mean more refinement in existing browser design and in new design.  As GNOME and Ubuntu prepare to release reimagined desktops, one of the big results is going to be the fallout from the shock of sweeping change.  Users adjust better to gradual changes in their software, as that moderates the learning curve and increases their ability to have moments of discovery.

Improved Consistency

One of the major issues with long release cycles is that some features just aren’t ready in time, but the organization is so invested that it wants to hold the release for them.  Other features are ready, but they just gather dust for a year or two waiting on the release.  Faster releases mean more features, because there’s less pressure to hold everything for that one big feature that isn’t ready.

Happy Developers

One of the best arguments for faster releases is that community contributors are happier.  The patch they just landed can actually see daylight in a reasonable timeframe.  That means they get more reinforcement from contributing and will do so more often.

Less Work

And the final reason to push for tighter loops is that it’s less work per loop.  It’s the difference between driving under an overpass and through a tunnel.  When you enter a tunnel it feels like time has stopped.  You’re just seeing the same thing over and over.  There’s no feedback.  It can even cause a feeling of despair and remorse for having ever entered the damn thing.  Will it ever end?  This tunnel… it’s eating my soul.  When you have a shorter tunnel, you can see daylight before you’re even halfway in.  It feels good.

I’m very glad to see this change in Mozilla.  It has every sign of making their browser much better, and the whole web will benefit.

Google Chrome: N Browsing Technology

Just a short note about the forthcoming Google Chrome browser (to be launched tomorrow). Looking at the Google Notebook leak, no Linux-y Goodness ™ just yet. 🙁

Looks like tomorrow Google will release their own open source browser called “Google Chrome.”

It features:

  1. Prominent tabbing (the top of the browser is composed of tabs)
  2. Threading (each tab is its own process)
  3. Integrated Googleness (Gears, integrated Google searching, etc.)
  4. V8 JavaScript VM (faster? remains to be seen)
  5. WebKit rendering (a la Safari, Konqueror)

I called it “N Browsing Technology” because of the threading aspect.  It will be interesting to see how they handle that.  The comic they put out admits to higher memory use initially, but it’s not yet clear how small each process will be and how much sharing is possible between the instances (that’s partly OS-dependent anyway).
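The process-per-tab idea can be sketched in a few lines. This is a rough illustration, not Chrome’s actual architecture: the URLs and the crash condition are invented, and real browsers layer sandboxing and shared caches on top of this basic isolation.

```python
import multiprocessing as mp

def render_tab(url):
    """Pretend to render a page; a 'crashy' URL kills only this process."""
    if "crashy" in url:
        raise RuntimeError("renderer for %s crashed" % url)
    print("rendered", url)

if __name__ == "__main__":
    # Each "tab" gets its own OS process, so a crash in one renderer
    # cannot corrupt the memory of the others.
    tabs = ["http://example.com/", "http://example.com/crashy", "http://example.net/"]
    procs = [mp.Process(target=render_tab, args=(u,)) for u in tabs]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    for url, p in zip(tabs, procs):
        # a nonzero exit code marks the crashed tab; the rest finished fine
        print(url, "crashed" if p.exitcode != 0 else "ok")
```

The memory cost the comic admits to comes straight out of this design: each process carries its own copy of the renderer’s working state, and how much of that can be shared between instances is largely up to the operating system.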

It will be interesting to see whether this is a Windows-only release or Windows+Mac or if they’ll actually hit us with a Zero-Day Linux Release ™.

Not holding my breath on that one, but that would be a pleasant surprise.

It will also be interesting to see if they have taken a note from Mozilla and made this extension-friendly or not.

Anyway, it’s exciting even if it doesn’t replace Firefox as my browser of choice.  Certainly will try it out tomorrow (if there’s a Linux version, of course).

Bah.  Some more news seems to suggest the initial release will be Windows-only.  I am not installing Windows, not even to try Google’s browser out.

Update: The other notable thing about this release is that they appear to be using some kind of temporary 404 page that isn’t part of their regular system.  My guess is that their webservers (probably some distant cousin of Apache at this point?) have a redirect layer that lets them point to a static file, and someone threw it together just for the ease of waiting until X:xx o’clock and “flipping the switch” (probably just swapping out a conf somewhere).

Also, I find the choice of name interesting as Mozilla heavily uses the term chrome to refer to its UI.