End of Bookstack, but Looking Forward to Firefox 57

Back in 2007 I was a Firefox user and wrote my extension Bookstack, which is now dying due to the changes coming in Firefox 57. But I am looking forward to the improvements Firefox brings, even though this seems like the end of an era of extensibility in the browser.

Why is Bookstack done?

My own browsing habits have changed since I wrote it. In recent years, I’ve continued to use Bookstack, but more as a speed dial than as the inbox for links it was originally intended to be. I’ve thought about writing a new sidebar to suit my current usage, but for now I’ll see what life without Bookstack is like before I embark on another extension.

There are some users of Bookstack out there, and I’m sorry I won’t be able to support them, but the source is available if anybody wants to take it up. The fact is that under the changes to Firefox, Bookstack would require a full rewrite anyway, and it would lose features in the process. The main pain point would be the UI.

In the early years, Bookstack did most of its own work to build the sidebar, until I had worked in XUL long enough to realize I could piggyback on Firefox itself for a lot of that code, which reduced the maintenance burden on Bookstack considerably. With the change to WebExtensions, that’s no longer the case.

I enjoyed the project while it lasted. Ten years is a good run; it’s time for it to retire.

Why Firefox will still rock

The change that Firefox is making is the first step toward a next-generation browser in terms of speed and memory use. I haven’t tested the 57 beta yet, but it’s purported to be fast. That’s great, and the change to WebExtensions reduces the maintenance burden on Firefox, letting it continue to improve in the years to come.

End of an era

But that change comes with a cost, as my own EOLing of Bookstack shows. The customizability of the browser is being limited. It’s not the Fisher-Price Apocalypse some might fret over (that won’t happen as long as the underlying browsers and protocols have open source roots), but it is limiting.

Browsers are supposed to be agents for the user. They are supposed to do the user’s bidding. Limiting the ease of modifying the agent isn’t great, but other limitations have always thwarted some types of user choice, whether it’s each browser keeping its own data (with some ability to import/export between them), or browser security getting in the way of the user (there’s an inherent clumsiness in trying to interact with iframes in userscripts, for example).

Return of the User

The next act for the web will hopefully be a resurgence of users finding new ways to work around the limitations of browsers and WebExtensions. There are always new threats to the dream of a web that serves users, and Google Chrome has invited a certain amount of complacency among the multitude. With a bit of luck, a resurgent Firefox will help ignite a new generation to work for an open web again.

Inline Grading Politicians

In my effort to diversify my political news reading, I’ve occasionally been seeing articles from conservative sites. Some of them have a pretty neat feature: they tell you, right after an elected politician’s name, whether you should like or hate them, via a site called Conservative Review and a feature called the “Liberty Score.”

Now, political reporting has long included a tag of loyalty (“Jon Smith (R-America)”), but this new-fangled tag shows just how committed to everything conservative an individual is, in the form of a percentage. So, reading an article on a site that uses this, you can see that, for example, Ted Cruz is 97% conservative. They don’t say what the other 3% is, but we’ll just assume it’s bad. Or you can see that Bernie Sanders, at 17%… wait, 16% (did it just change while I was typing this?), is practically an unperson by conservative standards.

They give the letter grade, too, if you hover over it. Sanders gets an F, which is basically a participation trophy. Liberty-lovers are supposed to hate participation trophies, though. But there it is: Sanders (Participation Trophy Recipient) right there by his name, when you hover.

All of this is a sophisticated method for avoiding phrasing like, “Bernie Sanders (pinko) said…” or “Hillary Clinton (infidel) …” That sort of stuff, outright saying what your side thinks of the other, happens, but there is a risk that people will have to read what you write. With the fancy little number tags, which will probably be replaced with signal-strength-style bars soon, they just have to look at that bit. Maybe happy-face, frowny-face. I’m sure they’re focus-grouping it.

Now, I know what you’re thinking: is this the forehead (or back-of-the-hand) stamp that we were warned about by that fancy book with the talk of dragons and God? The mark of the beast? Don’t worry! I am sure there’s some eschatological site currently using similar technology to mark up its texts, and the Liberty Score probably only rates about 10% as a sign of the end of days (they get a participation trophy for their participation trophies).

Point is, this is great for journalism. You’ll soon be able to log on, click a donkey or an elephant, and have all your news done with emoticons. You’ll be given either a rifle mouse cursor (for the conservatives) to shoot the enemy, or a picket mouse cursor (for the liberals) to protest the enemy long enough that they flee.

Maybe they could give the Clinton and Sanders supporters some validation-failed stamps for their latest circling on who isn’t qualified to be president.

On a more honest note, though, boiling the totality of a person down to a number is best left to the financial industry. It has no place in political reporting. So we shouldn’t be surprised to see it being done.

The Closing Web

I’m taking a break from discussing the FDA’s proposed deeming regulations to talk about the now-released FCC proposal for regulating ISPs and Mozilla’s announcement that they will ship EME (Encrypted Media Extensions).

EMEs in Fx

First, what will Firefox include? The W3C’s EME standard for HTML5 video. The standard effectively says that an implementing browser includes a plug, or a mount, for DRM. The browser doesn’t have to include the DRM directly (though it appears a browser vendor could ship it directly).
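
Concretely, the “plug” is a small script-facing API: a page asks the browser for a key system, and the browser hands the actual decryption off to whatever Content Decryption Module is mounted. A rough sketch of its shape, going by the spec drafts (the key system string here is a placeholder, not anything Firefox has committed to):

var video = document.querySelector("video");

// Ask the browser whether a CDM for this key system is mounted.
// "com.example.drm" is a placeholder key system, not a real one.
navigator.requestMediaKeySystemAccess("com.example.drm", [{
    initDataTypes: ["cenc"],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }]
}]).then(function (access) {
    return access.createMediaKeys();       // instantiate the CDM
}).then(function (mediaKeys) {
    return video.setMediaKeys(mediaKeys);  // attach it to the element
});

If no CDM for the requested key system is available, the first promise rejects, and that’s where the “please install this plugin” prompt comes in.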

Think of it like a car: because of car theft, a trade group passes a rule requiring members to include remote-controlled self-destruct mechanisms in their cars. Except they don’t require the car makers to build in the actual explosives. They just have to provide a place to put the explosives and the remote-detonation functionality to blow the car up if someone installs them.

And then let’s say all the fast food drive-thrus decide you can’t buy their food unless you have the self-destruct system enabled. That’s you going to ACME Entertainment to stream a movie and getting the popup that says, “Please install this EME plugin.”

We’ve seen this before, with codecs. Mozilla resisted including H.264 because it’s a proprietary codec that isn’t available for all systems. But other major vendors paid for it and shipped it without blinking, and sites put videos out in H.264. Mozilla did what they felt they could, but eventually began relying on operating system support for H.264.

Mozilla is a large, risk-averse organization. They do not want to see other browsers force them into a less influential position, potentially causing even more harm to the web. So they run the numbers, hold their nose, and compromise when they think a bad path may get them to a better place to fight from tomorrow. In other words, they see the risk of DRM entrenchment as less likely or less harmful than Firefox being left behind by users who increasingly watch video in a browser.

DRM serves no real purpose, and at best represents a gris-gris for parts of the entertainment industry that do not innovate adequately. Valve Software and some other video game creators are just starting to recognize the economic benefits of openness and artistic community. These are promising signs. As the lines between video games and film/television blur, I expect other industries will follow and DRM will become rarer and rarer.

FCC’s NPRM: “Protecting and Promoting the Open Internet”

The actual proposal (FCC: PDF: 15 May 2014: Protecting and Promoting the Open Internet) only contains a few rules:

  • Transparency
  • No Blocking
  • No Commercially Unreasonable Practices

It’s the rules that aren’t yet proposed that have raised the public’s ire. The proposal requests comments on a variety of issues, taking a “we’ll make the rules later” approach. Early in the proposal (p. 3), the FCC acknowledges that two paths seem viable (sec. 706 and Title II) and asks for comments on the best way forward.

Currently the FCC classifies ISPs as information services, and the court that struck down the previous rules said, obiter dictum, that it did not believe section 706 would allow for certain regulations unless the FCC reclassified ISPs. That is not a binding ruling, but it should be taken as weighing against merely trying to shoehorn non-common-carriers into regulations under section 706.

If you read the definitions of both “information service” and “telecommunications service,” I think it’s clear which one fits ISPs. Despite ISPs’ claims that they will refrain from innovation if classified as common carriers, they should be so classified.

If we need “fast lanes,” they can be arranged voluntarily by the information service rather than mandated by an ISP (similar to how you can get expedited shipping from a common carrier). Or the ISPs can negotiate for a new statutory classification that includes, e.g., mandatory progress and innovation, restrictions on operating simultaneously as an ISP, line owner, and media company, etc.

Currently, the only meaningful way forward seems to be for the FCC to classify ISPs as telecommunications services subject to common carrier rules.

Killing Comic Sans

Sure, you could just uninstall the Microsoft core fonts (they are non-free, after all), but they’re nice to have around (I guess?). Or you could just remove Comic Sans itself, but maybe you’ll one day want to use it for good or ill (who knows?). So instead you might turn to fontconfig.

First you might try a substitution rule like:

<alias>
    <family>Comic Sans MS</family>
    <prefer><family>DejaVu Sans</family></prefer>
</alias>

The prefer families specified should (?) be used ahead of the matched font family, even if that family exists. But in testing, that didn’t work for me. Don’t force it; use a bigger hammer.
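
(Incidentally, a quick way to check whether a rule is taking effect at all is fc-match, which prints the file and family fontconfig actually resolves a name to:)

fc-match "Comic Sans MS"

If that still names the Comic Sans file after your rule is in place, the rule isn’t being applied.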

So I switched to a match/edit rule like:

<match target="font">
    <test name="family" compare="eq" qual="any">
        <string>Comic Sans MS</string>
    </test>
    <edit name="family" mode="assign">
        <string>DejaVu Sans</string>
    </edit>
</match>

This worked, but it was too big a hammer for my taste. For example, gedit’s font selector no longer says “Comic Sans MS.” It just says “DejaVu Sans.” What we’re after is substitution of the face, not the whole entry.
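
One housekeeping note, whichever rule you use: it only takes effect inside a complete fontconfig document. A minimal per-user config, assuming the conventional ~/.config/fontconfig/fonts.conf location, looks like:

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
    <!-- the alias or match/edit rules from above go here -->
</fontconfig>

Applications generally pick the change up the next time they start.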

As I’m not in the habit of using Comic Sans by choice, the target of the exercise is the web. Ah, but it’s much easier to replace a font for the web. So we walk away from fontconfig and over to Stylish (or userContent.css in your profile’s chrome directory, if you don’t want an add-on to help).

Now we just need a rule that tells the browser, “replace Comic Sans when you see it.” In comes @font-face. We can use this to define, for the browser, what the meaning of a particular font is:

@font-face {
    font-family: "Comic Sans MS";
    src: local("DejaVu Sans");
}

Great! Well, great-ish. We can’t specify the alias “sans-serif” as the replacement, because it’s an alias rather than a real font that local() can find. That means if you change which font your alias resolves to (in this case, away from DejaVu Sans), you’ll have to update the style rule too.

We have limited options here. You could specify the font-weight (see the sketch below), but that will interfere with the site’s own weighting. The best case is to use a distinctive replacement font. Or just give up (my choice in this case). Defeating Comic Sans is enough; no need to gloat.
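
If you did want to chase the weights rather than give up, the idea would be per-face rules along these lines (the local() name is a guess; check what your system actually calls DejaVu’s bold face):

@font-face {
    font-family: "Comic Sans MS";
    font-weight: bold;
    src: local("DejaVu Sans Bold");
}

You’d need one such rule per weight and style you care about, which is exactly the kind of fiddling that makes giving up attractive.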

What is a Website?

The question calls to mind real-world places: the Grand Canyon, libraries, street corners you know, museums. And institutions, great institutions (in the abstract, anyway), like the U.S. Congress and the U.S. Supreme Court, and grand institutions of learning like M.I.T. and Harvard University.

We have a certain outlook for real-world places that anchor abstract concepts. But on the web we still refer to the greats as mere websites.

Wikipedia is a website, yes. But is it not also one of several behemoths, great beasts of the modern netscape (err, not the company, obviously, though they did loom in their day)? A great institution, with all the signs of the lasting legacy of the Harvards and M.I.T.s and so on.

There is a certain leveling and democracy in Alice’s Blog being on the same footing as a Wikipedia. But at the same time, it seems we should be looking for new names for great Internet-based institutions. We should be able to call Wikipedia a website, but also call it something that evokes its importance and lasting nature.

We have another term, web application. It fits certain sites. But when I think of an application, I think of a shell that provides functionality. I don’t think of the application’s data as the thing it provides. If Wikipedia is an application that provides encyclopedic articles, well, where’s the competing application that relies on the same data set?

And there could be one, don’t get me wrong. You can download Wikipedia’s database and write an application (web-based or platform-based) to pull those articles up (you can also download MediaWiki, the software powering Wikipedia). Others have come up with some innovative ways to, e.g., pull articles over DNS. The main application-like part of Wikipedia is its editing functionality.

So maybe Wikipedia is both a website and a web application. At least in part. But that still doesn’t account for the community behind it. Or that its most essential nature is as a repository of articles.

You could try portal or property or destination after web. Maybe some other term. But I think an important step, one that will eventually happen, is to drop the web. That at some point the articles of Wikipedia will be the headliner, and whatever built-in editing and display they want on the web will be the website. There may be platform-based alternatives (or alternative web applications) to provide the editing and display.

This is already partially true of how Wikipedia and the other Wikimedia Foundation (the organization behind Wikipedia) sites handle things like images. Image files and other media embedded in Wikipedia actually live on Wikimedia Commons and may be reused across language versions and on other Wikimedia Foundation sites.

But that trend can extend to other uses, and once enough uses for a system exist, the web frontend is truly a frontend rather than the raison d’être of the backend. It reminds me of the story behind the GNOME Sudoku application: apparently the author wrote a solver for Sudoku puzzles, and an interface grew up around it. Sometimes that process works in the other direction.