Categories
hyperweb

The Closing Web

Taking a break from discussing the FDA’s proposed deeming regulations to talk about the now-released FCC proposal for regulating ISPs and the announcement by Mozilla that they will ship EME (Encrypted Media Extensions).

EMEs in Fx

First, what will Firefox include? The W3C’s EME standard for HTML5 video. The standard effectively says that an implementing browser provides a plug or mount point for DRM. The browser doesn’t have to include the DRM directly (though it appears a vendor could ship it directly); it just has to provide the place where a DRM module, a Content Decryption Module (CDM), snaps in.
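
From the page’s side, that mount point is a small JavaScript API. Here is a minimal sketch of how a site might probe for it; the key system string and configuration are illustrative, not a recommendation:

    // Ask the browser for a key system (the DRM "mount point").
    // 'com.widevine.alpha' is one common key system; details illustrative.
    const config = [{
      initDataTypes: ['cenc'],
      videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
    }];

    navigator.requestMediaKeySystemAccess('com.widevine.alpha', config)
      .then((access) => access.createMediaKeys())
      .then((mediaKeys) => {
        // Attach the CDM to a <video> element; the page never sees the keys.
        const video = document.querySelector('video') as HTMLVideoElement;
        return video.setMediaKeys(mediaKeys);
      })
      .catch(() => {
        // No CDM available or the user declined: the "please install" case.
        console.log('Key system unavailable');
      });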

Think of it like cars: because of car theft, a trade group passes a rule requiring members to include remote-controlled self-destruct mechanisms in their cars. Except the rule doesn’t require the car makers to build in the actual explosives. They just have to provide a place to put the explosives and the remote-detonation functionality to blow the car up if someone installs them.

And then let’s say that all the fast food drive-thrus decide you can’t buy their food unless you have the self-destruct system enabled. That’s you going to ACME Entertainment to stream a movie and getting the popup that says, “please install this EME plugin.”

We’ve seen this before, with codecs. Mozilla resisted including H.264 because it’s a proprietary codec that isn’t available for all systems. But other major vendors paid for it and shipped it without blinking, and sites put videos out in H.264. Mozilla did what they felt they could, but eventually began relying on operating system support for H.264.

Mozilla is a large, risk-averse organization. They do not want to see other browsers force them into a less influential position, potentially causing even more harm to the web. So they run the numbers, hold their nose, and compromise when they think a bad path today may get them to a better place to fight from tomorrow. In other words, they see the risk of DRM entrenchment as less likely or less harmful than Firefox being left behind by users who increasingly watch video in a browser.

DRM serves no real purpose, and at best represents a gris-gris for parts of the entertainment industry that do not innovate adequately. Valve Software and some other video game creators are just starting to recognize the economic benefits of openness and artistic community. These are promising signs. As the lines between video games and film/television blur, expect other industries to follow and DRM to become rarer and rarer.

FCC’s NPRM: “Protecting and Promoting the Open Internet”

The actual proposal (FCC: PDF: 15 May 2014: Protecting and Promoting the Open Internet) only contains a few rules:

  • Transparency
  • No Blocking
  • No Commercially Unreasonable Practices

It is the rules that aren’t yet proposed that have raised the public’s ire. The proposal requests comments on a variety of issues, taking a “we’ll make the rules later” approach. Early in the proposal (p. 3) the FCC acknowledges that two paths seem viable (sec. 706 and Title II) and asks for comments on the best way forward.

Currently the FCC classifies ISPs as information services, and the court that struck down the previous rules said, obiter dictum, that they did not believe section 706 would allow for certain regulations unless the FCC reclassified ISPs. This is not a binding ruling, but should be taken as weight against merely trying to shoehorn non-common-carriers into regulations under section 706.

If you read the definitions of both “information services” and “telecommunications services,” I think it’s clear which one fits ISPs. Despite ISPs’ claims that they will refrain from innovation if classified as common carriers, they should still be so classified.

If we need “fast lanes,” they can be arranged voluntarily by the information service rather than mandated by the ISP (similar to how you can pay a common carrier for expedited shipping). Or the ISPs can negotiate for a new classification by statute that includes, e.g., mandatory progress and innovation, and restrictions on operating simultaneously as an ISP, line owner, and media company.

Currently, the only meaningful way forward seems to be for the FCC to classify ISPs as telecommunications services subject to common carrier rules.

Categories
biz

Mozilla’s Advantage in Mobile

One of the major technology spaces still up for grabs is mobile. Apple led out with the i-series of mobile devices (iPhone, iPad) running iOS, while Google came back with third-party-manufactured Android devices and their own Google-designed Nexus line. Of course, Microsoft has their devices and their mobile operating system, but they are playing catch-up.

Mozilla has come in late with the FirefoxOS, and without plans for their own hardware. Yet they have a distinct advantage.

One of the frustrating things about new technologies from the big three (Apple, Google, and Microsoft) is lack of integration. Especially if you don’t standardize your technology choices on one of them, but even then.

For example, you can subscribe to various publications or buy certain media from these technology vendors (and others, like Amazon), but you don’t necessarily get equal access from all your platforms. Indeed, some of your platforms may be wholly excluded.

That’s the most common case for me, as a Linux user. There isn’t a native client for accessing media on Linux, and the web offering is usually inferior (for example, the streaming music services). In some cases the web offers no solution at all, mostly for video. A few video providers utilize Adobe Flash, but these require an obsolete library, HAL, to support their copy protection schemes (“DRM”).

But that’s why Mozilla has a strong position: the native web. It lacks some features, but it can gain them. As it develops, it will provide the strongest point for integration between platforms.

Google recently announced their “Play News Stand” application for Android. It’s an application to deliver news to you, and some of the content is purchased. But there’s no web version. There is less incentive than ever for users to buy content that’s only accessible on one device.

Consumers don’t want to switch all their device profiles and operating systems to one vendor simply to gain the marginal benefit of equal access. The economics aren’t there. They don’t get cheaper access. All they get right now is access to one shop per device.

Credit card companies would not be the force they are today if their cards only worked at just one vendor, or even a handful of vendors. True market capitalism requires open markets, and that’s what the web represents, what the web (and any viable replacement for the web) must evolve into.

Mozilla’s road may be rocky in establishing FirefoxOS and its benefits. The web as a platform has much growing up to do (especially in things like having a common user interface for applications developed by different vendors), but it has every sign that it will.

Mozilla is playing the long game here.

Categories
linux

Looking Forward to the Future of Iceweasel

Mozilla Persona

The biggest feature that I really hope takes off is Mozilla Persona, which will bring the ability to replace all the “Login with Facebook” buttons, OpenID, and remembering a million passwords with a system that allows you to have multiple managed identities with your choice of identity provider.

In many ways BrowserID is an evolution of OpenID, but as it gets built into the browser, it should bring an easier adoption curve with it. This can’t happen soon enough, with more and more major sites being cracked and their user data strewn across the web.
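
To give a feel for why adoption should be easier, here is a minimal sketch of the site-side navigator.id API; the endpoint paths are invented for illustration, and the shim script is only needed until browsers build the API in:

    // Persona/BrowserID sketch (assumes https://login.persona.org/include.js
    // as a shim until navigator.id ships natively in the browser).
    const id = (navigator as any).id;

    id.watch({
      loggedInUser: null, // the email your site currently believes is logged in
      onlogin: (assertion: string) => {
        // Hand the assertion to your server, which verifies it against
        // your site's audience (e.g., via a verification service).
        fetch('/auth/login', { method: 'POST', body: assertion });
      },
      onlogout: () => {
        fetch('/auth/logout', { method: 'POST' });
      },
    });

    // A single login button replaces the wall of third-party buttons.
    document.querySelector('#login')!.addEventListener('click', () => {
      id.request();
    });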

Australis

The visual refresh of the browser (see mockups: Mozilla: shorlander: Australis Design Specs for Linux) is going to be great. My favorite part of this is the non-active tabs having a low-volume, muted appearance. This gives a much nicer feel and the impression that they are truly in the background; your active tab is what everything below it is about.

That will be improved over time as other features of the browser enhance that contextual choice.

GCLI — the Graphical Command Line

This is a developer tool that gives you quick access to commands so that you can try things out while developing on the web platform. It should allow for things like color manipulation, screenshot creation (great for documentation), and other tasks. Documentation is at MDN: Tools: GCLI, which leads me to the final thing about Mozilla I’m really looking forward to:
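
For a taste, here are a couple of commands as I recall them from the developer toolbar; treat the exact names, flags, and the file name as assumptions rather than gospel:

    screenshot fullpage-shot.png --fullpage
    inspect #main-nav

The first captures the entire page (not just the viewport) to a file; the second drops you into the inspector at the first element matching the selector.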

MDN Kuma Switch

This isn’t really in the browser itself, but it’s just as important: the Mozilla Developer Network (MDN) is moving to a different platform, which will allow it to boldly go where no wiki has gone before. I use MDN a lot for learning about the browser and the web, and I’ve even made some minor edits to it before. But the current (soon-to-be-old) platform has what I consider a clunky interface for editing articles, which has likely deterred some people from contributing.

The new MDN is based on the same codebase as the awesome Mozilla Support (SUMO), which means that further progress can be shared between both sites.

Really, some great work is afoot. I look forward to seeing what’s next.

Categories
hyperweb

Badges and the Social Fabric

Mozilla started a project called Open Badges; they propose to develop something of a cross between a human-readable Geek Code and traditional Scout badges.  The badges recognize learning on the internet, so that if you put forth the time and effort to learn about a topic, you earn a badge that displays that ability to others.
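
Concretely, a badge boils down to a verifiable assertion an issuer publishes about a recipient.  A rough sketch of the shape, with field names approximating the Open Badges assertion format (treat the details as illustrative):

    // Approximate shape of an Open Badges assertion (illustrative):
    // an issuer asserts that a recipient earned a particular badge.
    interface BadgeAssertion {
      uid: string;                  // issuer-unique id for this award
      recipient: {
        type: 'email';
        hashed: boolean;            // identity may be hashed for privacy
        identity: string;           // e.g. 'sha256$...' when hashed
      };
      badge: string;                // URL of the badge-class definition
      issuedOn: number;             // Unix timestamp of the award
      verify: {
        type: 'hosted' | 'signed';  // how consumers check authenticity
        url: string;
      };
    }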

Screenshot of Google News, Politics Section showing a story about Conan the Barbarian

Google News has initiated its own Google News Badges, whereby reading stories about a given topic lets you show off your subject prowess through a badge.

Today’s post delves into the social fabric of the internet, and looks at the gaps these badges try to bridge and how to improve the efforts.

Google works hard to make their news more relevant, so please do not take this as a criticism of their efforts.  The problem they aim to solve is a stubborn one, and the Google News site still beats the other, non-user-driven news aggregation sites I’ve seen.

Media quality varies wildly, so reading a lot of articles does not necessarily make one informed.  Also, for a lot of stories the headline tells the tale, but users receive no credit for a story they understood at a glance.  But, possibly worst of all, taxonomical issues devalue the results.

The current Google News results suffice as an example of that last problem.  The second headline under Politics for me is about the movie Conan the Barbarian.

Bad enough in itself, but there are two other problems, namely the Murdoch Empire stories bookending the Conan story.  This, despite my best efforts to rid my Google News sections of those sources I consider too biased to bother with.

Subjectivity abounds in badges for news and similar pursuits, and Google News’ categorization attempts have not been dependable to date.  I would not want a badge based on reading those stories, and would not trust someone’s badge based on them either.

This symptom simply represents the larger problem with crafting badges, namely taxonomy.

Someone possessing a given skill in name, versus in practice, might meet, exceed, or fall short of expectations.  The fact that I read a book or watched a film does not mean I understood it, and the fact that I did not does not mean I lack implicit knowledge of it from cultural references (e.g., Citizen Kane).

The delicate art of communication leaves us the riddle of deciphering who knows what in an efficient manner.  Our ability to solve large problems depends on such things, and yet we often fail to uncover the knowledge pool.

Studies reveal that groups with more women tend to have higher group intelligence.  For example, quoting “Collective intelligence: number of women in group linked to effectiveness in solving difficult problems,” from Science Daily:

When it comes to intelligence, the whole can indeed be greater than the sum of its parts. A new study co-authored by MIT, Carnegie Mellon University, and Union College researchers documents the existence of collective intelligence among groups of people who cooperate well, showing that such intelligence extends beyond the cognitive abilities of the groups’ individual members, and that the tendency to cooperate effectively is linked to the number of women in a group.

While the studies tend to cite sensitivity to the emotions of the group members, it seems plausible that the type of communication, beyond simple sensitivity, holds a key.  More social groups construct better social taxonomies (i.e., recognition of the roles and capabilities of the members) and do so more efficiently.  A study purely about discovery of the social taxonomy would probably reveal as much.

Badges may improve discovery of a group’s abilities.  Chiefly, badges should assist in motivating learning and in crediting it.  But to truly uncover the promise of the internet, both pieces are needed.

One of the ways to improve badges might be to grant special statuses, like teacher, atop the regular badges.  Teaching refines existing knowledge, as it challenges you to present information in different ways and to approach the subject differently than as a learner or user.

Most specifically, teaching relies on formalizing the models of a subject, taking them from primordial form to crisp edges and smooth, consistent surfaces.

The other major challenge and improvement involves ascertaining existing skills.  Some websites already work toward that end.  For example, the Reddit community AskScience currently marks members with scientific training so that readers may gauge the reliability of their answers on a given topic.  If new-web initiatives like badges take hold, those acknowledgments may be transformed into true badges.

The internet’s potential remains untapped, but with all of the experimentation going on, results will come.

Categories
hyperweb

Comment on Comments

I’ve been following the Knight-Mozilla Challenges (Drumbeat: Knight Mozilla News Technology Partnership), and the second challenge is about improving online commenting and discussion.

Comments are Normal

In thinking about that challenge, the first thing that occurs to me is that the quality of any given comment will follow a normal distribution.  Most comments will be of an average quality.  Some will suck.  Some will rock.

Many of the solutions that exist today focus on changing the output.  They basically seek to filter the whole set of comments so that the shape of the curve for the visible comments changes.

I question whether that’s the best approach.  Can and should sites seek to improve the quality of the average post instead?  That is, if they could constrain the input in some ways, to elicit better comments, would that be better?  If so, what would that look like?
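
To make the distinction concrete, here is a toy sketch contrasting the two approaches (all the numbers are invented for illustration): filtering shows a better slice of the same curve, while constraining the input shifts the whole curve:

    // Toy model: comment quality ~ Normal(mean, sd), via Box-Muller.
    function normal(mean: number, sd: number): number {
      const u = Math.random();
      const v = Math.random();
      return mean + sd * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
    }

    const N = 10000;
    const baseline = Array.from({ length: N }, () => normal(5, 2));

    // Approach 1: filter the output, showing only the top quarter.
    const filtered = [...baseline].sort((a, b) => b - a).slice(0, N / 4);

    // Approach 2: constrain the input, shifting the whole curve up.
    // (The +1 effect size is invented for illustration.)
    const prompted = Array.from({ length: N }, () => normal(6, 2));

    const avg = (xs: number[]) => xs.reduce((s, x) => s + x, 0) / xs.length;
    console.log('baseline:', avg(baseline).toFixed(2)); // about 5, all visible
    console.log('filtered:', avg(filtered).toFixed(2)); // higher, but 3/4 hidden
    console.log('prompted:', avg(prompted).toFixed(2)); // about 6, all visible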

It might take the form of a series of explicit prompts for comments.  Instead of having (only) a general discussion, you would have some particular aspects that you could comment on and discuss.  The idea would be to frame particular discussions around more specific aspects, to avoid the drift that occurs in more general discussions.

You might give users a choice to create a directed discussion topic or participate in an existing one.  You would give only a small group of users the ability to create directed discussions, to avoid an overabundance of them that simply become the roots of threads.  A rough sketch of the model follows.
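
Here is that sketch, as a hypothetical data model (all the names are invented):

    // Hypothetical model: a privileged subset of users frames specific
    // prompts; everyone else comments within a prompt (or in general).
    interface DirectedPrompt {
      id: string;
      articleId: string;
      createdBy: string;       // must hold the 'director' privilege
      aspect: string;          // the specific facet up for discussion
    }

    interface Comment {
      id: string;
      promptId: string | null; // null means the general discussion
      author: string;
      body: string;
    }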

Non-comment Participation

If you seek to participate online, you are almost entirely restricted to two basic tasks.  One is commenting directly.  The other is curating the comments of others (via direct voting, rating, moderation, or linking).

It seems like that misses some opportunities.  My hunch is that there are other participatory means that would enhance commenting, but also some opportunities aside from commenting that could be added.  One possible idea in that vein is paraphrasing.

In visiting a story, I read and comprehend it, and any comment I make on it will reflect my own interpretation.  One opportunity that is separate from comments but related is paraphrasing or summarizing the original content.  That would give users alternative interpretations to read before commenting.  It might better inform the discussion, provided the summaries were strictly non-commentary.

As above, the idea would be to have only a small fraction of the users provide summaries.
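
Extending the earlier hypothetical model, summaries would live beside comments as their own layer:

    // Hypothetical: summaries sit beside comments as a separate layer,
    // authored by a small pool of designated summarizers.
    interface Summary {
      id: string;
      articleId: string;
      author: string;   // drawn from the limited summarizer pool
      body: string;     // strictly non-commentary paraphrase
    }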

Commonality in These Ideas

Both of the ideas above express a common theme of moving away from general discussion threads.  They focus on subtly shifting the discussion toward less self-directed content.  This isn’t to say that people can’t self-direct their comments, but it’s an extraneous step in many cases.

By moving a small number of users out of the comment pool and into a different layer of the discussion/participation system, you provide an opportunity for different behaviors to emerge.  That harkens back to the initial point I made: shifting the quality of the comments rather than filtering the whole pool.  There are too many users stuck in the same behaviors of commenting, and the web offers an opportunity to focus on diversity of activity.

I’m encouraged that the Knight Foundation and Mozilla are working on enhancing the role of journalism on the web, and I am definitely looking forward to finding out what sort of innovations come from it.