
Diffserv: Can it coexist with neutrality?

Diffserv is a traffic classification scheme. Can it coexist with network neutrality? Not unless the data services industry commits to public audits.

A manager at AT&T recently blogged (AT&T Public Policy Blog: The Danger of Dogma) about an IETF working group’s RFCs regarding Diffserv, a traffic classification scheme, citing them as evidence that neutrality is not a fundamental principle of the internet.  I’m pondering, briefly, whether Diffserv and neutrality can coexist, and if so, how.


Briefly, Diffserv adds a small tag to each datagram to classify its type of service.  The default is standard best-effort; the other options request various priorities to help guarantee that specific traffic reaches its destination in a useful manner.  The scheme depends on intermediaries respecting the tag on each datagram, and on their actually being able to carry out the request it encodes.  The old scheme, best-effort, simply means that each intermediary treats every datagram the same and tries to faithfully route it on toward its destination.
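As a concrete sketch (assuming a POSIX-style stack; the constants come from Python’s socket module), an application can request a Diffserv class by setting the DSCP bits, which occupy the upper six bits of the old IPv4 TOS byte:

```python
import socket

# DSCP occupies the high six bits of the old IPv4 TOS byte.
BEST_EFFORT = 0    # DSCP 0: the default, best-effort class
EXPEDITED = 46     # DSCP 46: Expedited Forwarding, a low-latency class

def dscp_to_tos(dscp):
    """Shift a 6-bit DSCP value into position in the 8-bit TOS byte."""
    return (dscp & 0x3F) << 2

# Ask the stack to mark every datagram sent from this UDP socket.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(EXPEDITED))
sock.close()
```

Setting the mark is the easy part; whether any intermediary honors it is, of course, the whole question.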

A Mixed Bag

So, if you have a bag of marbles, and they are all the same size, then reaching in, you will be equally likely to grab any of them.  It could be said they have neutral weight with respect to one another.  The Diffserv bag contains many marbles that are equal, but some are bigger, and some are sticky.  You are more likely to pull some of those marbles out.  That much is obvious, and there will be an inherent bias there.  Is that necessarily a problem?

I am willing to admit it is not necessarily a problem.  If you have upgraded your marble scoop to be faster (i.e., you can guarantee the same treatment as always to the standard marbles), then there’s really nothing wrong with the bigger marbles being mixed in.
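The marble analogy is just weighted sampling; here is a minimal sketch, with a marble’s weight standing in for its Diffserv priority (the bag contents are made up):

```python
import random

random.seed(0)  # make the sketch reproducible

# Hypothetical bag: standard marbles all weigh 1; "bigger" and "sticky"
# marbles weigh more, so each one is proportionally likelier to be grabbed.
bag = [("standard", 1)] * 8 + [("bigger", 3), ("sticky", 2)]

def grab(bag):
    """Pull one marble, with probability proportional to its weight."""
    marbles, weights = zip(*bag)
    return random.choices(marbles, weights=weights, k=1)[0]

counts = {"standard": 0, "bigger": 0, "sticky": 0}
for _ in range(10_000):
    counts[grab(bag)] += 1
# Standard marbles still dominate by count, but each non-standard marble
# individually comes out more often than any single standard marble.
```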

The Rub

We are talking about the data services industry.  They are not known for their dependability, and even if they swore on a stack of IETF STD docs that their new marble scoop would not only guarantee the old level of treatment for standard marbles but would actually do better, I would not take them at their word.  That much is clear.

So, we have a question to ask them: what, exactly, will be the technical measure that guarantees the continued progress of the network, and how will you be deterred from simply keeping the infrastructure static while wink-nudging customers into adding stickum to their datagrams?  That’s what it will take to make such a change acceptable to those of us who aren’t in your pockets: undeniable proof, based on the well-established standards of evidence employed in this little thing called science.

What Proof Could Look Like

Proof that they would not discriminate against best-effort packets would have to come in the form of infrastructure guarantees.  That is, they would have to make public commitments to their infrastructure development.  Furthermore, random public audits, mandated by law and showing consistent improvement in the delivery of best-effort datagrams, would be needed to demonstrate a lack of bias.

The penalties for cheating or for non-participation in the audits would have to be very stiff and actually enforced.  Given that the government doesn’t enforce even the laws governing extraction of resources from ecologically sensitive environments, Diffserv doesn’t look very promising.

The downside of streaming

Streaming is just a bad idea except when it’s absolutely necessary. Really.

There are basically two ways that you get video online. One is streaming, which is ‘online’ in that the bits are immediately scurried up the networking layers to the application layer for demuxing and display. The other is the more familiar downloading, which is closer to ‘offline’ in that the data is buffered, anywhere from a small part of the video to the whole thing. Streaming sucks and downloading FTW.

Streams in an ideal world

In an ideal world streams work okay. There is low enough latency and enough throughput on the network that transporting in a stream is basically indistinguishable from downloading. Currently this kind of service can be found on intranets, on Internet2, and, if you’re paying probably at least $150/month (okay, maybe $80/month), on your own connection.

You basically want to use streaming iff (you only want to use the connection you’re on for that) OR (the stream is small enough PER PACKET that any other traffic (will not be delayed OR the delay doesn’t matter)).
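That condition can be written out as a predicate; this is a verbatim encoding of the rule above, and the parameter names are mine:

```python
def should_stream(exclusive_use, stream_is_small, others_delayed, delay_matters):
    """Stream iff you have the connection to yourself, OR the stream is
    small enough per packet that any other traffic either isn't delayed
    or the delay doesn't matter."""
    return exclusive_use or (
        stream_is_small and (not others_delayed or not delay_matters)
    )
```

Note how narrow the truthy region is: with shared use, a heavy stream never qualifies, and even a light one only qualifies when other traffic tolerates it.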

That’s seldom the case, so more often than not you want to use a download architecture.

Corps want to use Streams

It’s in the corporate, pro-excessive-intellectual-property-whoring agenda to use streams. Streams are harder to capture and redistribute (particularly when encapsulated to that end). They also have the perk of being ‘instant on’, such that the average joe can click and immediately begin watching. That’s heavily favored in a world where joe might decide to do something else if you don’t give him what he wants now.

It’s one of the reasons for the net neutrality debacle. Corporations want to send giant streams on-demand to the customer with assured quality. Other corporations want to charge for that privilege. Still other corporations and individuals want all data to be treated blindly. It’s not part of the internetworking architecture for routers and routes to know, much less care, what is on their wires. They’re just supposed to ship datagrams as they get them; rather, they’re supposed to attempt to ship them. If a datagram gets lost or can’t be shipped, they should send back an Internet Control Message saying why, if they can.
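That forwarding contract can be sketched as a toy router (all names hypothetical; real routers match longest prefixes, not dictionary keys):

```python
# Toy routing table: destination prefix -> outgoing interface.
ROUTES = {"10.0.0.0/8": "eth0", "192.168.1.0/24": "eth1"}

def forward(dst_prefix, payload):
    """Ship the datagram if a route exists, without ever inspecting the
    payload; otherwise answer with an ICMP-style error
    (Type 3: Destination Unreachable, code 0: net unreachable)."""
    iface = ROUTES.get(dst_prefix)
    if iface is None:
        return ("icmp", 3, 0)
    return ("forwarded", iface, payload)
```

The point is what the function does not do: it never branches on the payload, only on reachability.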

Local is better

Unless your network is fictional, there’s a high probability that data on your hard drive is faster to access than data on the wire. That being the case, any operation on a video will be faster from the buffer than from sending a request and waiting for that data to come back.

Pause? If you’re watching a downloaded or downloading video, no problem. If you’re watching a stream, then everything already sent just gets chucked (or, if you’re lucky, buffered); once the buffer is full, you’re either eating away at what you just watched or not getting any more data.

Rewind? If you’re lucky there’s still some of the old data in the buffer.

Fast forward? Everything sent in the interim probably gets dropped as the buffer fills with the new time-frames.

With downloading you keep all that data and if you pause the future data keeps coming with no problem.
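The difference can be sketched with two toy buffers (hypothetical classes; integers stand in for video frames): a stream retains only a bounded window, so seeking back only works inside it, while a download keeps everything:

```python
from collections import deque

class StreamBuffer:
    """Fixed-size stream buffer: old frames fall off the back, so
    rewinding only works within the retained window."""
    def __init__(self, capacity):
        self.frames = deque(maxlen=capacity)
    def receive(self, frame):
        self.frames.append(frame)    # oldest frame silently dropped when full
    def seekable(self, frame):
        return frame in self.frames  # lucky: is it still in the buffer?

class DownloadBuffer:
    """Everything received is kept, so any earlier frame is seekable."""
    def __init__(self):
        self.frames = []
    def receive(self, frame):
        self.frames.append(frame)
    def seekable(self, frame):
        return frame in self.frames

stream, download = StreamBuffer(3), DownloadBuffer()
for frame in range(10):
    stream.receive(frame)
    download.receive(frame)
# stream.seekable(0) is False; download.seekable(0) is True.
```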

But the main point about downloading is that you don’t need the data to be fetched WHILE you watch it. There was assuredly some point in the past at which you could have gotten the data; now is just when you want to watch it. The exception is a live event, and the caveat is that you may not have known you wanted to watch it until just now.

And that’s where the motivation for things like my upcoming bookstack extension come in.

“Right Now” isn’t as big a deal as you think it is

Just because you want something right now doesn’t mean you can have it right now. This is a lesson we all know from birth onward. Crying in our cribs, we wanted something NOW. We wanted our asses cleaned or we wanted milk or to burp or attention. But no matter how fast our parents were they were not instantaneous.

If you want “leisure” right now, then there can be a stack of leisure available to you. If you want intellectually stimulating, a stack. And so on. My extension focuses (for now) on just making one stack with all the links you encountered and said, “I want to look at that maybe, but right now I’m just gathering stuff to look at,” or, “I’m in the middle of this and someone has sent me this link and I don’t want to fool with it NOW, so I put it in a LATER folder that handles it for me NOW.”

With online video the goal is like Netflix. You say now that you want the complete series of Mr. Belvedere in your Netflix queue, and later it arrives in the mail and you watch it. Right now you can say you want to watch the latest webisode of Cooking With Backbone Routers, and at any point after it’s done downloading you can watch it.

Allocation is the key

The main gap to fill is allocating the connection properly. If there’s nothing on it, you can take it all. If someone joins, you should throttle back to accommodate them. If there’s enough on it already, new folks should be queued up to wait for the resource. The traffic should coexist peacefully, and one of the main steps toward that is deferring (or, preferably, preemptively grabbing) static data that isn’t needed NOW. There’s plenty of time when the wire sits silent, and that’s the perfect time to be grabbing what will clog it later.
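The sharing rule described above is essentially max-min fairness: everyone gets an equal share, and whatever a light flow doesn’t need is split among the rest. A hypothetical sketch of the allocation step (flow names and units are mine):

```python
def allocate(capacity, flows):
    """Max-min fair allocation: give each active flow an equal share of
    the link; a flow that needs less keeps only its demand, and the
    leftover is re-split among the flows still hungry."""
    demands = dict(flows)
    shares = {f: 0.0 for f in demands}
    remaining = capacity
    active = set(demands)
    while active and remaining > 1e-9:
        fair = remaining / len(active)
        satisfied = {f for f in active if demands[f] - shares[f] <= fair}
        if not satisfied:
            # Everyone left wants more than the fair share: split evenly.
            for f in active:
                shares[f] += fair
            break
        for f in satisfied:
            remaining -= demands[f] - shares[f]
            shares[f] = demands[f]
        active -= satisfied
    return shares

# A 10-unit link shared by flows demanding 2, 4, and 9 units:
shares = allocate(10, [("a", 2), ("b", 4), ("c", 9)])
# a and b are fully satisfied; c gets the remaining 4.
```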

Anyway, just some thoughts about the stupidity of streaming video (of non-live events).