
Twitter’s Mixture

Dang, shoulda written about an edit button.

Although I first used Twitter around 2008, I didn’t start using it regularly until a few years into the dark ages of Donald John Trump, looking for some light. In general, using social media without a set or clique or cohort or whatever word you prefer isn’t that easy. A lot of the informal rules and norms aren’t written down and can vary by subculture, and unless you have people to help you understand them, it’s pretty bewildering.

Social media has rough edges that don’t seem to be getting smoothed out. This post looks at one of them: bad mixtures.

Social media (and, to a lesser extent, the web in general) has a blender effect. Instead of a nice balanced meal of an entree, some sides, a hunk of bread, a glass of water, and maybe some dessert, Twitter dumps it all into a blender, and you end up with a nutraloaf-style mixture.

All the various accounts are coming at you with a bunch of different contexts and tones. Their avatars stay the same regardless of what they’re saying. You see the cute puppy dog, the business headshot, the cartoon, or whatever the hell my avatar is supposed to be (?), telling you some tale of humor or outrage. It’s very body-snatcher-esque: constant branding clashing with the highs and lows of the content.

There are some modest ways to tweak the blender. Twitter Lists let you toss particular accounts together by some common quality, and they work well for accounts that stick to one type of content. But every individual account has its own blender effect, so while Lists might help on the average day, they inevitably fail from time to time, when the feeds inadvertently conspire to produce another info-sip of yuck. And if you follow enough different accounts, it’s likely at least one subset will be having a bad-news day.

Some of this mixing works in Twitter’s favor. The diamonds in the rough keep you scrolling away in hopes of finding the next one: a random reinforcement schedule. But everything in between is awfully jarring. You don’t get a balanced diet without significant effort to curate your experience. On any given day, the compounding of various accounts posting bad news can doom your scroll. Or you can be in a serious mood and suddenly see a bunch of fluffies to distract you.

Compared to newspapers, where editors dealt with multiple sections and worked to balance content and ads, cutting to fit or padding to fill, Twitter is a Pacific Gyre of awful bits, where we hope to glimpse the rare dolphin or whale.


But here’s the thing: the model has an answer, because social media runs on user contribution. The users do most of the work already, adding, amplifying, and filtering content. They can do more. Add category and tone or mood options to tweets. Let people who retweet or like a tweet add their own curation on top of it, in case the “Cool” thing they retweet is really “Lame” to their mind. Is it sad funny pretty ugly hot stupid wild depressing far-fetched down-to-earth food-for-thought or whatever (pronounced what-ever)?

And then let users choose to see the good news together, the bad news together. Give us the choice. Let us separate the Sports from the Politics (until we hit a story about a sporty politician or what have you). Hashtags are a good way to search, but they don’t put tweets into buckets, and most tweets have no hashtags at all.
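As a rough sketch of what that user-driven tagging could look like (everything here, from the MoodTag type to filterTimeline, is my own invention, not anything in Twitter’s actual API):

```typescript
// Hypothetical sketch: tweets carry mood/category tags from the author and
// from the people who retweet or like them, and readers filter their
// timeline down to the moods they actually want right now.
type MoodTag = "funny" | "sad" | "outrage" | "wholesome" | "news" | "sports";

interface Tweet {
  id: string;
  text: string;
  authorTags: MoodTag[];  // tags the author chose when posting
  curatorTags: MoodTag[]; // tags added by retweeters and likers
}

// Curator tags count alongside author tags, so the crowd can relabel a
// self-described "Cool" tweet as something else entirely.
function effectiveTags(t: Tweet): Set<MoodTag> {
  return new Set([...t.authorTags, ...t.curatorTags]);
}

function filterTimeline(timeline: Tweet[], wanted: Set<MoodTag>): Tweet[] {
  return timeline.filter((t) =>
    [...effectiveTags(t)].some((tag) => wanted.has(tag))
  );
}

// In a serious mood? Hide the fluffies:
//   filterTimeline(myTimeline, new Set(["news", "sports"]));
```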

The idea is that social media, that Twitter, does not have to be whatever mixture the accounts we follow happen to surface; it can be something with an extra layer of filtering on top. Let users do what they do best and help each other out.

Perhaps as machine learning matures, automatic classifiers that fit users’ needs will become available, but until then, shouldn’t Twitter users have moderation and filtering tools that are at least as effective as Slashdot’s were 20 years ago?
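For reference, the heart of Slashdot-style filtering is tiny. A minimal sketch (the scoring range mirrors Slashdot’s, but the code is illustrative, not Slashdot’s):

```typescript
// Slashdot-style moderation in miniature: each post accumulates moderation
// points, and each reader sets a personal threshold below which posts are
// hidden from view.
interface Post {
  id: string;
  score: number; // moderator votes summed, clamped to [-1, 5] on Slashdot
}

function visibleAtThreshold(posts: Post[], threshold: number): Post[] {
  return posts.filter((p) => p.score >= threshold);
}

// Reading at threshold 3 skips most of the noise; -1 shows everything.
```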

How to Fight Fake News

First, a proper definition of the problem. The problem of democracy is always the electorate choosing the people who will best advance government, given the difficulty of figuring out who that is, the complex tradeoffs at hand, and limited information.

Russian Federation fake news, and any other rogue propaganda from any nation-state agent, is therefore just a subset of the problem of a dirty information stream flowing to the electorate. De-Putinifying social platforms and the larger web, even if that were possible by itself, would not solve the larger problem.

So, we look to traditional noise problems for inspiration.


From Wikipedia: “Signal-to-noise ratio”: Improving SNR in practice:

It is often possible to reduce the noise by controlling the environment. Otherwise, when the characteristics of the noise are known and are different from the signals, it is possible to filter it or to process the signal.

From Wikipedia: “Combined sewer”:

This type of gravity sewer design is no longer used in building new communities (because current design separates sanitary sewers from runoff), but many older cities continue to operate combined sewers.

From Wikipedia: “Ad blocking”: Methods:

The more advanced ad blocking filter software allow fine-grained control of advertisements through features such as blacklists, whitelists, and regular expression filters.

From Wikipedia: “Bug bounty program”:

These programs allow the developers to discover and resolve bugs before the general public is aware of them, preventing incidents of widespread abuse.


Unless you can eliminate the source of contamination, you must rely on some sort of filter. It can be complete sequestration of the contaminant (as with separating wastewater from runoff), or it can be a processing filter, as with ad blockers or radio noise-removal systems.

The platforms that act as inlets of pollution may have their own reasons to resist adopting appropriate filters here, which makes this a harder problem.

But some combination should work to reduce the noise.

Separate the streams

In the vein of sewer separation, social sites can make a hard break between reputable periodicals and up-and-comers. This should not present a barrier to entry, but should be based on independently-verifiable indicators such as readership, credential issuance by major organizations, and other factors. They should likely separate opinion and commentary from reporting for similar reasons.

This is in line with what companies often do. Newspapers separate opinion from reporting, and Valve Software, maker of the Steam game platform, separates humorous reviews from serious reviews for similar reasons. It’s something social sites should do, too.
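A minimal sketch of such a hard break, with indicator fields and cutoffs that are placeholders of my own, not a proposal for exact values:

```typescript
// Hypothetical sketch: route a source into a "reputable" or "up-and-comer"
// stream based on independently-verifiable indicators. The fields and
// thresholds below are illustrative placeholders only.
interface SourceSignals {
  verifiedReadership: number;  // independently audited audience size
  pressCredentials: string[];  // organizations that issued credentials
  yearsPublishing: number;
}

type Stream = "reputable" | "up-and-comer";

function assignStream(s: SourceSignals): Stream {
  const established =
    s.verifiedReadership > 100_000 &&
    s.pressCredentials.length > 0 &&
    s.yearsPublishing >= 5;
  return established ? "reputable" : "up-and-comer";
}
```

The point is not these particular numbers; it’s that the routing can be mechanical and auditable rather than editorial.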

Strength-in-numbers

Google and other search engines have long fought those who game their rankings. Many of the same techniques can be employed to de-rank noise, including looking for multiple, independent submissions that give credence to a source before spreading it. This is also similar to Wikipedia’s notability requirement for article creation.

While this technique will not eliminate much on its own, it does raise the bar for cranks injecting their swill: a group colluding to post noise becomes easier to identify unless it expends considerable effort to make its fake accounts seem credible.
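One way to operationalize the idea (the cluster field is a stand-in for however a real system groups accounts suspected of coordinating, by IP ranges, follow graphs, account age, and so on):

```typescript
// Hypothetical sketch: only surface a source once it has been submitted by
// enough accounts that do not appear to be colluding with one another.
interface Submission {
  url: string;
  accountId: string;
  clusterId: string; // accounts suspected of coordinating share a cluster
}

function shouldSurface(subs: Submission[], url: string, k = 3): boolean {
  const independentClusters = new Set(
    subs.filter((s) => s.url === url).map((s) => s.clusterId)
  );
  return independentClusters.size >= k;
}
```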

Check for divergence

Most credibly-sourced news content contains a chunk of background that isn’t new, plus a small supplement that is. Fake news tends not to follow that rule, and checking for that divergence can be useful. Again, the enemies of signal may change their formats to avoid this detection, but doing so raises their costs considerably.
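A crude sketch of the check: estimate what fraction of an article is already-established background by looking its sentences up in a corpus of prior reporting. The corpus lookup here is a stub; a real system would use a proper similarity search.

```typescript
// Hypothetical sketch: flag articles that are almost entirely "new," since
// credible stories usually mix known background with a small update.
function appearsInCorpus(sentence: string): boolean {
  const corpus: string[] = []; // stand-in: load established reporting here
  return corpus.some((known) => known.includes(sentence));
}

function backgroundRatio(sentences: string[]): number {
  const known = sentences.filter((s) => appearsInCorpus(s)).length;
  return known / sentences.length;
}

function looksDivergent(sentences: string[], minBackground = 0.3): boolean {
  // Too little verifiable background is a red flag worth extra scrutiny.
  return backgroundRatio(sentences) < minBackground;
}
```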

Make ads public

Finally, micro-targeted advertising creates a particular problem: it is not readily subjected to the many eyeballs that could debunk it or call it out. If advertising platforms were required to maintain records of all the ads they serve, allowing for independent review, it would help guard against abuse.

Alternatively, if regulators and advertisers are opposed, browser extensions that automatically upload copies of ads to a non-profit service could enable this practice.
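A bare-bones sketch of that extension idea follows. The archive endpoint is invented, and the ad detection is far cruder than what a real extension (which could reuse ad-blocker filter lists) would need:

```typescript
// Hypothetical content script: copy anything that looks like an ad on the
// current page and upload it to a public, non-profit archive for review.
const ARCHIVE_URL = "https://ad-archive.example.org/ads"; // invented endpoint

async function archiveAds(): Promise<void> {
  // Naive detection; real heuristics or filter lists would go here.
  const ads = document.querySelectorAll("[data-ad], iframe[src*='ads']");
  for (const ad of ads) {
    await fetch(ARCHIVE_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        page: location.href,
        html: ad.outerHTML,
        seenAt: new Date().toISOString(),
      }),
    });
  }
}

void archiveAds();
```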

A brand opportunity

Apple has tried to brand itself privacy-conscious. Google touts speed and security. Mozilla, openness. Microsoft… has a marketing problem, because I’m not sure what its sales pitch even is now.

But the point is that all these browser and OS vendors can work on the problem of fake news and try to brand themselves as the one that gives you the tools to quash the invasion.


These are just some ideas for combating propaganda in our news feeds. The problem is worth working on, and it’s not impossible: we’ve had noise problems in other areas and have done a lot to minimize them.