Twitter’s Mixture

Dang, shoulda written about an edit button.

Although I first used Twitter around 2008, I didn’t start using it regularly until a few years into the dark ages of Donald John Trump, looking for some light. In general, using social media without a set or clique or cohort or whatever word you want isn’t that easy. A lot of the informal rules and norms aren’t written down, can vary by subculture, and unless you have people to help you understand them, it’s pretty bewildering.

Social media has rough edges that don’t seem to be getting smoothed out. This post looks at one of them: bad mixtures.

Social media (and, to a lesser extent, the web in general) has a blender effect. Instead of a nice balanced meal of an entree, some sides, a hunk of bread, a glass of water, and maybe some dessert, Twitter dumps them all into a blender, and you end up with a nutraloaf-style mixture.

All the various accounts are coming at you with a bunch of different contexts and tones. Their avatars are the same regardless of what they are saying. You see the cute puppy dog, the business headshot, the cartoon, or whatever the hell my avatar is supposed to be (?), telling you some tale of humor or outrage. It’s very body-snatcher-esque, the constant branding clashing with the highs and lows of the content.

There are some modest ways to tweak the blender. Twitter Lists let you toss particular accounts together by some common quality. It works well for accounts that stick to one type of content. But every individual’s account has its own blender effects, so while Lists might help on the average day, they inevitably fail from time to time, when you find the feeds inadvertently conspire to produce another info-sip of yuck. And if you have enough coverage of different accounts, it’s likely at least one subset will be having a bad-news day.

Some of this mixing works in Twitter’s favor. The diamonds in the rough keep you scrolling away in hopes of finding the next one: a random reinforcement schedule. But it’s awfully jarring in between. You don’t get a balanced diet without significant effort to curate your experience. On any given day, the compounding of various accounts posting bad news can doom your scroll. Or you can be in a serious mood and suddenly see a bunch of fluffies to distract you.

Compared to newspapers, where editors dealt with multiple sections and worked to balance the content and ads, cutting to fit or padding to fill, Twitter is a Pacific Gyre of awful bits, where we hope to glimpse the rare dolphin or whale.


But here’s the thing: the model has an answer. Social media runs on user contribution. The users do most of the work already, adding, amplifying, and filtering content. They can do more. Add category and tone or mood options to tweets. Let people who retweet or like a tweet add their own curation on top of it, in case the “Cool” thing they retweet is really “Lame” to their mind. Is it sad funny pretty ugly hot stupid wild depressing far-fetched down-to-earth food-for-thought or whatever (pronounced what-ever)?

And then let users choose to see the good news together, the bad news together. Give us the choice. Let us separate the Sports from the Politics (until we hit a story about a sporty politician or what have you). Hashtags are a good way to search, but they don’t put the tweets into buckets and most tweets have no hashtags.
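To make the bucket idea concrete, here is a minimal sketch in Python. All of the tweets, tags, and names are hypothetical; the point is just that user-applied tags would let a reader pick “sports” without “politics,” with overlapping tweets landing in both buckets.

```python
from collections import defaultdict

# Hypothetical tweets, each carrying tags applied by the author or by
# the people who retweeted or liked it (all data here is made up).
tweets = [
    {"text": "Local team wins big", "tags": {"sports", "good-news"}},
    {"text": "Senator indicted", "tags": {"politics", "bad-news"}},
    {"text": "Puppy learns to swim", "tags": {"fluffy", "good-news"}},
    {"text": "Star pitcher runs for office", "tags": {"sports", "politics"}},
]

def bucket_by_tag(tweets):
    """Group tweets into one bucket per tag, so a reader can choose
    which buckets to see; a tweet with several tags lands in each."""
    buckets = defaultdict(list)
    for tweet in tweets:
        for tag in tweet["tags"]:
            buckets[tag].append(tweet["text"])
    return buckets

buckets = bucket_by_tag(tweets)
print(buckets["sports"])    # the sporty-politician tweet appears here...
print(buckets["politics"])  # ...and here, exactly the edge case noted above
```

Unlike hashtags, which only help when the author bothered to add them and which serve search rather than display, this kind of tagging would feed directly into what the timeline shows.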

The idea is that social media, that Twitter, does not have to be a mixture of whatever the accounts we follow happen to surface; it can be something with an extra layer of filtering atop that. Let users do what they do best and help each other out.

Perhaps as machine learning matures, automatic classifiers that fit users’ needs will become available, but until then, shouldn’t Twitter users have moderation and filtering tools that are at least as effective as Slashdot’s were 20 years ago?
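For readers who never used Slashdot: its moderation system assigned each comment a score (roughly -1 to 5), and each reader set a threshold below which comments were hidden. A minimal sketch of that threshold idea, with all posts and scores invented for illustration:

```python
# Slashdot-style threshold filtering, sketched. Each post carries a
# moderation score accumulated from community moderators; each reader
# chooses a personal threshold. All data here is hypothetical.
posts = [
    ("Insightful take on the news", 5),
    ("Mild hot take", 2),
    ("Obvious spam", -1),
]

def visible(posts, threshold):
    """Return only the posts whose score meets the reader's threshold."""
    return [text for text, score in posts if score >= threshold]

# A reader at threshold 2 sees the good stuff and skips the spam;
# a reader at threshold -1 opts in to seeing everything.
print(visible(posts, threshold=2))
print(visible(posts, threshold=-1))
```

The design choice worth noting is that the reader, not the platform, picks the cutoff, which is exactly the kind of leverage this post is arguing for.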

Politics, Networks, and Identity Fragmentation

Social networks fragment users by classification and then fragment themselves by offering up personalized content.

With the recent questions about vaccine misinformation on Facebook, part of the issue is really figuring out what these platforms are, and who people are when using them. At the core of social networking is the idea that people can be classified or grouped, which is both true and false. Many businesses rely on enough people being similar in certain ways to allow for classification. Banks do it for loans, insurance companies for figuring out how much to charge based on expected payouts, media does it for advertising by delivering expected demographics, and so on.

And it turns out we can classify people in lots of ways, and some of that makes sense. But that kind of classification can also cause us big problems. Social media is itself a fragmentary system. It changes how it acts based on how the user is classified, and it does so without paying attention to what the classifications are or what they mean to real people.

Banks don’t classify people to figure out, based on certain account activity, who is having an affair. They probably could. They could send your spouse a notice: we think you’re being cheated on. People wouldn’t be happy about that. The bank would be in big trouble. But the bank doesn’t want to piss off too many people too much. It avoids that kind of thing.

Social media companies don’t even know what they classify on. The signals they classify on are mostly subliminal to the company. They simply look at the output, whether more users see more ads, and adjust on that blind criterion. YouTube is just as happy to show you videos of static or of opera or of right-wing blowhards. Facebook seems to actually excel at spreading misinformation, while doing poorly at spreading information. But it has to pretend otherwise to its advertisers.

So you have fragmented user identities, with the platform trying to classify users into as many discrete categories as it can, then nudging them toward the categories that are most addictive and profitable. Again, without knowing what those categories are. And you have the fragmented social network, which is trying to generate content that is more addictive and profitable.


The system isn’t designed to care. That’s the basic takeaway from social networks, and why they can properly be called anti-social networks. Social systems are based around caring, around holistic evaluation of peers and knowing their strengths and weaknesses and adjusting our behavior to fit and shift in our various cohorts. Anti-social networks have no such goals or fabric.

Their fragmentary design decides to push and shove the users toward whatever signals look profitable. All the while, the social network you use may be so divorced from that of your neighbor, you might not even recognize them as being the same system. It would be like riding the subway and having completely different stops available. For you, maybe the train lets you off at shopping, banks, a park. For your friend, it might be bars, nightclubs, and speakeasies, if your friend has a drinking problem and the subway can profit off it.

There are ways to improve how these websites operate, but it’s clear that they won’t try unless they have a real reason to. Regulatory policies focus on privacy or on misinformation, but it’s not clear those limited changes will fix the problems. They seem more likely to simply push malicious parties who profit from the anti-social to find less-bemoaned misinformation or do what spammers attempt: fuzz their bad content enough to avoid filters and crackdowns.

Instead, what’s needed is a new generation of interactions that rely on stronger identity signals divorced from niche interests. Too much of the basic currency of social interaction online is through easily falsifiable signals like sports teams or pet preferences. We need better ways to establish online identities that aren’t meme-based.

Facebook, Lies, and Politics

Thoughts about Facebook’s decision to allow lying in political advertising.

Twitter is banning political ads. Facebook is banning political ads from people they believe lie about being politicians. But Facebook will allow bona fide politicians to lie in ads.

What is the role for a platform, both in ads and in moderating?

That’s the wrong question.

The question we must ask is not what is the shape of a proper social network. Why not?

  1. There may be several, and they may coexist.
  2. The shape may change over time, including in cyclical ways (e.g., during an election cycle versus outside of it).
  3. These networks span the globe, so fighting for changes in domestic rules won’t help the most vulnerable overseas.
  4. We don’t know what we don’t know, per Donald Rumsfeld.

The proper question about social networks is: How do the people gain enough leverage to serve as a forcing-function to shape social network behavior, rather than merely being shaped by it?

Traditionally, the answer to that question has been money, and the answer to how to influence them through money has been competition. That is, if their income is threatened by the easy choice of users to go next door, then they don’t do things that harm users enough that they go next door.

In the case of Facebook, their money comes from advertisers of all sorts, including politicians, scams, major brands, and in the case of President Trump’s campaign, all three at once! (Gotta take the cheap shots as they come.)

But Facebook is global. It has diversity of users, including people who think their small business depends on it, including media types who think their traffic depends on it, including politicians who think they’re connecting with constituents, and, yes, including grandparents and such who feel social connection because of it.

Competition doesn’t seem to make sense in social networks, because it would require maintaining copies of one’s social graph in several services simultaneously. Instead, either you have several social graphs that look different per service, or you are migrating from an old social graph on one service to a new one on a new service.

But in whatever substitutes for competition, if you want to move people off of Facebook, you’re basically saying that those benefits need to flow to users elsewhere. You have to engage with the politician on Service X, so that their office recognizes that people are there, so that they care more about Service X. You have to let your grandparents see you responding to them on Service X. And so on.

That is how these networks function. People go where the people are. And a site like Facebook will respond only when they see that movement, or some other threat to their revenue. Lacking a brain, a heart, and courage, that’s all that can convince them that letting politicians lie for money is dumb as hell.