With the recent questions about vaccine misinformation on Facebook, part of the issue is figuring out what these platforms actually are, and who people are when using them. At the core of social networking is the idea that people can be classified or grouped, which is both true and false. Many businesses rely on enough people being similar in certain ways to make classification work: banks do it for loans, insurance companies to set premiums against expected payouts, media companies for advertising, delivering expected demographics to buyers, and so on.
And it turns out we can classify people in lots of ways, and some of that makes sense. But that kind of classification can also cause big problems. Social media is itself a fragmentary system: it changes how it behaves based on how each user is classified, and it does so without paying attention to what the classifications are or what they mean to real people.
Banks don’t classify people to figure out, based on certain account activity, who is having an affair. They probably could. They could send your spouse a notice: we think you’re being cheated on. People wouldn’t be happy about that. The bank would be in big trouble. But the bank doesn’t want to piss off too many people too much, so it avoids that kind of thing.
Social media companies don’t even know what they classify on. The signals they classify on are mostly opaque to the company itself. They simply look at the output, whether more users see more ads, and adjust on that blind criterion. YouTube is just as happy to show you videos of static or of opera or of right-wing blowhards. Facebook seems to actually excel at spreading misinformation while doing poorly at spreading information, but it has to pretend otherwise to its advertisers.
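To make that blindness concrete, here is a minimal sketch of the feedback loop, with all names and numbers invented for illustration: the ranker reinforces whatever produced engagement, and nothing in the loop ever inspects what the content actually is.

```python
import random

# A toy model of the "blind" ranking loop: weights chase observed
# engagement, with no notion of what any item contains.

def update_ranker(weights, engagement, lr=0.1):
    """Pull each item's weight toward its observed engagement, blindly."""
    for item, score in engagement.items():
        weights[item] += lr * (score - weights[item])
    return weights

# Three opaque pieces of content; one happens to engage more than the others.
weights = {"video_a": 0.5, "video_b": 0.5, "video_c": 0.5}
for _ in range(100):
    engagement = {
        "video_a": random.random() * 0.3,  # static
        "video_b": random.random() * 0.4,  # opera
        "video_c": random.random() * 0.9,  # outrage bait
    }
    weights = update_ranker(weights, engagement)

print(weights)  # video_c ends up on top; the loop never knew or cared why
```

The point of the sketch is that "static, opera, or blowhards" really are interchangeable to this kind of system: only the engagement numbers differ.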
So you have fragmented user identities, where platforms try to classify users into as many discrete categories as they can, then nudge users toward the categories that are most addictive and profitable. Again, without knowing what those categories are. And you have the fragmented social network, which tries to surface whatever content is most addictive and profitable.
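Here is an equally hypothetical sketch of the nudging half: users get bucketed by opaque behavioral signals, and the feed reserves a slice for whichever bucket monetizes best. The cluster labels and revenue figures below are made up; the point is that nothing in the logic knows or cares what the clusters mean.

```python
from collections import Counter

# Invented cluster assignments and per-view revenue; the labels are as
# meaningless to this code as they would be to the platform.
users = {"u1": "cluster_17", "u2": "cluster_17", "u3": "cluster_42"}
revenue_per_view = {"cluster_17": 0.002, "cluster_42": 0.009}

def pick_feed(user, size=10, nudge_share=0.3):
    """Mostly serve the user's own cluster, but reserve part of the feed
    for the most profitable cluster, pulling the user toward it over time."""
    home = users[user]
    target = max(revenue_per_view, key=revenue_per_view.get)
    nudges = int(size * nudge_share)
    return [home] * (size - nudges) + [target] * nudges

print(Counter(pick_feed("u1")))  # 7 familiar items, 3 pulled from cluster_42
```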
The system isn’t designed to care. That’s the basic takeaway from social networks, and why they can properly be called anti-social networks. Social systems are built around caring: around holistic evaluation of peers, knowing their strengths and weaknesses, and adjusting our behavior as we move among our various cohorts. Anti-social networks have no such goals or fabric.
Their fragmentary design pushes and shoves users toward whatever signals look profitable. All the while, the social network you use may be so divorced from your neighbor’s that you might not even recognize the two as the same system. It would be like riding the subway and having completely different stops available. For you, maybe the train lets you off at shopping, banks, a park. For your friend, it might be bars, nightclubs, and speakeasies, if your friend has a drinking problem and the subway can profit off it.
There are ways to improve how these websites operate, but it’s clear they won’t try unless they have a real reason to. Regulatory policies focus on privacy or on misinformation, but it’s not clear those limited changes will fix the problems. They seem more likely to simply push malicious parties who profit from the anti-social dynamic to find less-bemoaned misinformation, or to do what spammers do: fuzz their bad content just enough to evade filters and crackdowns.
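As a toy illustration of why that tactic works, consider how little it takes to slip past a naive keyword filter. The blocklist and the homoglyph trick below are invented for illustration; real evasion and real moderation are both far messier.

```python
# A single lookalike character defeats exact-match filtering.
BLOCKLIST = {"miracle cure"}

def naive_filter(text: str) -> bool:
    """Return True if the text trips the keyword blocklist."""
    return any(term in text.lower() for term in BLOCKLIST)

def fuzz(text: str) -> str:
    """Trivial obfuscation: swap Latin 'c' for a Cyrillic lookalike."""
    return text.replace("c", "\u0441")  # visually identical to most readers

post = "Try this miracle cure today!"
print(naive_filter(post))        # True  -- caught
print(naive_filter(fuzz(post)))  # False -- sails past the same filter
```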
Instead, what’s needed is a new generation of interactions that rely on stronger identity signals divorced from niche interests. Too much of the basic currency of social interaction online runs through easily faked signals like sports teams or pet preferences. We need better ways to establish online identities that aren’t meme-based.