Tyrone Kidney

The Great American Fetish

The Weakness That Fuels Algorithms.


Social media platforms originating in the US all play upon the deep-rooted, cultural fetish that is ‘being popular’.

I for one am tired of this asinine fixation and find myself relying less and less on the big names in social media, and looking to other online platforms to learn and share.

The algorithms that run Facebook, Twitter, Instagram and others take full advantage of the North American obsession with popularity. They direct us to like and share things said by complete strangers, to openly request the explicit approval of others, to endorse and follow people, and to aim to be well-followed ourselves.

As a European living in North America I find this whole popularity thing bizarre and distasteful. For us, the term ‘popular’ simply refers to a relative amount. It’s never a compliment, it certainly never implies quality, and it can even be used as an insult.

Yet, these globally used algorithms were devised in a culture where popularity is regarded as a pre-requisite for success and happiness.

Social media platforms universally instruct new users to tap into this principle, before scooping them into groupings of what the algorithm says are like-minded accounts. It creates for them a cozy bubble of selected posts, news, opinions and promotions, all based on what they (and people like them) already identify with and are likely to support.

These principles and groupings then form the foundation of the complex advertising and promotional products the platforms offer.

This baked-in popularity principle means that social media amplifies what we already know and want, as well as magnifying other phenomena traditionally associated with being popular offline.

Which obviously isn’t news, and certainly isn’t a good thing.

The Effects.

Firstly, people’s behaviour on social media regularly demonstrates that we don’t like to stand out from the crowd. Conformity with the norms of the group helps individual members keep hold of any popularity they might already have, but it can also result in aggressive ‘othering’ of non-group members, and the fierce defence of the beliefs which shape their group identity.

Secondly, those who can afford to promote their social media content perpetuate their existing privilege by doing so. Whether they’ve paid for social media training, or hired someone to handle their content, or paid for fake engagement, they are investing in their brand. This (they feel) adds to their ‘worth’ in terms of both cash and popularity.

Thirdly, personal accounts with a large following benefit from popularity-focused algorithms as soon as their content is posted, no matter how inaccurate or mundane it might be. Poor quality ‘A-list’ chit-chat gets fast-track priority on their followers’ timelines while content from smaller, newer accounts just won’t be shown.

Fourthly, businesses and organisations that are able to pay for social media promotions are paying to obscure smaller or newer operations whose reach and visibility are already algorithmically stifled by their lower follower count. This ensures the ongoing prominence of larger brands, leaving smaller accounts to struggle — even if they're not in direct competition.
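The third and fourth points above can be illustrated with a toy sketch. This is not any real platform's ranking code — the function, field names and weights are all invented for illustration — but it shows the basic mechanic: when a feed score is dominated by the author's follower count, an older post from a large account will outrank fresher content from a small one.

```python
# Toy illustration of a popularity-weighted feed ranker.
# All names and weights here are hypothetical, not any platform's real logic.

def rank_feed(posts, follower_weight=1.0, recency_weight=0.1):
    """Sort posts by a score that favours authors with large followings.

    Each post is a dict with (invented) keys:
      'author_followers' -- follower count of the poster
      'age_hours'        -- how long ago it was posted
    """
    def score(post):
        popularity = follower_weight * post["author_followers"]
        freshness = -recency_weight * post["age_hours"]
        return popularity + freshness

    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    {"id": "new-voice", "author_followers": 120, "age_hours": 1},
    {"id": "a-lister", "author_followers": 2_000_000, "age_hours": 20},
])
# The A-list post leads the feed despite being 20 hours older,
# because follower count swamps the small recency penalty.
```

However the weights are tuned, as long as follower count contributes positively to the score, the smaller account can only win by being dramatically fresher or by paying to boost its reach — which is exactly the dynamic described above.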

Being Both Popular and Wrong.

My interest (as someone with a background in e-democracy and public engagement) is in how these popularity loops and uneven privileges block access to new information, and why false and outdated information still gains traction despite better sources being readily available.

The answer lies partly in the effect (and some might say the manipulation) of social media algorithms.

When fresh, useful and insightful content emerges from a small or new social media account in our bubble, it will either be buried by content that has been algorithmically pre-ordained to be ‘popular’, or ignored as a marginal view from an unpopular account.

Either way, its chances of being seen or engaged with are slim.

This lack of reach isn’t limited to small accounts. It applies to any views or topics that aren’t already popular within a particular bubble.

An example of this can be seen in the opposition to wearing face coverings during the Covid-19 pandemic. (I'll leave the parallel example of vaccine hesitancy to clinical writers far better placed to analyse it.)

We wonder why some anti-maskers constantly regurgitate debunked pseudo-science and woefully outdated information, and why they haven't read anything more up to date. Well, it's partly because they haven't come across it.

Yes, there’s the issue of ineffective leadership on facemasks during the pandemic, and we know that behavioural models explain some of the anti-mask attitudes, and we understand that there are malicious people out there trolling for kicks.

But we also know that there’s a large, conservative base of people whose preferred source of information and news is social media.

Acknowledging this reliance on a single source means we should also acknowledge that social media algorithms have influenced opinion by restricting and pushing the information that shapes it.

Loudly quoting new pro-mask research to those people whose social media bubble is full of trusted accounts confidently testifying otherwise will (as you know) get you nowhere.

These people blew all of their cognitive budget when they decided who to follow. New information from outside this group is considered to be an identity attack, not a learning opportunity.

As infuriating as it can be, we have to accept that social media is unlikely to be a reliable tool for persuading people about facemasks, vaccination, climate change, racism or any other topic that becomes tainted by popularity and populism.

Footloose and Algorithm-Free.

On top of this algorithmic point, social media platforms have several issues relating to misinformation, propaganda, bias, poor-quality reporting and dubiously targeted content.

Which makes me and many others doubt the likelihood that social media content could ever be wholly reliable.

Not that I’m paranoid about anyone’s motives or messages (if we don’t appreciate that everyone’s got an angle then we really are lost), but I’ve learned that there’s more and better content out there that I’m just not encountering via social media.

I also know that as a small business owner, I can do better by promoting myself elsewhere.

I extracted myself from Facebook and Twitter for a month earlier this year and found that many of the social media accounts I follow also have entertaining, informative, algorithm-free blogs and websites. So I spent time developing a similar platform for myself and it’s great. Actually it’s still a bit ropey and I have plenty to learn, but it feels solidly worthwhile, and all mine.

It might seem nostalgic or even conservative to dismiss social media and look back to the good old days of websites and blogs, but it’s ultimately a practical rejection of a sector that offers little value, not some kind of protest.

I appreciate that I have to engage occasionally with platforms whose algorithms I’m uncomfortable with, and sure, I’ll share this story with folks in my online writing bubble. But nowadays the return on investment for using social media organically is pretty much nil.

Remembering that there are other ways to share and learn online is extremely liberating, and I’d encourage others to connect with the algorithm-free content of people they follow, and generate some themselves.

If you’d like to know more about the cultural biases within algorithms, you may like to read some of Cathy O’Neil’s work — her blog, and her various TED and NPR resources.