Sorry Elon, “open source” algorithms won’t improve Twitter


Commenting on his own quest to take control of Twitter, Elon Musk suggested he could share with the public the code that determines what content is promoted on the platform and what gets removed. That might sound exciting, and it might even boost Twitter’s profits, but average users will likely find it makes their experience worse.

To be clear, Twitter’s design is what primarily drives its user experience and culture: which tweets you’re most likely to see (those from people you follow) and which you probably won’t (those from people who follow you).

Unfortunately, even a carefully curated feed can contain a lot of unwanted content – semi-interesting dunks on other tweets and snippets of provocative but possibly false information – mixed in with really interesting and timely nuggets. The recommendation algorithm that decides the order in which these items appear has a profound effect on whether the user is delighted, confused, or bored.

But “opening up” this algorithm could bring people a lot more unwanted content.

It probably wouldn’t make it easier, as some have suggested, for other platforms to compete with Twitter. The recommendation algorithm is likely a variation of a standard model that data scientists learn in school rather than actual intellectual property. And the biggest barrier to competition is Twitter’s network, not how it’s moderated. It’s nearly impossible to get enough people addicted enough to your new platform to create a great ongoing conversation (just ask Donald Trump).

In any case, the vast majority of Twitter users won’t be able to make use of the open-source code, because code is usually difficult to read even for those who wrote it. Trust me.

However, there’s a third group that might be eager to see the open-source Twitter code: people who hope to use it to cheat the system so their tweets go viral. They, too, may well be disappointed, because using the code requires real-time metrics they won’t have, the ever-changing data the code relies on.

Even so, the open-source code can spell out the kinds of things that cause a specific tweet to be promoted. It may turn out, for example, that the number of followers you have matters, or that the number of retweets in the last four minutes matters, or that what matters is the product of these two things. But even knowing this, it would take a lot of work, and possibly a lot of money, to game the system in an effort to make a tweet go viral. Still, once the code is published and picked over, expect its weaknesses to be exploited for exactly this kind of gaming.
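To make the point concrete: the factors above could combine into a scoring rule like the following sketch. Everything here is invented for illustration — the field names, the weights, and the product-of-two-signals formula are assumptions, not anything Twitter has disclosed.

```python
# Purely hypothetical sketch of the kind of scoring rule described above.
# None of these signals or weights come from Twitter's actual code.

def toy_score(followers: int, retweets_last_4_min: int) -> int:
    """Rank a tweet by the product of two illustrative signals."""
    return followers * retweets_last_4_min

# Even with the formula in hand, a would-be gamer still needs the live
# inputs (the real-time retweet counts), which the published code alone
# would not provide.
tweets = [
    {"id": "a", "followers": 120, "retweets_last_4_min": 7},
    {"id": "b", "followers": 5000, "retweets_last_4_min": 1},
    {"id": "c", "followers": 300, "retweets_last_4_min": 20},
]
ranked = sorted(
    tweets,
    key=lambda t: toy_score(t["followers"], t["retweets_last_4_min"]),
    reverse=True,
)
print([t["id"] for t in ranked])  # → ['c', 'b', 'a']
```

Note how little the formula alone tells you: tweet "c" wins not because of any one input but because of their product, and the inputs themselves change minute to minute.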

Given that Musk has so far only floated the idea of open-sourcing the code, it’s unclear just how public Twitter’s algorithms would be. Would the release include things that could be hard-coded but could also be considered “data,” such as the list of words or phrases that, if used, are automatically censored or result in the disabling of accounts? Such a list would be particularly interesting to people who want it shortened or deleted entirely. I suspect that’s exactly the conversation Musk wants to encourage on the basis of free speech. If so, it will be a fight.
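The code-versus-data distinction the column raises can be shown in a few lines. In this hypothetical sketch, the filtering logic is trivially open-sourceable, while the blocklist itself is just configuration data that could be published or withheld independently; the entries below are invented placeholders, not real Twitter rules.

```python
# Hypothetical illustration of the "code vs. data" question: the logic
# is one line, and all the controversy lives in the data it consumes.

BLOCKLIST = {"spamword", "scamlink"}  # invented placeholder entries

def is_allowed(tweet_text: str, blocklist: set = BLOCKLIST) -> bool:
    """Return False if any blocklisted token appears in the tweet."""
    tokens = tweet_text.lower().split()
    return not any(token in blocklist for token in tokens)

print(is_allowed("check out this scamlink now"))  # → False
print(is_allowed("an ordinary tweet"))            # → True
```

Open-sourcing `is_allowed` reveals almost nothing; open-sourcing `BLOCKLIST` is where the free-speech fight would actually happen.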

If the platform is open to more use of offensive language, it probably won’t improve the atmosphere for the average user. It will primarily appeal to those seeking attention and outrage – Twitter users like Musk himself.

This column does not necessarily reflect the opinion of the Editorial Board or of Bloomberg LP and its owners.

Cathy O’Neil is a Bloomberg Opinion columnist. She is a mathematician who has worked as a teacher, hedge fund analyst and data scientist. She founded ORCAA, an algorithmic auditing firm, and is the author of “Weapons of Math Destruction.”

