Facebook flagging articles from Axios, NY Post as spam

Facebook is trying to stop spam from getting into user feeds. The idea isn’t a bad one, because spam and fake accounts infest the website like flies on a rotting carcass. But what happens when Facebook sets up an arbitrary algorithm that declares almost everything spam? The answer is a lot of headaches for users.


I first noticed it this morning when Jeremy Kolassa from Cato wrote that a link to Axios was flagged as spam by Facebook. He then got another spam notification, this time on a link from Mediaite. Right Online co-founder Eliza Vielma had a personal blog post commemorating her mother removed and marked as spam.

New York Post columnist Salena Zito wrote on Twitter that her column was briefly flagged as spam on Facebook.

https://twitter.com/SalenaZito/status/1032627858290745344

Other people pointed out that links from AirForceTimes.com, radio station contests, and YouTube links to songs were getting the spam notification.

One friend of mine had a post in a private group pulled down. The irony? My friend’s group is all about combating spam and fake accounts on Facebook. Way to fail, Zuckerberg!

I reached out to Facebook to ask what was going on, but have yet to hear back. I’ll update this post if I do get a statement, although who knows if they’ll say anything.

My theory is that Facebook introduced a new algorithm to crack down on spam, but the program is going haywire because it was put together improperly. I doubt it’s an act of censorship: Mediaite tends to skew middle of the road, the flagged link was about someone from the Cato Institute frustrating Tucker Carlson, and Air Force Times focuses on military issues. It’s always possible Facebook *is* trying to go after conservatives and libertarians, but this strikes me as more of an honest mistake than anything else.


The solution, at least for Facebook, is to stop relying on computer algorithms to determine what is or isn’t spam. Someone sharing a link to a personal blog post about dealing with their mother’s death probably isn’t problematic. The same goes for someone posting YouTube videos of songs or a link to a news story. Algorithms programmed to look for specific words end up catching innocent people instead of actual spammers (see the NSA’s list of words that may flag you as a terrorist). Humans can easily figure out whether or not someone is spamming links. Whether Facebook actually fixes the issue by hiring human monitors is anyone’s guess, and I doubt it will happen because algorithms, in theory, cost less than people.
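To illustrate that failure mode, here is a rough, hypothetical sketch (not Facebook’s actual system) of a keyword-based filter. Because it matches words instead of judging context the way a human would, it flags an innocent radio station contest post right alongside real spam:

```python
# Hypothetical illustration only -- not Facebook's actual spam filter.
# A naive keyword blacklist catches innocent posts along with real spam.

SPAM_KEYWORDS = {"free", "winner", "contest", "click here"}

def naive_is_spam(post: str) -> bool:
    """Flag a post as spam if it contains any blacklisted keyword."""
    text = post.lower()
    return any(keyword in text for keyword in SPAM_KEYWORDS)

posts = [
    "Click here for your FREE iPhone, winner!!!",                # real spam
    "Our radio station contest winners are announced tonight",   # innocent, but flagged
    "A blog post remembering my mother",                         # innocent, not flagged
]

for post in posts:
    print(naive_is_spam(post), "-", post)
```

A human moderator would clear the contest announcement in a second; the keyword rule cannot.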

I do want to point out that Facebook fixed the issue after people started complaining about it, not because some politician threatened government action. It shows the free market does work, because Facebook has to answer to its users and its stockholders. The company won’t survive if people get so irritated by its limits that they move to MeWe or some other site. Facebook’s reluctance to explain why the problem happened is probably another reason users are getting frustrated.
