The Algorithms Are Broken

No popular social media platform is willing to hire enough humans to monitor what their users post in real time to prevent anyone from ever encountering content they might find offensive. That's a good thing, because I can't imagine anyone wanting to use such a platform. It would likely be so sterile that users would quickly lose interest. At the same time, no popular social media platform is willing to fully embrace the free expression of ideas and stop policing the speech of its users entirely. If one did, it would become a haven for child pornography within hours. Nobody wants that either.

In an effort to find a middle ground, Twitter and Facebook use algorithms to detect content that violates their unacceptably vague and inconsistently enforced policies. And yes, both the vague policies and the haphazard manner in which they are enforced are problems. We have all seen countless examples of these algorithms leading to accounts being suspended for no good reason. It is rarely clear to the affected users why their accounts were suspended or how to navigate the bizarre appeal process to have them reinstated.

As someone who values the free expression of ideas, I find much about this state of affairs troubling. For example, an inordinate number of atheists seem to have their accounts suspended when someone deems them overly critical of religion. This seems especially common when the religion involved is Islam. It is not clear to me whether this happens because Muslims are quicker to complain than other groups (which seems unlikely) or because the companies involved are especially concerned with appeasing Muslims (which seems more likely). In any case, atheists and other unpopular groups against which bigotry is still socially acceptable are likely to be penalized by this sort of thing.

Aside from the notion that almost everybody seems to find it desirable to police the speech of those with whom they disagree, what bothers me most are the procedures by which this policing is implemented. Specifically, I object to how unclear these companies are about identifying the problem and providing a path for resolving it. Imagine that you are the parent of a young child who has done something of which you disapprove. You punish the child but refuse to explain what prompted the punishment. How effective is that going to be in deterring similar behavior in the future? The child has no idea what to change because you will not explain it. This is no different from how these companies operate.

I have not personally run into any trouble with Facebook or Twitter (yet), but I have encountered this nonsense from Google a few times. One example involved receiving an email from AdSense telling me that my entire blog was about to be removed from their service because my content violated their terms. After far too much back-and-forth, because they would not tell me what the problem was, I was finally able to determine that they were talking about one post. I read the post repeatedly, and it did not contain anything remotely objectionable. It dealt with a topic some find controversial (male circumcision), but it was clean and non-inflammatory. I read it alongside their policies and found nothing approaching a violation.

I temporarily removed the post so I could continue to use AdSense. Once things seemed to be okay, I restored it minus one inappropriate comment that had been left on it. I suspect that comment was the cause of the entire incident, but that is just a guess. Google would never give me a clear explanation of what they objected to or how I could fix it. That's a problem.

I just went through an even stranger situation with Google that had nothing to do with Atheist Revolution. I have a Google My Business listing for a local service provided to the community through the university where I am employed. I set it up to make sure we would appear in local business directories, local search results, Google Maps, and so on. I never received any communication from Google about a problem, but when I accessed the associated Google+ page, I found it had been suspended. There had been no warning, and there was no explanation of what the problem was. After some investigation, all I could find was a vague mention that it had violated "quality standards." How helpful! I submitted an appeal, and everything was restored within two days.

I understand that algorithms are likely necessary; however, these platforms need to provide users with clear information about the nature of the problem and what is required to fix it. Ideally, the algorithm should trigger a review by a human who is capable of explaining both the problem and the solution. Of course, consistent enforcement would be nice too.