Content moderation on Facebook is still a sensitive topic, and in the past few hours a new controversy has been added. As reported by The Verge, the social network suffered a bug that caused many posts that would not normally pass the moderators' filter, such as fake news and violent content, to be released to the public as if they were normal.
The problem was serious enough that roughly six months passed between its discovery and its fix. The outlet reports that Meta engineers detected the issue in October, but they weren't able to fix it until March 11.
According to Facebook's internal investigation, the bug affected the system that ranks posts in the News Feed. Instead of demoting posts that moderators had already flagged as inappropriate, the bug caused them to be displayed normally. This even led to a 30% increase in views of these posts globally.
But what is really striking is that the problem was apparently present in Facebook's systems for a long time before it was discovered. The bug reportedly dates back to 2019, but it went unnoticed by developers until it started causing noticeable disruptions.
Another detail in the report indicates that the increase in views of inappropriate content was not limited to fake news. It also impacted the technology responsible for detecting photos and videos containing graphic violence and nudity, and made the blocking of Russian media publications after the invasion of Ukraine less effective than it should have been.
Facebook content moderation has been affected by a bug
The Verge's report quotes a Facebook spokesperson, who acknowledged the bug and said it had already been fixed. According to the spokesperson, the bug had no "significant and lasting" impact on the metrics handled by the social network founded by Mark Zuckerberg.
However, this incident has revived the debate over the methods and systems Facebook uses to downgrade the visibility of posts it deems inappropriate. The platform has long boasted of deploying ever more technology to combat misinformation; however, it has also come under heavy criticism for the serious failings it has shown in recent years in the face of fake news and malicious content related to politics (Trump and the US elections, for example), to health (COVID-19 and anti-vaccine movements), and to many other sensitive issues brought to light by the Facebook Papers.
It should also be remembered that the manual content moderation on the social network is carried out by outsourced workers; the same people who, in recent years, have called for better working conditions. Those complaints don't seem to have yielded much either, despite Meta's stated commitment to relying more on artificial intelligence for those tasks.
It is true that the case we are discussing here does not seem to warrant being called negligence or malice. After all, there isn't a company that hasn't had to address a bug affecting its services. But it is also true that, given the information flows Facebook manages and the technical and economic means at its disposal, it should be better prepared than anyone to deal more quickly with a shortcoming of this type.