
Screening algorithms used by Facebook – 2% effectiveness. Is it an open legal problem?

So far, users have uploaded an enormous 350 billion photos, and nearly 35 million people update their status every day on the Facebook platform. With such numbers, it is very difficult to verify the content posted on the platform, a considerable share of which violates community standards. To curb this dangerous trend, the company has had to adopt more technologically demanding solutions so that the platform operates in accordance with legal and ethical standards.

Facebook Community Standards

The aforementioned Community Standards are the key to fighting abuse on the platform. They are guidelines developed by Facebook’s employees that list what is allowed and what is forbidden. They are based on user feedback and expert advice in areas such as technology, public safety and human rights. They apply to all users, are valid worldwide, and cover all types of content. They are divided into categories concerning violence and illegal behaviour, safety, objectionable content, integrity and authenticity, and even the protection of intellectual property. Each category describes specific behaviours and content that are strictly prohibited, and the catalogue itself grows as new social problems emerge. The most significant and widespread problem with violations of these standards is hate speech.

Classifiers – screening algorithms

The primary weapon of the company’s content-moderation system is its screening algorithms, called classifiers, which are based on artificial intelligence. They comb through billions of posts looking for items that might match the company’s definitions of content that violates its rules. It is critically important to “teach” classifiers whether a specific post violates policies and needs to be removed. In some areas, Facebook’s classifiers work relatively well, but they often fall short in sensitive and controversial areas, especially where Facebook’s rules are complex and cultural context matters. In these cases, the company relies on thousands of reviewers around the world to enforce the Community Standards and resolve users’ complaints.

In general, classification algorithms are used to assign data to a class or category. Classification can be performed on both structured and unstructured data, and comes in three types: binary classification, multiclass classification and multilabel classification (illustrated in the sketch below). Common classifiers include the perceptron, naive Bayes, decision trees, logistic regression, k-nearest neighbours, artificial neural networks / deep learning, and support vector machines.
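As a rough illustration of these three settings, the toy Python snippet below shows how the label structures differ. The post texts and category names are invented for the example and are not Facebook’s actual taxonomy.

```python
# Toy examples of the three classification settings (hypothetical labels,
# not Facebook's real category taxonomy).

posts = ["post A", "post B", "post C"]

# Binary classification: exactly two possible classes per item.
binary_labels = ["violating", "ok", "ok"]

# Multiclass classification: one class per item, chosen from three or more.
multiclass_labels = ["hate_speech", "spam", "ok"]

# Multilabel classification: each item may carry several classes at once.
multilabel_labels = [["hate_speech", "harassment"], ["spam"], []]
```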

Platforms such as Facebook often use machine learning classifiers, i.e. the algorithms themselves – the rules used by machines to classify data. Supervised and semi-supervised classifiers are fed training datasets, from which they learn to classify data according to predetermined categories, as the sketch below shows.
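The following is a minimal sketch of such a supervised text classifier, using scikit-learn’s naive Bayes on a handful of invented training posts. Real moderation systems train far more complex models on vastly larger labelled datasets; this only demonstrates the learn-from-labelled-examples principle.

```python
# Minimal supervised text classifier: a bag-of-words naive Bayes model
# trained on a tiny invented dataset (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_posts = [
    "I hate this group of people, they should disappear",
    "Buy cheap followers now, click this link",
    "Had a great weekend hiking with friends",
    "These people are vermin and deserve nothing",
    "Free money, limited offer, click here",
    "Look at the photos from our family trip",
]
train_labels = ["hate_speech", "spam", "ok", "hate_speech", "spam", "ok"]

# Vectorize the text and fit the classifier in one pipeline.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_posts, train_labels)

new_post = ["click here for free money"]
print(model.predict(new_post))        # predicted category, e.g. ['spam']
print(model.predict_proba(new_post))  # per-class confidence scores
```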

Are screening algorithms reliable?

There are hopes that artificial intelligence systems can combat the growing problem of hate speech and violence, and keep underage users off the platform, but the situation is not entirely optimistic. The company’s AI cannot consistently identify first-person shooting videos, racist rants or even the difference between cockfighting and car crashes. When the algorithms are not certain that content violates the rules, the posted material is shown less often, but it remains on the platform and its author goes unpunished. Statistics show that such cases happen very often. In 2019, one of Facebook’s senior engineers estimated that the algorithms removed posts that generated only 2% of the views of hate speech on the platform that violated its rules. Effectiveness of only a few percent leaves open the question of proper legal protection for users of social networking sites. Such a level of effectiveness creates additional arguments for strengthening the protection of private users’ data on the Internet, and provides further justification for treating as an open legal problem the violation of privacy online and the proper protection and management of the data users enter into applications, which by their nature should be safe for all users.
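The remove-or-demote behaviour described above can be sketched as a simple confidence-threshold rule. The thresholds below are invented for illustration and are not Facebook’s actual values.

```python
# Hypothetical decision rule for a moderation pipeline: remove content only
# when the classifier is confident, demote (show less often) when uncertain.
def moderation_action(violation_score: float,
                      remove_threshold: float = 0.95,
                      demote_threshold: float = 0.60) -> str:
    """Map a classifier's violation score in [0, 1] to an action.

    Thresholds are illustrative assumptions, not Facebook's real settings.
    """
    if violation_score >= remove_threshold:
        return "remove"   # high confidence: take the post down
    if violation_score >= demote_threshold:
        return "demote"   # uncertain: distribute the post less widely
    return "keep"         # low score: leave the post untouched

print(moderation_action(0.97))  # -> "remove"
print(moderation_action(0.75))  # -> "demote"
```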

Sources:

1. Deepa Seetharaman, Jeff Horwitz, Justin Scheck, “Facebook Says AI Will Clean Up the Platform. Its Own Engineers Have Doubts”, Wall Street Journal, 17 October 2021, access: 05.11.2021.

2. Matt Ahlgren and WSHR Team, “35+ Facebook Statistics and Facts for 2021”, 18 August 2021, online: https://www.websiterating.com/pl/research/facebook-statistics/#chapter-1, access: 05.11.2021.

3. Facebook, “Facebook Community Standards”, online: https://transparency.fb.com/pl-pl/policies/community-standards/, access: 05.11.2021.
