
Video-Sharing Platforms Censure Content

Legal - September 4, 2024

The Audiovisual Media Services Directive imposes certain content-related obligations on video-sharing platform (VSP) providers. These include protecting minors and the general public from harmful content in programmes, user-generated videos and audiovisual commercial communications (ACC).

Additionally, VSP providers must comply with the obligations regarding the ACC they themselves control (market, sell or arrange), as well as ACC controlled and uploaded by others.

In particular, ACC shall be readily recognisable as such, so surreptitious ACC are forbidden; they shall not use subliminal techniques, disrespect human dignity, cause discrimination, or encourage behaviour prejudicial to health, safety or the environment; and ACC for tobacco products and electronic cigarettes are prohibited.

ACC for alcohol shall not be aimed specifically at minors and shall not encourage immoderate consumption.  ACC for medicinal products available only on prescription shall be prohibited.

Finally, VSP providers must further protect minors from ACC and reduce children's exposure to ACC for foods and beverages high in fat, trans-fatty acids, salt or sodium, and sugars (HFSS).

This monitoring by VSP providers is widely known as the “Good Samaritan” approach: exercising editorial control over user-generated content does not, by itself, make the VSP provider liable for unlawful content posted by users.

In order to comply with these obligations to monitor, demote, block, remove or exclude content, VSP providers can deploy automated algorithms for online filtering or moderation. In September 2020, the European Parliament's Directorate-General for Internal Policies published a study on the impact of such “upload filters”.

Automated filtering technologies include metadata searching, hashing and fingerprinting; blacklisting; advanced natural language processing techniques; and AI-based techniques to identify text or images.
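As a rough illustration of the simplest of these techniques, the Python sketch below matches an uploaded file against a hypothetical blacklist of SHA-256 hashes. The blacklist entry and the helper names are assumptions made for illustration, not any platform's actual list or API.

```python
import hashlib

# Hypothetical blacklist of hashes of known unlawful files (illustration only).
BLACKLISTED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(content: bytes) -> str:
    """Return the SHA-256 hex digest of an uploaded file's bytes."""
    return hashlib.sha256(content).hexdigest()

def matches_blacklist(content: bytes) -> bool:
    """Exact hash matching: flags only byte-identical copies of known items.

    Perceptual hashing or fingerprinting (not shown) would be needed to catch
    re-encoded or slightly edited copies.
    """
    return sha256_of(content) in BLACKLISTED_HASHES

if __name__ == "__main__":
    upload = b"some uploaded video bytes"
    print("blocked" if matches_blacklist(upload) else "admitted")
```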

Moderation actions include admitting or rejecting content, editing it, commenting on it, prioritising or demoting it, and synthesising it.

Filtering can be centralised, through a single unit and/or uniform policies applied across the VSP, or decentralised. In terms of timing, it can be ex-ante (before the content becomes available on the VSP) or ex-post. If it takes place after a recipient has raised an issue with the content, it is reactive; otherwise it is proactive.
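These dimensions, together with the moderation actions listed above, can be pictured as a small taxonomy. The Python sketch below is only an illustrative model; the class and field names are assumptions, not terms taken from the Directive or the study.

```python
from dataclasses import dataclass
from enum import Enum

class Scope(Enum):
    CENTRALISED = "single unit / uniform policies across the VSP"
    DECENTRALISED = "distributed decision-making"

class Timing(Enum):
    EX_ANTE = "checked before publication"
    EX_POST = "checked after publication"

class Trigger(Enum):
    REACTIVE = "prompted by a recipient's report"
    PROACTIVE = "initiated by the platform itself"

class Action(Enum):
    REJECT = "refuse admission"
    EDIT = "modify the content"
    COMMENT = "attach a notice or label"
    DEMOTE = "lower ranking or visibility"
    SYNTHESISE = "aggregate or summarise"

@dataclass
class ModerationPolicy:
    scope: Scope
    timing: Timing
    trigger: Trigger
    default_action: Action

# e.g. an ex-post, report-driven policy applied uniformly across the platform
policy = ModerationPolicy(Scope.CENTRALISED, Timing.EX_POST, Trigger.REACTIVE, Action.DEMOTE)
print(policy)
```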

Filtering systems are probabilistic, so errors in both directions are possible: admitting illegal content (false negatives) and demoting lawful, valuable content (false positives). However, because the algorithms are built and evaluated against human-defined ground truth, human responsibility and control remain possible.
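The trade-off can be sketched as a simple thresholding rule over an assumed classifier score. The thresholds (0.9 and 0.6) and the function name below are hypothetical, chosen only to show how false positives, false negatives and human review interact.

```python
def route(score: float, block_threshold: float = 0.9, review_threshold: float = 0.6) -> str:
    """Route content based on an assumed probability that it is unlawful.

    score >= block_threshold  -> blocked automatically (risk: false positives,
                                 i.e. demoting lawful, valuable content)
    score >= review_threshold -> queued for human review (keeps a person in the loop)
    otherwise                 -> admitted (risk: false negatives, i.e. admitting
                                 illegal content)
    """
    if score >= block_threshold:
        return "block"
    if score >= review_threshold:
        return "human_review"
    return "admit"

for s in (0.95, 0.7, 0.2):
    print(s, "->", route(s))
```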

In order to minimise the effect of errors, filtering algorithms should include appeal and redress mechanisms (contestable rather than non-contestable filtering). For these to be meaningful, content creators should be informed that such algorithms exist, how they operate and, where applicable, that their own content has been affected. Furthermore, end users should also be informed of the existence and modus operandi of the applied filtering technologies.
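One way to picture contestable filtering is as a decision record that carries the information a creator must receive and the state of any appeal. The Python sketch below is an assumption-laden illustration of that idea, not a description of any platform's actual redress workflow.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class FilteringDecision:
    """Record supporting a contestable (appealable) filtering decision."""
    content_id: str
    action: str                          # e.g. "demote", "block", "remove"
    reason: str                          # grounds communicated to the creator
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    creator_notified: bool = False
    appeal_filed: bool = False
    appeal_outcome: Optional[str] = None  # "upheld", "reversed", or None while pending

    def notify_creator(self) -> None:
        # In practice this would send the creator the reason and appeal instructions.
        self.creator_notified = True

    def file_appeal(self) -> None:
        # An appeal only makes sense once the creator knows the decision exists.
        if not self.creator_notified:
            raise RuntimeError("Creator must be informed before an appeal can be filed.")
        self.appeal_filed = True

decision = FilteringDecision("video-123", "demote", "suspected surreptitious ACC")
decision.notify_creator()
decision.file_appeal()
print(decision)
```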

Sound regulation of filtering must also take into account smaller players that, for economic and/or technological reasons, do not have the same access to automated filtering algorithms.

Four European-level court decisions, three by the Court of Justice of the European Union and one by the European Court of Human Rights, have confirmed the Good Samaritan approach. The three CJEU rulings are the 2014 Google Spain decision, the 2017 Ziggo decision and, most interestingly, the 2019 Glawischnig-Piesczek decision, which confirmed the admissibility of injunctions ordering providers to remove and block unlawful information. The fourth is the 2015 Delfi decision of the European Court of Human Rights, in which an Estonian news portal was held liable for failing to remove online expressions of hate.

Questions arise as to whether VSP providers should go further than what is strictly required by the Audiovisual Media Services Directive, for instance by demoting content which is not unlawful. This opens the door to arbitrariness. Needless to say, the line between these obligations imposed by EU law and freedom of expression is very thin and grey.
