EU copyright reform: Filters fail frequently

Let’s not put dumb robots in charge of what we can and can’t say online, writes Julia Reda.


By Julia Reda

26 Mar 2018


Should we force internet platforms like Facebook or Wikipedia to automatically pre-screen all of their users’ acts of expression for copyright infringement? This question lies at the heart of the controversy around Article 13 of the proposed EU copyright reform, which would establish ‘upload filters’ to do just that.

Several platforms already employ similar technologies voluntarily. Investigating their record lays bare the shortcomings and pitfalls of such upload filters.

Just this month in Germany, ‘PinkStinks’, a campaign protesting the perpetuation of gender stereotypes in children’s toys, saw one of its videos removed by an automated system that claimed the video infringed a TV station’s copyright. What the filter couldn’t tell was that it was in fact the TV station that had used an excerpt from the campaign video in one of its broadcasts - not the other way around.


Why, then, was the campaign video removed? YouTube, the platform the activists had used to share their video, had given the station direct control over its upload filter: the station was able to add material, without any oversight, to the database of supposedly copyrighted works that the filter would then block from appearing online. The station used this opportunity to automatically register everything that went on air - including third-party works such as the campaign video.
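To make the oversight gap concrete, here is a minimal sketch of how such a reference-database filter operates, assuming for illustration a simple fingerprint lookup. Every name here (UploadFilter, register_reference, screen_upload) is hypothetical and does not describe any platform’s actual system:

```python
# Hypothetical sketch of a reference-database upload filter.
# All names are illustrative; this is not any platform's real code.

class UploadFilter:
    def __init__(self) -> None:
        # Maps a content fingerprint to whoever registered it as theirs.
        self.reference_db: dict[str, str] = {}

    def register_reference(self, rightsholder: str, fingerprint: str) -> None:
        # Nothing here verifies that the registrant actually owns the work -
        # the oversight gap at the heart of the PinkStinks case.
        self.reference_db[fingerprint] = rightsholder

    def screen_upload(self, fingerprint: str) -> str:
        claimant = self.reference_db.get(fingerprint)
        if claimant is not None:
            # A match alone triggers the block; the uploader can only
            # dispute it after the fact.
            return f"BLOCKED - claimed by {claimant}"
        return "PUBLISHED"

upload_filter = UploadFilter()
# The station bulk-registers everything it broadcast, including a
# third-party campaign video it had itself excerpted.
upload_filter.register_reference("TV station", fingerprint="campaign-video")
# The original creator's own upload now matches and is blocked.
print(upload_filter.screen_upload("campaign-video"))
```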

This case demonstrates how easily systems that lack human oversight make serious mistakes, and how freedom of expression is threatened by the systemic bias that upload filters inevitably have in favour of large corporate rightsholders. In the court of the upload filter, individual creators such as the PinkStinks campaigners are guilty until proven innocent.

They are forced to expend considerable effort to get their legitimate expression reinstated, while rightsholders face zero consequences for mistakes and thus have no incentive to ensure they don’t accidentally order the deletion of legal content.

A similar incident from January makes for an even more ridiculous tale: an upload of white noise - a hissing sound produced by a random signal and thus ineligible for copyright protection - was ‘identified’ by YouTube’s upload filter as containing no fewer than five copyrighted works. Here, too, companies that had been allowed to ‘feed’ the upload filter with reference copies of protected works had misused that access, yet nobody caught it until the affected user complained publicly.

In another case, a recording of a Harvard Law School lecture - on, of all subjects, copyright - was taken down by a filter because the professor had illustrated some points with short extracts of pop songs. Of course, the professor’s educational use of these samples was perfectly legal - but the filter had no way of knowing that.

The lesson: There is no way for an automated filter to determine whether a specific use of copyrighted material is covered by an exception. Copyright exceptions and limitations, of course, are essential in ensuring we can exercise the human rights of free expression and of taking part in cultural life.

Exceptions allow us to quote from works, to create parodies and to use copyrighted works for educational purposes. Upload filters make our ability to exercise these fundamental rights a mere afterthought to the enforcement of intellectual property claims, even when those claims are dubious.
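The structural problem can be stated in a few lines of code: the only signal such a matcher consumes is how closely an upload resembles a reference file, while the legality of a use turns on context the system never sees. The sketch below is an illustrative assumption, not any vendor’s real logic:

```python
# Hypothetical similarity-only decision; the function name and the
# 0.8 threshold are illustrative assumptions, not a real system's API.

def is_infringing(similarity_score: float, threshold: float = 0.8) -> bool:
    # The sole input is the degree of match against a reference file.
    # Nothing represents purpose, amount used or commentary - the very
    # factors that decide whether a quotation, parody or educational
    # exception applies.
    return similarity_score >= threshold

# A lecturer's brief quotation and a wholesale copy both clear the
# threshold, so the filter treats them identically.
lecture_excerpt = {"similarity": 0.83, "purpose": "education"}
wholesale_copy = {"similarity": 0.97, "purpose": None}

for upload in (lecture_excerpt, wholesale_copy):
    print(is_infringing(upload["similarity"]))  # True both times
```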

The list of examples of filter failures goes on and on: automated systems have removed documentation of human rights abuses in Syria for resembling extremist content. They have claimed to find musical works in a recording of a purring cat. They have even taken down a video of a speech here in the European Parliament for no defensible reason.

To make things worse, those examples all come from the best filtering technology money can buy: Google has reportedly invested tens of millions of euros and years of engineering effort into the software that made all of these erroneous judgement calls.

Under the proposed rules, hundreds of European start-ups and other smaller platforms would also need to implement filters. It is foreseeable that they will end up with technology even more prone to mistakes - assuming the required costs don’t put them out of business altogether, ceding yet more market share to the US internet giants that can afford to comply.

These cases should serve as warnings to those pushing for upload filters, namely the European Commission, several member state governments in the Council and some of my colleagues in Parliament. We can’t let automatic filters become the arbiters of what we can and can’t say on the internet.

Filters will fix far fewer problems for rightsholders than they will create for the people of Europe. Even where they work as intended - which, as demonstrated, they regularly don’t - they are a wildly disproportionate measure, effectively functioning as censorship machines.

 
