Pirates open up CleanIT. But has it changed?

CleanIT is a public-private partnership project by several European governments, internet industry firms, and NGOs that has been subject to strong criticism. Opponents fear that CleanIT endangers civil rights and internet freedom while keeping its work hidden from public oversight. That secrecy has now been reduced.

Last weekend, the project officially published the latest draft of its working document. This was achieved through the efforts of Sven Clement, President of the Pirate Party of Luxembourg, and Pascal Gloor, Vice-President of the Pirate Party of Switzerland, who takes part in the CleanIT meetings. Pascal has spoken to the Pirate Times about the project before. The released document will be the basis for discussion at the group’s upcoming meeting on Monday, 5th of November, in Vienna. As of last Sunday night, the document is open for comments from the public. The publication of the draft no doubt increases transparency, especially since it has been published before the meeting. This is a considerable improvement, achieved through public and Pirate pressure.

Of course, transparency is merely a means to an end. Publication does not automatically improve the document. However, it does make the content known and allows for more direct democratic input, in the form of more qualified comment and criticism from an informed public and civil society.

A reading of the current document reveals that CleanIT has actually had a few of its teeth pulled. Real name enforcement is gone. The section on filtering reads much less like central censorship infrastructure. The education (or “awareness”) approach, which is arguably the only sustainable way to prevent people from falling for antidemocratic ideologies, has gained in relative importance as the total number of points was reduced.

Many of the ideas that led Cory Doctorow to call the paper the “stupidest set of proposed internet rules” ever, so clueless that they induced utter disbelief in every tech-savvy reader, have also been scrapped. For example, the part about the responsibility of internet companies has been toned down. In the leak from August, the idea had been floated that it should effectively count as economic aid to terrorism if a terrorist happened to use a company’s network or online service. Now the document explicitly says that there is no direct responsibility, but that companies should help where they can.

Still, the discussion is far from over, as many central points of criticism remain.

The group has explicitly tried to respond to criticism that it never properly defined the terms it is using. In particular, the vague definition of what constitutes terrorism has been seen as a loophole for boundless civil rights restrictions. However, the paper still contains two definitions of terrorism that partially contradict each other. Interestingly, one of them requires violence for something to count as terrorist activity. One might argue that this negates the whole point of the project: violence over the internet is not possible, unless you count Denial-of-Service attacks and trolling, which would be very far-fetched. In the end, CleanIT still fails to offer a single clear definition.

The section about filtering is still there, though it is no longer part of the “best practices”. It seems to take a clearer opt-in approach than the previously leaked draft. The paragraph now states that “Internet users should have the means to avoid being subjected to terrorist use of the Internet”. The acting party appears to be the internet user, who does not want to stumble upon “terrorist” content, or have the kids of the household do the same.

Users who want to censor their own internet access are not within the scope of criticism from CleanIT opponents. However, it is still unclear whether these “means” are to be implemented at the infrastructure level or within the user’s home, installed voluntarily by the users themselves. Filtering at the infrastructure level, for whatever well-meaning reason, opens the door to abuse, as is currently happening in Russia. As the text stands, the demand for an explicit exclusion of all blocking or filtering measures at the infrastructure level will remain.

In one very important respect, the current draft of the paper has not changed: it still entitles private enterprises to determine the lawfulness of content. Only if the company behind a social media platform or a hosting provider is unclear about the legality of something a user uploaded should an official legal opinion be called upon. While this discretion may be granted when it comes to a company’s own terms of service, the same discretion with regard to actual law is highly problematic and will surely still be seen as an assault on the rule of law. The document also mentions an approach that works via the terms of service. However, the need to do anything in this respect is very questionable, as many popular internet platforms already prohibit racist or otherwise discriminatory hate content. Those that prohibit nothing do so on purpose, and no non-legislative approach will turn around their uncompromising stance on free speech.

This last point also applies to flagging systems, which CleanIT proposes as a way to help users defend themselves against terrorist content. Popular platforms like Facebook and Twitter, and even many obscure forums, already offer the option to report a post to a moderator. Again, the sites that do not moderate probably refrain on principle or for lack of resources. Netizens will probably ask what effect merely writing down the status quo is supposed to have. Still, the “obligatory red button inside the browser” approach is gone, which eliminates another point on Cory Doctorow’s popular list. However, in the discussion points for Vienna, the flagging option for Voice-over-IP systems is still there and will still be subject to ridicule from the internet community, as it implies that people may accidentally find themselves in a phone call with a terrorist and therefore need such a panic button.

Some aspects also clearly show that the group is still not really comfortable with the new, more open approach. The document proclaims that each partner’s participation shall be published, albeit only on each partner’s own website. While this technically makes the membership public, it reads like an intentional attempt to obfuscate the matter and to make it harder for the public to compile a complete list of participants.

In addition, the most important part of CleanIT remains opaque: the very reason for its initiation in the first place. That is the claim that recruitment and radicalisation of children by terrorists over the internet actually constitutes a problem, and more so than in the offline world. It remains a bare assertion, backed by no criminal statistics, independent studies, or other primary sources.

As of today, the comment sections on the OpenCleanIT website remain relatively empty. Now that the current text is known, critics should take to the comment threads and voice their specific concerns. There are still ample reasons to have such concerns with the current text, and not making them heard even though the opportunity exists might give the wrong idea about the size and resolve of public opposition to the project.

Featured image: CC-BY-SA jeff_golden