Do not travel abroad unless absolutely necessary!

Wednesday, Nov 11, 2020

Yesterday the Court of Justice of the European Union heard case C-401/19 Republic of Poland v European Parliament and Council of the European Union, in which the Polish government asks the CJEU to annul the upload filtering provisions in Article 17 of the DSM directive. While we had not really taken this case seriously (it seemed more a domestic political gesture of the Polish government than a serious effort to protect fundamental rights) and did not pay much attention to it, the case has recently become more interesting as it forces the parties involved to openly position themselves in the ongoing disputes about the correct implementation of Article 17. In that sense the hearing did not disappoint, as i have written up here.

Writing this report turned out to be a little adventure. Given the stubborn refusal of the court to stream hearings and other public sessions, it required someone to be present at the hearing in Luxembourg. Given the pandemic-related travel restrictions it became pretty clear that among the people working on Article 17 on our side i was the best positioned to go to Luxembourg, which still meant a 4.5 hour drive in each direction. In the end i drove down the evening before, stayed overnight in a hotel[1] and drove straight back to Amsterdam after the end of the hearing.

While this was quite a bit of trouble (i really do not like driving!), in the end it was worth the effort. Had i not been there to report on the hearing, the only reporter would have been a writer for a subscription-only business intelligence platform (plus a handful of lawyers observing for corporate stakeholders on the public tribune). The CJEU would certainly do itself a big favour if it streamed such hearings and other public sessions. In the long run, having the European public represented by a hack writing for a business intelligence service and a civil society operative with a dog in the fight does not seem good for the transparency of the judiciary.

  1. Staying in a hotel for work is something i have not done for what feels like a very long time. And while the whole experience, with every item that you could possibly touch being wrapped in protective paper envelopes, felt slightly surreal, the mere act of sleeping in a hotel bed made me feel like a real human being again. ↩︎

Barbed wire on the Internet prairie

Tuesday, Oct 27, 2020

The French approach to the digital policy space never ceases to amaze[1]! Yesterday i stumbled across a paper ("Barbed wire on the Internet prairie: against new enclosures, digital commons as drivers of sovereignty") published in July on the blog of the Digital Diplomacy team of the Ministère de l’Europe et des Affaires étrangères. As implied by the title, this paper embraces the concept of the digital commons as a mechanism for advancing Europe’s strategic sovereignty in the digital space:

Insofar as the development of digital commons is relatively absent from sovereignty policies at the European level, it is necessary to identify the resources likely to be jointly managed and exploited, while raising awareness among our partners, particularly European ones, of the strategic dimension of digital commons, in order to mobilize them accordingly.

The purpose of this article is therefore not to define the scope of digital commons in a technical, economic or political perspective, but rather to reflect on their strategic potential for Europe, within a digital world dominated by private monopolistic players, and driven by the structuring rivalry between China and the United States.

This is of course a much more pointed version of the argument that we have been making in our Vision for a Shared Digital Europe in which cultivating the Commons features as one of the core principles of building a shared digital Europe.

In this context it is interesting to see a paper published by the French Government[2] suddenly (and quite forcefully) position the digital commons as a core element of a future EU digital strategy:

This logic of commons is perfectly aligned with the values and vision of the digital space defended by France and promoted to our European partners and beyond: a safe, open, unique and neutral space. In addition, because they directly defend a model and priorities which are also those of the EU (preserving general interest, fair competition, net neutrality, personal data protection, environmental sustainability, etc.), digital commons should also become one of the pillars of a European sovereignty policy, from which they have so far been absent.

And it is even more welcome that the paper also calls for investment into building a sustainable digital commons in Europe:

[…] This shows the urgent need to protect and therefore guarantee the sustainability, especially economic, of digital commons projects; their non-rival characteristic and lack of inclination to capital accumulation makes it difficult to finance them nor make them profitable. This would imply the creation of a support fund for existing digital commons, along the lines of the EU-FOSSA project. This fund could be fueled by European private and public players to start with, before being potentially extended to any other actor sharing our concerns.

[…] In addition, it may be possible to create a European foundation for the digital commons, an entity that would be responsible for managing the financing mentioned above, but which could also host and support new initiatives (through legal advice, labeling, hackathons and code sprints, calls for projects, etc.). In order to counter possible attempts at recapitalisation, looting or exclusive capture, it could ensure that licences are respected, but also establish possible transfers of ownership and therefore of responsibilities – financing, governance, optimisation, etc. – within itself.

Lastly, the European strategy in this field should include an international component. Our vision of digital sovereignty is non-hegemonic and this sovereignty must therefore show how it fits with a concept of international governance which guarantees a “free, open and safe” digital world through multilateralism – as a mutual and mutually accepted constraint. The commons are, here again, useful in guaranteeing open digital infrastructures – be it against attacks on confidence and security in cyberspace (according to the Paris Call wording) but also against risks created by political control, technological mastery or financial domination.

Unfortunately the paper feels a bit like a one-off effort to launch an idea, which is a pity since the underlying idea is a sound one. Making a strong digital commons part of EU digital policy would be a strategic choice that would set it apart from the current attempts, which are not fundamentally different from the (US-dominated) approach to the digital space. And having the French government as an ally in this fight would certainly be welcome.

  1. Or for that matter delight: the paper discussed in the post says of platform companies that “they phagocytize value creation”. Apparently phagocytization refers to the process of phagocytosis, which (in biology) is “the engulfing and destruction of particulate matter, such as a bacterium, by a cell”. ↩︎

  2. Unfortunately the status of the paper (which also does not indicate an author) is rather unclear. ↩︎

Data: a collective resource subject to democratic ordering?

Monday, Oct 26, 2020

I really enjoyed reading this short essay by Salome Viljoen about moving beyond property or dignity claims about data production and towards democratising data governance. This is an excellent primer for anyone interested in understanding discussions about the governance of (personal) data that does a pretty good job at describing the two prevalent schools of thought. On the one side the data as property approach:

Propertarian reforms diagnose the source of datafication’s injustice in the absence of formal property (or alternatively, labor) rights regulating the process of production.

And on the other side an individual rights-based approach that she calls “dignitarian”:

The second type of reforms, which I call dignitarian, take a further step beyond asserting rights to data-as-property, and resist data’s commodification altogether, drawing on a framework of civil and human rights to advocate for increased protections. Proposed reforms along these lines grant individuals meaningful capacity to say no to forms of data collection they disagree with, to determine the fate of data collected about them, and to grant them rights against data about them being used in ways that violate their interests.

I am definitely more in the “dignitarian” camp here, but i also share her analysis of the shortcomings of this approach and her proposal to transcend these opposing approaches in favor of one rooted in collective rights:

Rather than proposing individual rights of payment or exit, data governance should be envisioned as a project of collective democratic obligation that seeks to secure those of representation instead.

[…] What these shortcomings suggest is that alternative conceptions of the data political economy are needed. Such alternatives must be resistant to private market governance of the data political economy, attentive to the structural incentives at the root of data extraction, and responsive to the wealth accumulation, privacy erosion, and reproduction of social oppression it facilitates.

One path forward reconceives data about people as a democratic resource. Such proposals view data not as an expression of an inner self subject to private ordering and the individual will, but as a collective resource subject to democratic ordering.

The framework that she proposes here (data as a collective resource subject to democratic ordering) makes a lot of sense (maybe even more than the personal-data-as-a-commons approach that is fairly popular in my circles at the moment).

In order to understand what this would mean in practice, one does not need to look further than the Facebook controversy du jour, in which Facebook tries to prevent researchers from understanding the impact of political ads in the name of protecting the privacy of its users. This controversy perfectly illustrates the limitations of an individual rights-based approach to data ownership and provides a case study of why treating personal data as a collective resource subject to democratic ordering would make a meaningful difference.

Vergütung über alles

Tuesday, Oct 20, 2020

Turns out that i am not the only German who likes to make flowcharts about Article 17. Someone in the German Ministry for Justice and Consumer Protection (BMJV) has made a flowchart that depicts how the German implementation of Article 17 would work in practice:

Öffentliche Wiedergabe und Vergütungen (click for original)

As a flowchart I quite like this both in terms of execution (the use of ✋ and ⚙️ symbols to indicate automatic or human interventions) and in terms of the mechanism proposed, which comes pretty close to the ambition to avoid the use of automated blocking of uploads as much as possible. In line with the “vergütung über alles” principle that animates German copyright law the proposed mechanism seeks to avoid automated blocking via a cascade of remuneration mechanisms (licensing, remunerated minor uses, remunerated uses under the pastiche exception). While there remains the possibility for uploads to be blocked at the request of rightholders (if they are not covered by an exception) this mechanism is probably as close as a national legislator can get to turning the Article 17 right into a remuneration right.
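As i read it, the flowchart's remuneration cascade boils down to a decision procedure along the following lines. This is a hypothetical sketch: the attribute names, return values and exact ordering are my reading of the flowchart, not the text of the Referentenentwurf or any official specification.

```python
from dataclasses import dataclass


@dataclass
class Upload:
    matches_protected_work: bool = False  # hit from automated content matching
    covered_by_license: bool = False      # platform has a licence for the work
    is_minor_use: bool = False            # §6 minor use, checked automatically (⚙️)
    flagged_as_legitimate: bool = False   # pre-flagged by the uploader (✋)


def handle_upload(u: Upload) -> str:
    """Sketch of the flowchart's cascade: try every remuneration route
    before blocking is even considered."""
    if not u.matches_protected_work:
        return "publish"                       # no rightsholder match, nothing to do
    if u.covered_by_license:
        return "publish and remunerate"        # licensing resolves the match
    if u.is_minor_use:
        return "publish and remunerate"        # remunerated minor use instead of blocking
    if u.flagged_as_legitimate:
        return "publish pending human review"  # e.g. a claimed pastiche use
    return "block at rightholder request"      # only reached at the end of the cascade
```

The point of the sketch is the ordering: blocking sits at the very bottom, which is what makes the flowchart version so much closer to a remuneration right than to a blocking regime.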

Unfortunately there is a major problem with the accuracy of the flowchart: it does not accurately depict the provisions of the implementation law proposal (Referentenentwurf) published by the same ministry. Contrary to what the flowchart suggests, in the Referentenentwurf the determination of whether a use qualifies as a minor use (§6) is not automatic and does not happen before the uploader can pre-flag the use of a matched work as legitimate. Without an automated minor use check, the whole proposal loses most of its appeal[1].

So while it is clear that the mechanism depicted in the flowchart is much preferable to the one described in the Referentenentwurf, it seems prudent to assume that the text of the proposal is what eventually counts[2]. In the meanwhile the BMJV’s ongoing stakeholder consultation (open until 6/11) provides an opportunity to let them know that we are more impressed with their flowchart-drawing skills than with their legal drafting skills.

  1. Even worse, without an automated minor uses check the problems of the “match and flag” approach become much more pronounced since uses that fall under the minor uses exception would also be affected by retroactive removals via upload filters. ↩︎

  2. Interestingly, the BMJV spokesperson described the flowchart, and not the text of the proposal, when defending the new implementation proposal in Monday’s Bundespressekonferenz. ↩︎

Controlled demolition

Monday, Oct 19, 2020

Wired has a short write-up of one of the books on my reading list: Subprime Attention Crisis by Tim Hwang. The book's central thesis is that targeted behavioural advertising ~is a scam~ does not really work any better than many other forms of advertising, and that as a result the whole ad-tech market has become a bubble akin to the housing bubble that led to the last financial crisis:

So if Hwang is right that digital advertising is a bubble, then the pop would have to come from advertisers abandoning the platforms en masse, leading to a loss of investor confidence and a panicked stock sell-off. After months of watching Google and Facebook stock prices soar, even amid a pandemic-induced economic downturn and a high-profile Facebook advertiser boycott, it’s hard to imagine such a thing. But then, that’s probably what they said about tulips.

This is not something to be cheered. However much targeted advertising may have skewed the internet—prioritizing attention-grabbiness over quality, as Hwang suggests—that doesn’t mean we ought to let the system collapse on its own. We might hope instead for what Hwang calls a “controlled demolition” of the business model, in which it unravels gradually enough for us to manage the consequences.

How might that work? Hwang proposes a publicity campaign by researchers, activists, and whistleblowers that exposes the sickness of the online ad market, followed by regulations to enforce transparency. Digital advertisers would have to make public, standardized statements to help buyers evaluate their wares. The goal would be to narrow the dangerous disconnect between perceived and actual value.

I like the idea of a “controlled demolition”, but it feels to me that we are already deep into the publicity campaign (at least in Europe, see here for an example that some of my colleagues at IVIR are involved in) and that the focus really needs to be on regulation. In this context it will be key to see if the upcoming Digital Services Act will include regulatory interventions of the type that Hwang envisages. For me that is one of the most interesting questions about the DSA (instead of endlessly re-hashing discussions about liability and responsibility).


Monday, Sep 28, 2020

Julia Reda has a two part (1|2) post on the Kluwer Copyright blog in which she examines (and ultimately rejects) the claim made by rightholders that Article 17 of the DSM Directive is a mere clarification of existing Court of Justice case-law on communication to the public and intermediary liability. In the first part she examines the possible motivations rightholders could have for portraying Article 17 as a mere clarification of existing law that does not really change anything.

While all three of her theories have some merit, for me her third explanation is the most interesting one:

The third possibility is that rightsholders find themselves in the position of Goethe’s sorcerer’s apprentice. While lobbying for a new liability regime for hosting providers may have initially seemed like a good idea, they lost control of the legislation they had advocated for. Other interest groups, most notably internet users, became more vocal during the legislative process than initially expected. After the European Parliament rejected the Legal Affairs Committee’s version of the draft DSM Directive in the summer of 2018 over fundamental rights concerns, concessions had to be made and user rights had to be strengthened in order to secure a majority for the Directive in Parliament.

The end result, which for the first time establishes users’ rights to the use of copyrighted content and makes several exceptions related to freedom of expression mandatory, may cause some rightsholder groups to question whether they were better off under the old legal regime. […]

This observation aligns pretty well with an insight that has emerged more and more clearly over the past few months of working on the implementation of Article 17:

By now it is pretty clear to me that during the final phase of the legislative battle over the directive (between January and March 2019) both sides remained stuck in their entrenched positions vis-à-vis Article 13, without really noticing that, as a result of the fierce opposition by users and the determination of rightholders to get the directive adopted at any cost, the internal balance of Article 13 had shifted more and more in the direction of codifying user rights.

In the end, the final version of the Article is quite far removed from the original proposal[1] and includes a surprising number of elements (mandatory exceptions for quotation, pastiche, parody and caricature, strong procedural safeguards against over-filtering) that would never have made it into the law had they not been introduced as concessions for getting Article 13 adopted. Even with the benefit of hindsight, people on both sides of the debate seem to prefer not to acknowledge this, because this outcome is hard to reconcile with the quasi-religious belief systems that animate most participants in copyright policy debates.

  1. It is telling that in their recent letter to Commissioner Breton, rightholders complain that “in its Consultation Paper, the Commission is going against its original objective”. At this stage the Commission is of course not supposed to act in line with the original objective of the legislative proposal but rather in line with the text of the directive as adopted by the European legislator, which is indeed quite different from the original objective. ↩︎

Humans vs. Upload filters

Friday, Sep 25, 2020

A series of conversations today made me realize that one of the least expected outcomes of the implementation of Article 17 will be a massive increase in work for off-shore content moderators. While most of the discussion around Article 17 focusses on the fact that it will require platforms to implement automated content recognition tools to filter user uploads (the dreaded #uploadfilters), there is much less attention to the fact that along with these filters will come small armies of human content moderators to do the (now legally required) “human review” of the inevitable mistakes that the filters will make. I find the idea that, by requiring human review, the directive carves out a niche that is explicitly protected from takeover by AI oddly satisfying.

Their problems are not our problems

Thursday, Sep 24, 2020

There is a notable presence of the idea of decolonialization in my RSS feeds today. This morning Andres Guadamuz posted “Time to decolonialize the internet”, followed by “The First Steps to Decolonise Digital Rights”, published by the DFF around lunchtime. Both posts are thoughtful and well worth reading in full. And while both of them include references to The Social Dilemma, and both of them are illustrated by a south-up map, they use the term decolonialization as shorthand for entirely different issues.

Andres' post deals with what he calls digital cultural colonialism, which finds its expression in an internet culture that is dominated by American cultural tropes and exports the worst elements of American political culture[1]:

The underlying infrastructure of the tech industry is bad enough, but one of the most baffling aspects for me of the digital colonialism has been the entrenchment of US culture’s dominance. American cultural hegemony goes back to analogue media with the prevalence of its music, TV and film everywhere. Many of us who saw the dawn of the modern Internet believed that it would bring a more diverse cultural environment, people all over the world communicating with each other and sharing each other’s cultural expressions. What happened was that the infrastructure advantage translated into the continuing export of the US internet culture.

[…] This has had an interesting effect. Social media has spawned a global culture that speaks the same American Internet language of memes, streams, music and show references. And even when we get more representation and diversity, it tends to be entirely US-centric. […] The main effect has been the export through social media of the toxic US culture wars to the rest of the world. American culture has become extremely divided, and politicians have learned to use that division, encouraging the polarisation in order to maintain power.

I think this description is spot-on, and it reminded me of my initial reaction to a Deutschlandfunk push message i received last Friday informing me that the US would block TikTok as of last Sunday:

My initial reaction was to hope that the US would indeed make good on this threat. Not because i think the world would be a better place without TikTok, but rather because i was looking forward to a unique natural experiment. With TikTok banned in the US, would it continue to be a dominant cultural vector in the rest of the world (thereby signalling the demise of US cultural hegemony)? Or would Internet culture move on to the next US-based replacement platform, resisting decolonialization? With the TikTok ban off the table (for now), an answer to this question will have to wait. In the meanwhile it is worth considering Andres' suggestion to…

…ask questions when we see another US-centric trend in our timelines. Is this relevant to me? Is this relevant to my society? Have I been consuming local culture? Have you helped to crowd-fund a local project?

But perhaps more importantly, be mindful about your own cultural consumption, and who you choose to centre in your advocacy. Remember, their problems are often not our problems.

  1. By contrast, the DFF post discusses first steps of the (European) digital rights movement to address forms of oppression that have their roots in a history of domination and colonisation and are maintained by structural forces. I found the following passage discussing the shortcomings of individual rights-based advocacy particularly resonant: “So, the mechanism works for the individual who is informed and in a position to make their individual rights actionable, but less so for others, who ‘data protection’ was not modelled for. Just as we speak about harmful technologies as a result of skewed design, this argument applies to our legal tools too.” This is probably because it strongly aligns with our analysis of the limitations of individual rights-based approaches for digital policy making in our Vision for a Shared Digital Europe. ↩︎

Geopolitical drama

Wednesday, Sep 23, 2020

There are not many days where i feel more aligned with the US or the UK than with the European Union, but today is such a day. Earlier this morning, during the 53rd WIPO General Assembly, China blocked the Wikimedia Foundation from becoming a WIPO observer (supposedly because WM has a chapter in Taiwan, which goes against the PRC’s one China principle):

In reaction to this both the UK (on behalf of a group of countries that also includes the EU member states) and the US came out in support of Wikimedia’s application. The delegation of the EU remained silent.

Kowtowing to Chinese attempts to exclude an important civil society stakeholder with a strong track record of constructive contributions to IP policy discussions is shameful. Unfortunately it is also in line with the overall geopolitical approach that the EU is taking vis-à-vis China. It is disturbing that in its effort to appease China to protect trade relations, the EU is now willing to abandon civil society actors.

What about the memes?

Monday, Sep 21, 2020

Today the Verge reports that Facebook will let people claim ownership of images and issue takedown requests, and notes that “The days of reposting images on Instagram might be over”. The article describes a pretty run-of-the-mill ContentID/Facebook Rights Manager-type system that will allow select users to claim ownership of images across Facebook’s platforms. The fact that this emerges now is of course no coincidence but shows that Facebook is preparing for the entry into force of Article 17 of the copyright directive, which will almost certainly require it to provide such filtering functionality in the EU.

The Verge is a bit light on details, but the bit of info it does contain on how the tool will deal with the inevitable collisions between ownership claims does not sound terribly sophisticated:

To claim their copyright, the image rights holder uploads a CSV file to Facebook’s Rights Manager that contains all the image’s metadata. They’ll also specify where the copyright applies and can leave certain territories out. Once the manager verifies that the metadata and image match, it’ll then process that image and monitor where it shows up. If another person tries to claim ownership of the same image, the two parties can go back and forth a couple times to dispute the claim, and Facebook will eventually yield it to whoever filed first. If they then want to appeal that decision, they can use Facebook’s IP reporting forms.

“Whoever filed first” is of course not at all relevant when it comes to copyright. Unfortunately there is no further elaboration on how the new tool will deal with uses under exceptions / fair use or how it would interact with Public Domain or freely licensed content, but it seems clear that Instagram’s current culture of freely reposting images is on a collision course with the realities of automated copyright enforcement as mandated by Article 17.

A very technical paper

Tuesday, Sep 15, 2020

“This is a very technical paper. Unfortunately, it is so because copyright law has become too complicated.”

This is the first line in the new, updated version of João Quintais and Martin Husovec’s paper on licensing Article 17. The original version of this paper, published almost a year ago, made a significant contribution to the discussion about Article 17 by arguing that Member States in fact have considerable policy options when implementing Article 17. Their argument has turned out to be extremely influential, having found its way into the legal arguments made by the Commission in front of the CJEU in Case C-682/18 (YouTube), the German discussion draft for implementing Article 17 and the Commission’s consultation on the Article 17 guidance.

And while the paper is indeed very technical, Quintais and Husovec have done a really good job at making their argument more accessible in this new version. See for yourself here.

"it turns out it’s much better to have an AI feeding you stuff"

Monday, Sep 14, 2020

I re-installed TikTok on my phone a few minutes ago (to the delight of the kids) because you never know. And while we are waiting for the whole banning-TikTok saga to conclude (or, more likely, to peter out?), here is a perspective that i found quite interesting on how TikTok became what it is, shaped by the constraints of operating in an authoritarian environment. From last week’s Stratechery interview with Paul Mozur on Technology in China:

The algorithm side is important and, and we just wouldn’t know and I think one thing that’s really important — I don’t know how much people agree with me on this, but I think it’s true — I think TikTok comes from censorship. I think the way you get a social network with a social feed that’s basically disconnected from friends and populated by an AI, that comes from a Chinese system basically because WeChat was created to make things not super viral, to be safe and not fall afoul of the Chinese government. So that created a space where there wasn’t a super viral really buzzy social media sort of territory or product, and that’s what ByteDance stepped in and created with a Toutiao and then with Douyin, and so to do it and make it in a way that wasn’t gonna freak out the government. Well, instead of having people, make it something you can control, and what better to do than a bunch of a series of algorithms that make things go viral and decide what goes viral, and can be cut off instantly for human review when you need to do it, and that’s the heart of where TikTok‘s recommendation engine and the design of how it’s a content delivery mechanism comes from. And it turns out it’s much better to have an AI feeding you stuff than your friends, because the AI will find way cooler stuff and be way more addictive and so lo and behold, it’s sort of unleashed on the world. Ultimately it does come from a sense of state control, but whether TikTok is actually being used in that way right now we have some smoke, there’s certainly indications, like a lot of videos about Xinjiang on TikTok seem to be very pro Xinjiang […]

Is it automatic?

Friday, Sep 11, 2020

I quite enjoyed reading through AG Szpunar’s opinion in VG Bild Kunst v Stiftung Preußischer Kulturbesitz, which was published yesterday. Szpunar’s legal reasoning is a joy to read, and while there are some hair-raising interpretations of core internet concepts, overall the opinion shows that he does know what he is talking about.

As to his conclusion, it is likely to be quite far-reaching. He is effectively proposing a new scope for the communication to the public right in the context of hyperlinking, replacing the all-encompassing “new public” test with a more nuanced(?) “is it automatic?” test.

From an internet perspective this certainly looks quite silly. But from a copyright perspective (if one approaches copyright acknowledging that the purpose of copyright is to give creators some limited amount of control over how their creations are used) it is actually quite elegant.

I think what he proposes works well in the context of understanding the internet as people-browsing-webpages-and-looking-at-things. What i am concerned about is that it may have all kinds of limiting effects in the context of the internet understood as machines-talking-to-machines. It seems important to explore this notion further (it strikes me as part of a much larger discussion about machines interacting with copyright without much human involvement, which also covers the AI and copyright discourse).

"a quid pro quo benefit"

Thursday, Sep 10, 2020

Thinking about platform regulation, i found this section from Wednesday's edition of The Interface newsletter insightful (it discusses the issue in the context of the debate about Section 230 in the US, but it is equally relevant in the context of the Digital Services Act in the EU):

As it so happens, there’s a sharp new report today out on the subject. Paul Barrett at the NYU Stern Center for Business and Human Rights looks at the origins and evolution of Section 230, evaluates both partisan and nonpartisan critiques, and offers a handful of solutions.

To me there are two key takeaways from the report. One is that there are genuine, good-faith reasons to call for Section 230 reform, even though they’re often drowned out by bad tweets that misunderstand the law. To me the one that lands the hardest is that Section 230 has allowed platforms to under-invest in content moderation in basically every dimension, and the cost of the resulting externalities has been borne by society at large. Barrett writes (PDF):

Ellen P. Goodman, a law professor at Rutgers University specializing in information policy, approaches the problem from another angle. She suggests that Section 230 asks for too little — nothing, really — in return for the benefit it provides. “Lawmakers,” she writes, “could use Section 230 as leverage to encourage platforms to adopt a broader set of responsibilities.” A 2019 report Goodman co-authored for the Stigler Center for the Study of the Economy and the State at the University of Chicago’s Booth School of Business urges transforming Section 230 into “a quid pro quo benefit.” The idea is that platforms would have a choice: adopt additional duties related to content moderation or forgo some or all of the protections afforded by Section 230.

The Stigler Center report provides examples of quids that larger platforms could offer to receive the quo of continued Section 230 immunity. One, which has been considered in the U.K. as part of that country’s debate over proposed online-harm legislation, would “require platform companies to ensure that their algorithms do not skew toward extreme and unreliable material to boost user engagement.” Under a second, platforms would disclose data on what content is being promoted and to whom, on the process and policies of content moderation, and on advertising practices.

This approach continues to enable lots of speech on the internet — you could keep those Moscow Mitch tweets coming — while forcing companies to disclose what they’re promoting. Recommendation algorithms are the core difference between the big tech platforms and the open web that they have largely supplanted, and the world has a vested interest in understanding how they work and what results from their suggestions. I don’t care much about a bad video with 100 views. But I care very much about a bad video with 10 million. So whose job will it be to pay attention to all this? Barrett’s other suggestion is a kind of “digital regulatory agency” whose functions would mimic some combination of the Federal Trade Commission, the Federal Communications Commission, and similar agencies in other countries.

It envisions the digital regulatory body — whether governmental or industry-based — as requiring internet companies to clearly disclose their terms of service and how they are enforced, with the possibility of applying consumer protection laws if a platform fails to conform to its own rules. The TWG emphasizes that the new regulatory body would not seek to police content; it would impose disclosure requirements meant to improve indirectly the way content is handled. This is an important distinction, at least in the United States, because a regulator that tried to supervise content would run afoul of the First Amendment. […]

In a paper written with Professor Goodman, Karen Kornbluh, who heads the Digital Innovation and Democracy Initiative at the German Marshall Fund of the United States, makes the case for a Digital Democracy Agency devoted significantly to transparency. “Drug and airline companies disclose things like ingredients, testing results, and flight data when there is an accident,” Kornbluh and Goodman observe. “Platforms do not disclose, for example, the data they collect, the testing they do, how their algorithms order news feeds and recommendations, political ad information, or moderation rules and actions.” That’s a revealing comparison and one that should help guide reform efforts.

Nothing described here would really resolve the angry debate we have once a week or so in this country about a post that Facebook or Twitter or YouTube left up when they should have taken it down, or took down when they should have left it up. But it could pressure platforms to pay closer attention to what is going viral, what behaviors they are incentivising, what harms all of that may be doing to the rest of us.

On general monitoring - platforms being hypocrites

Wednesday, Sep 9, 2020

With the DSA consultation closed there is now the predictable onslaught of statements, position papers and other writings (here is a highlights reel from Euractiv). While it will take some time to properly analyse all of the responses to the consultation, this paid-for opinion piece by EDiMA caught my eye today.

In it EDiMA (the trade association representing internet platforms in the EU) unsurprisingly argues that everything is fine and that the EU legislator must leave the key principles of the e-Commerce Directive intact. Among the key pillars of the ECD that EDiMA wants to preserve is the prohibition of general monitoring obligations in Article 15 ECD:

The Prohibition of a General Monitoring Obligation means that service providers cannot be forced to monitor every action of their users, protecting the fundamental rights of European citizens.

Now it is a good thing that tech platforms care about the fundamental rights of European citizens, but this is a fairly hypocritical position for an organisation representing platforms whose entire business models are built on permanently monitoring the behaviour of their users.

While EDiMA is correct to point out that we would all be worse off should EU member states be allowed to require online services to monitor their users, the fact that platforms do so without being required is just as problematic for the fundamental rights of European citizens.

In the context of the discussion about a possible Digital Services Act, this means that in addition to preserving the prohibition on general monitoring obligations we will also need to think about measures that prohibit general monitoring of users by platforms (as part of their business models). I doubt that, once such measures are on the table, we will hear EDiMA invoke the fundamental rights of European citizens.

Paul Keller provides strategic advice and does research at the intersection of technology, copyright, culture & public policy. Depending on the task, he can shape-shift between being a systems architect, a researcher, a lobbyist, an activist or a cyclist. Say hello!
