In parallel with this partial mission-accomplished moment, Creative Commons also finds itself in the midst of a much broader discussion about the suitability of copyright as a regulatory instrument at the intersection of creativity and artificial intelligence (AI) technology. The emergence of powerful generative AI systems trained on copyrighted works has raised complicated questions about the ability of creators to control the use of their works in a new technological environment.
In response to the Internet, the Creative Commons licenses provided creators with a set of tools to navigate between the sharing opportunities created by the Internet and the realities of a copyright system that requires permission for almost every act of copying. They gave creators who wanted to share their works the tools to do so under certain conditions.
As generative AI emerges as a new paradigm that challenges the boundaries of copyright, many creators are looking for ways to signal their preferences for the use of their works by AI systems in ways similar to Creative Commons licenses.
At this moment, as Creative Commons faces a leadership transition, the organization would be wise to take a step back and assess this opportunity space: It seems increasingly clear that a new generation of legal tools is needed to provide legal certainty to creators and users in the contested space at the intersection of copyright and generative AI. What is needed to protect commons-based projects? What do share-alike and copyleft mean in a world where 1:1 copying of works is becoming less relevant? How can massive datasets be licensed to maximize their potential to serve the public interest?
At this pivotal moment for the organization, CC must now focus on understanding the fundamental changes in its operating environment before it can move onwards to redefine the next stage of its mission.
For a while now i have been wondering why there has - seemingly - been very little attention paid by political scientists to the legislative fight over the EU copyright reform that played out between 2014 and 2019. Many consider the discussions surrounding the adoption of the 2019 Copyright in the Digital Single Market Directive to be one of the most controversial and hard-fought policy battles that the EU has witnessed in recent times. There can be little doubt that the copyright reform has been extremely polarising. All of this should make this process worthy of attention from political scientists.
That in itself makes the (relatively short) chapter worth reading, but what makes it even more interesting to me is Bonnamy’s central thesis:
Indeed, copyright as an instrument both of cultural policy and economic regulation constitutes an ideal ground for the culture vs. market clash. The main thesis of this chapter is that the digital age brings in a new set of values, from the involvement of a third sphere of activities – the digital – that complexifies the debate: the open-access set of values. We are now dealing with a tripartite clash.
The other sets of values that Bonnamy identifies are “Market” and “Culture”. Seeing Bonnamy describe the debate about copyright as a tripartite clash is particularly interesting since – from my perspective – one of the most frustrating aspects of being involved in this clash has been the fact that it has generally been described as a binary conflict between “tech” and “culture”, ignoring the fact that there has indeed been a third group of stakeholders: individual and institutional users.
It is rewarding to see Bonnamy clearly identify this third set of stakeholders and it is even more interesting to see that she conceptualises the underlying set of values as open access:
The market vs. culture debate does embrace a new shape in the digital age when it comes to debating a new legal framework for European copyright. Digital technologies, and more exactly, the Internet, as a borderless and immaterial space, brings in new issues along with a new set of values. Freedom of expression, democracy, freedom of information, are not new as such, but their medium is. They question the role of copyright as a balance between the public good and market logics, and between cultural and economic incentives, and are mobilized by political actors.
That being said, this analysis confirms the strength of economic values in the European arena. Indeed, in the first part, we have seen that the fundamental opposition between market and culture values was eventually overcome by the creative economy framing. And, as such, the EU cultural policy constitutes a typical case where culture is embedded in market values. The development of digital technologies could have challenged this domination of the market set of values over culture. Indeed, in theory, both open-access and culture sets of value condemn the pursuit of profit and commodification of the Internet for the first and the arts for the second. But this potential proximity between culture and open access sets of values on the issue of copyright is ultimately not translated into a policy solution defended by those who otherwise stress culture and open access values. The debate eventually turns into one about more or fewer barriers to entry, where proponents of more barriers refer to market and culture values, while opponents point mostly to open access and market values. In the spirit of Rodrik’s trilemma of the world economy (2008), in our case-study, culture, open-access, and market sets of value seem to be working as a trilemma: two can be combined but never all three together. Culture and open-access sets of values share the promotion of non-profit creations; open-access and market share the promotion of free access; culture and market share the promotion of the remuneration of the cultural value chain (see. Figure 1). And in the EP’s debates on copyright, the market set of value seems to be working as a common denominator, as free competition is compatible with free access and with the remuneration of the whole value chain.
These observations (which feel spot-on to me) point to an important failure on the part of open access advocates during the copyright debates: our inability to develop policy positions that could appeal to those actors predominantly motivated by cultural values. There have been a lot of moments where it was noted that there are in fact strong overlaps between the interests of users and creators (and not only because these two categories are increasingly overlapping as a result of new forms of cultural production enabled by digital technologies), but these never led to any meaningful re-evaluation of policy positions. Instead of actively resisting the tendency of being subsumed under the more established set of market values, the open access movement seems to have arranged itself with this reality early on in the process. This was also helped by the fact that a subset of the open access values is fairly strongly held by an influential set of tech companies who acted as eager allies for the open access advocates throughout the copyright reform discussion.
In this context, the section where Bonnamy analyses how both the cultural and open access sets of values were eventually subsumed by questions of economic regulation rooted in the market set of values really stands out to me:
The debate seems to have been dominated by the use of cultural and open-access values. We can find examples of both sets of values on each side of the debate, but the general picture shows the domination of cultural values to support the reform, against open-access values to contest it. Market values appear through the idea of fighting monopolies. They equally irrigate speeches from opponents as well as supporters of the directive. The different sets of values are mobilized to sustain a specific vision of market regulation: should there be more or fewer barriers to entry? In that sense, the 2018 debate is still set in the same framing as the one described by Annabelle Littoz-Monnet regarding the 1990s European debate on copyright (2006). Thus, cultural values are largely used to sustain more regulation, whereas open-access values are used to argue in favour of fewer regulation. Interestingly, free competition values are used to defend both, leading to what we called a discursive commodification. That is: the combination of values historically detached from, if not opposed to, economic concerns (here, culture and open-access) with market values, economic by their very-essence, to justify a policy solution. From that, three remarks can be made.
[…] All in all, we can see here the strength of what Antoine Vauchez called the Econo-polity of the EU that acts as the “original matrix” of the European decision-making process (2015). It is both an institutional – the internal market being the main competence of the EU – and a cognitive structure that forces the agents to adapt to it, visible here through this discursive commodification. This whole debate demonstrates the strength of this “matrix”, as it paradoxically manages to be a medium for sets of values opposed to marketisation that are culture and open-access.
It is this very insight (a market driven cognitive structure that forces us to embed our arguments in market logic) that led us in 2018 to start thinking about alternative frames for discussions about digital policy making in the European Union. This work later resulted in our Vision for a Shared Digital Europe, which as Bonnamy reminds us is still as relevant as it ever was.
Yesterday the Court of Justice of the European Union heard case C-401/19 Republic of Poland v European Parliament and Council of the European Union, in which the Polish government asks the CJEU to annul the upload filtering provisions in Article 17 of the DSM directive. While we had not really taken this case seriously (it seemed more a domestic political gesture of the Polish government than a serious effort to protect fundamental rights) and did not pay much attention to it, the case has recently become more interesting as it forces the parties involved to openly position themselves in the ongoing disputes about the correct implementation of Article 17. In that sense the hearing did not disappoint, as i have written up here.
Writing this report turned out to be a little adventure. Given the stubborn refusal of the court to stream hearings and other public sessions, it required someone to be present at the hearing in Luxembourg. Given the pandemic-related travel restrictions it became pretty clear that, among the people working on Article 17 on our side, i was the best positioned to go to Luxembourg, which still meant a 4.5-hour drive in each direction. In the end i drove down the evening before, stayed overnight in a hotel1 and drove straight back to Amsterdam after the end of the hearing.
While it was quite a bit of trouble (i really do not like driving!), in the end it was worth the effort. Had i not been there to report on the hearing, the only reporter would have been a writer for a subscription-only business intelligence platform (plus a handful of lawyers observing for corporate stakeholders on the public tribune). The CJEU would certainly do itself a big favour if it streamed such hearings and other public sessions. In the long run, having the European public represented by a hack writing for a business intelligence service and a civil society operative with a dog in the fight does not seem good for the transparency of the judiciary.
Staying in a hotel for work is something i have not done for what feels like a very long time. And while the whole experience, with every item that you could possibly touch being wrapped in protective paper envelopes, felt slightly surreal, the mere act of sleeping in a hotel bed made me feel like a real human being again. ↩︎
Insofar as the development of digital commons is relatively absent from sovereignty policies at the European level, it is necessary to identify the resources likely to be jointly managed and exploited, while raising awareness among our partners, particularly European ones, of the strategic dimension of digital commons, in order to mobilize them accordingly.
The purpose of this article is therefore not to define the scope of digital commons in a technical, economic or political perspective, but rather to reflect on their strategic potential for Europe, within a digital world dominated by private monopolistic players, and driven by the structuring rivalry between China and the United States.
This is of course a much more pointed version of the argument that we have been making in our Vision for a Shared Digital Europe in which cultivating the Commons features as one of the core principles of building a shared digital Europe.
In this context it is interesting to see a paper published by the French Government2 suddenly (and quite forcefully) position the digital commons as a core element of a future EU digital strategy:
This logic of commons is perfectly aligned with the values and vision of the digital space defended by France and promoted to our European partners and beyond: a safe, open, unique and neutral space. In addition, because they directly defend a model and priorities which are also those of the EU (preserving general interest, fair competition, net neutrality, personal data protection, environmental sustainability, etc.), digital commons should also become one of the pillars of a European sovereignty policy, from which they have so far been absent.
And it is even more welcome that the paper also calls for investment into building a sustainable digital commons in Europe:
[…] This shows the urgent need to protect and therefore guarantee the sustainability, especially economic, of digital commons projects; their non-rival characteristic and lack of inclination to capital accumulation makes it difficult to finance them nor make them profitable. This would imply the creation of a support fund for existing digital commons, along the lines of the EU-FOSSA project. This fund could be fueled by European private and public players to start with, before being potentially extended to any other actor sharing our concerns.
[…] In addition, it may be possible to create a European foundation for the digital commons, an entity that would be responsible for managing the financing mentioned above, but which could also host and support new initiatives (through legal advice, labeling, hackathons and code sprints, calls for projects, etc.). In order to counter possible attempts at recapitalisation, looting or exclusive capture, it could ensure that licences are respected, but also establish possible transfers of ownership and therefore of responsibilities – financing, governance, optimisation, etc. – within itself.
Lastly, the European strategy in this field should include an international component. Our vision of digital sovereignty is non-hegemonic and this sovereignty must therefore show how it fits with a concept of international governance which guarantees a “free, open and safe” digital world through multilateralism – as a mutual and mutually accepted constraint. The commons are, here again, useful in guaranteeing open digital infrastructures – be it against attacks on confidence and security in cyberspace (according to the Paris Call wording) but also against risks created by political control, technological mastery or financial domination.
Unfortunately the paper feels a bit like a one-off effort to launch an idea, which is a pity since the underlying idea is a sound one. Making a strong digital commons part of EU digital policy would be a strategic choice that would set it apart from current attempts, which are not fundamentally different from the existing (US-dominated) approach to the digital space. And having the French government as an ally in this fight would certainly be welcome.
I really enjoyed reading this short essay by Salome Viljoen about moving beyond property or dignity claims about data production and towards democratising data governance. This is an excellent primer for anyone interested in understanding discussions about the governance of (personal) data that does a pretty good job at describing the two prevalent schools of thought. On the one side the data as property approach:
Propertarian reforms diagnose the source of datafication’s injustice in the absence of formal property (or alternatively, labor) rights regulating the process of production.
And on the other side an individual rights-based approach that she calls “dignitarian”:
The second type of reforms, which I call dignitarian, take a further step beyond asserting rights to data-as-property, and resist data’s commodification altogether, drawing on a framework of civil and human rights to advocate for increased protections. Proposed reforms along these lines grant individuals meaningful capacity to say no to forms of data collection they disagree with, to determine the fate of data collected about them, and to grant them rights against data about them being used in ways that violate their interests.
I am definitely more in the “dignitarian” camp here, but i also share her analysis of the shortcomings of this approach and her proposal to transcend these opposing approaches in favor of an approach rooted in collective rights:
Rather than proposing individual rights of payment or exit, data governance should be envisioned as a project of collective democratic obligation that seeks to secure those of representation instead.
[…] What these shortcomings suggest is that alternative conceptions of the data political economy are needed. Such alternatives must be resistant to private market governance of the data political economy, attentive to the structural incentives at the root of data extraction, and responsive to the wealth accumulation, privacy erosion, and reproduction of social oppression it facilitates.
One path forward reconceives data about people as a democratic resource. Such proposals view data not as an expression of an inner self subject to private ordering and the individual will, but as a collective resource subject to democratic ordering.
The framework that she proposes here (data as a collective resource subject to democratic ordering) makes a lot of sense (maybe even more than the personal-data-as-a-commons approach that is fairly popular in my circles at the moment).
In order to understand what this would mean in practice, one does not need to look further than the Facebook controversy du jour, in which Facebook tries to prevent researchers from understanding the impact of political ads in the name of protecting the privacy of its users. This controversy perfectly illustrates the limitations of an individual rights-based approach to data ownership and provides a case study of why treating personal data as a collective resource subject to democratic ordering would make a meaningful difference.
Turns out that i am not the only German who likes to make flowcharts about Article 17. Someone in the German Ministry for Justice and Consumer Protection (BMJV) has made a flowchart that depicts how the German implementation of Article 17 would work in practice:
As a flowchart I quite like this, both in terms of execution (the use of ✋ and ⚙️ symbols to indicate human or automated interventions) and in terms of the mechanism proposed, which comes pretty close to the ambition to avoid the use of automated blocking of uploads as much as possible. In line with the “Vergütung über alles” (remuneration above all) principle that animates German copyright law, the proposed mechanism seeks to avoid automated blocking via a cascade of remuneration mechanisms (licensing, remunerated minor uses, remunerated uses under the pastiche exception). While there remains the possibility for uploads to be blocked at the request of rightholders (if they are not covered by an exception), this mechanism is probably as close as a national legislator can get to turning the Article 17 right into a remuneration right.
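Read as pseudocode, the cascade in the flowchart boils down to a sequence of checks in which every remuneration route is tried before anything gets blocked. The following is a minimal sketch of my reading of the flowchart; all names are my own illustrative inventions, not terms from the BMJV proposal:

```python
from dataclasses import dataclass

@dataclass
class Upload:
    matches_reference_work: bool = False  # did the filter find a match?
    covered_by_license: bool = False      # platform has a license for the work
    is_minor_use: bool = False            # below the minor-use thresholds (§6)
    flagged_as_legitimate: bool = False   # uploader pre-flagged the use (✋)

def handle_upload(u: Upload) -> str:
    """Blocking cascade as depicted in the flowchart (not the Referentenentwurf)."""
    if not u.matches_reference_work:
        return "publish"                              # no match, nothing happens
    if u.covered_by_license:
        return "publish (remunerated via license)"    # ⚙️ automated
    if u.is_minor_use:
        return "publish (remunerated minor use)"      # ⚙️ automated in the flowchart
    if u.flagged_as_legitimate:
        return "publish (subject to human review)"    # e.g. pastiche, quotation
    return "block (rightsholder may demand removal)"
```

Note that in this reading the minor-use check is automated and runs before the uploader gets a chance to flag anything, which is exactly where the flowchart and the text of the proposal diverge.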
Unfortunately there is a major problem with the accuracy of the flowchart. The flowchart published by the BMJV does not accurately depict the provisions of the implementation law proposal (Referentenentwurf) published by the same ministry. Contrary to what the flowchart suggests, in the Referentenentwurf the determination whether a use qualifies as a minor use (§6) is not automatic and does not happen before the uploader can pre-flag the use of a matched work as legitimate. Without an automated minor use check, the whole proposal loses most of its appeal1.
So while it is clear that the mechanism depicted in the flowchart is much preferable to the one described in the Referentenentwurf, it seems prudent to assume that the text of the proposal is what eventually counts2. In the meanwhile, the BMJV’s ongoing stakeholder consultation (open until 6/11) provides an opportunity to let them know that we are more impressed with their flowchart-drawing skills than with their legal drafting skills.
Even worse, without an automated minor uses check the problems of the “match and flag” approach become much more pronounced since uses that fall under the minor uses exception would also be affected by retroactive removals via upload filters. ↩︎
Wired has a short write-up of one of the books on my reading list: Subprime Attention Crisis by Tim Hwang. The book’s central thesis is that targeted behavioural advertising ~is a scam~ does not really work any better than many other forms of advertising, and as a result the whole ad-tech market has become a bubble akin to the housing bubble that led to the last financial crisis:
So if Hwang is right that digital advertising is a bubble, then the pop would have to come from advertisers abandoning the platforms en masse, leading to a loss of investor confidence and a panicked stock sell-off. After months of watching Google and Facebook stock prices soar, even amid a pandemic-induced economic downturn and a high-profile Facebook advertiser boycott, it’s hard to imagine such a thing. But then, that’s probably what they said about tulips.
This is not something to be cheered. However much targeted advertising may have skewed the internet—prioritizing attention-grabbiness over quality, as Hwang suggests—that doesn’t mean we ought to let the system collapse on its own. We might hope instead for what Hwang calls a “controlled demolition” of the business model, in which it unravels gradually enough for us to manage the consequences.
How might that work? Hwang proposes a publicity campaign by researchers, activists, and whistleblowers that exposes the sickness of the online ad market, followed by regulations to enforce transparency. Digital advertisers would have to make public, standardized statements to help buyers evaluate their wares. The goal would be to narrow the dangerous disconnect between perceived and actual value.
I like the idea of a “controlled demolition”, but it feels to me that we are already deep into the publicity campaign (at least in Europe; see here for an example that some of my colleagues at IVIR are involved in) and that the focus really needs to be on regulation. In this context it will be key to see if the upcoming Digital Services Act will include regulatory interventions of the type that Hwang envisages. For me that is one of the most interesting questions about the DSA (instead of endlessly re-hashing discussions about liability and responsibility).
Julia Reda has a two part (1|2) post on the Kluwer Copyright blog in which she examines (and ultimately rejects) the claim made by rightholders that Article 17 of the DSM Directive is a mere clarification of existing Court of Justice case-law on communication to the public and intermediary liability. In the first part she examines the possible motivations rightholders could have for portraying Article 17 as a mere clarification of existing law that does not really change anything.
While all three of her theories have some merit, for me her third explanation is the most interesting one:
The third possibility is that rightsholders find themselves in the position of Goethe’s sorcerer’s apprentice. While lobbying for a new liability regime for hosting providers may have initially seemed like a good idea, they lost control of the legislation they had advocated for. Other interest groups, most notably internet users, became more vocal during the legislative process than initially expected. After the European Parliament rejected the Legal Affairs Committee’s version of the draft DSM Directive in the summer of 2018 over fundamental rights concerns, concessions had to be made and user rights had to be strengthened in order to secure a majority for the Directive in Parliament.
The end result, which for the first time establishes users’ rights to the use of copyrighted content and makes several exceptions related to freedom of expression mandatory, may cause some rightsholder groups to question whether they were better off under the old legal regime. […]
This observation aligns pretty well with an insight that has emerged more and more clearly over the past few months of working on the implementation of Article 17:
By now it is pretty clear to me that during the final phase of the legislative battle over the directive (between January and March 2019) both sides remained stuck in their entrenched positions vis-à-vis Article 13, without really noticing that, as a result of the fierce opposition by users and the determination of rightholders to get the directive adopted at any cost, the internal balance of Article 13 had shifted more and more in the direction of codifying user rights.
In the end, the final version of the Article is quite far removed from the original proposal1 and includes a surprising number of elements (mandatory exceptions for quotation, pastiche, parody and caricature; strong procedural safeguards against over-filtering) that would never have made it into the law had they not been introduced as concessions for getting Article 13 adopted. Even with the benefit of hindsight, people on both sides of the debate seem to prefer not to acknowledge this, because this outcome is hard to reconcile with the quasi-religious belief systems that animate most participants in copyright policy debates.
It is telling that in their recent letter to Commissioner Breton, rightholders complain that “in its Consultation Paper, the Commission is going against its original objective”. At this stage the Commission is of course not supposed to act in line with the original objective of the legislative proposal but rather in line with the text of the directive as adopted by the European legislator, which is indeed quite different from the original objective. ↩︎
A series of conversations today made me realize that one of the least expected outcomes of the implementation of Article 17 will be a massive increase in work for off-shore content moderators. While most of the discussion around Article 17 focuses on the fact that it will require platforms to implement automated content recognition tools to filter user uploads (the dreaded #uploadfilters), there is much less attention to the fact that along with these filters will come small armies of human content moderators to do the (now legally required) “human review” of the inevitable mistakes that the filters will make. I find the idea that, by requiring human review, the directive carves out a niche that is explicitly protected from takeover by AI oddly satisfying.
Andres’ post deals with what he calls digital cultural colonialism, which finds its expression in an internet culture dominated by American cultural tropes and exports the worst elements of American political culture1:
The underlying infrastructure of the tech industry is bad enough, but one of the most baffling aspects for me of the digital colonialism has been the entrenchment of US culture’s dominance. American cultural hegemony goes back to analogue media with the prevalence of its music, TV and film everywhere. Many of us who saw the dawn of the modern Internet believed that it would bring a more diverse cultural environment, people all over the world communicating with each other and sharing each other’s cultural expressions. What happened was that the infrastructure advantage translated into the continuing export of the US internet culture.
[…] This has had an interesting effect. Social media has spawned a global culture that speaks the same American Internet language of memes, streams, music and show references. And even when we get more representation and diversity, it tends to be entirely US-centric. […] The main effect has been the export through social media of the toxic US culture wars to the rest of the world. American culture has become extremely divided, and politicians have learned to use that division, encouraging the polarisation in order to maintain power.
I think this description is spot-on, and it reminded me of my initial reaction to a Deutschlandfunk push message i received last Friday informing me that the US would block TikTok as of last Sunday:
My initial reaction was to hope that the US would indeed make good on this threat. Not because i think the world would be a better place without TikTok, but rather because i was looking forward to a unique natural experiment. With TikTok banned in the US, would it continue to be a dominant cultural vector in the rest of the world (thereby signalling the demise of US cultural hegemony)? Or would Internet culture move on to the next US-based replacement platform, resisting decolonisation? With the TikTok ban off the table (for now), an answer to this question will have to wait. In the meanwhile it is worth considering Andres’ suggestion to…
…ask questions when we see another US-centric trend in our timelines. Is this relevant to me? Is this relevant to my society? Have I been consuming local culture? Have you helped to crowd-fund a local project?
But perhaps more importantly, be mindful about your own cultural consumption, and who you choose to centre in your advocacy. Remember, their problems are often not our problems.
By contrast, the DFF post discusses first steps of the (European) digital rights movement to address forms of oppression that have their roots in a history of domination and colonisation and are maintained by structural forces. I found the following passage discussing the shortcomings of individual rights-based advocacy particularly resonant: “So, the mechanism works for the individual who is informed and in a position to make their individual rights actionable, but less so for others, who ‘data protection’ was not modelled for. Just as we speak about harmful technologies as a result of skewed design, this argument applies to our legal tools too.” This is probably because it strongly aligns with our analysis of the limitations of individual rights-based approaches for digital policy making in our Vision for a Shared Digital Europe. ↩︎
There are not many days where i feel more aligned with the US or the UK than with the European Union, but today is such a day. Earlier this morning, during the 53rd WIPO General Assembly, China blocked the Wikimedia Foundation from becoming a WIPO observer (supposedly because WM has a chapter in Taiwan, which goes against the PRC’s one-China principle):
Let me try to clarify and give more details about this diplomatic incident. @Wikimedia applied to be an observer @WIPO because this is the intl forum that discusses issues like access to medicines and access to knowledge and agrees on intl laws on #copyright and patents. 1/ https://t.co/yCd22LdJxp
In reaction to this both the UK (on behalf of a group of countries that also includes the EU member states) and the US came out in support of Wikimedia’s application. The delegation of the EU remained silent.
Kowtowing to Chinese attempts to exclude an important civil society stakeholder with a strong track record of constructive contributions to IP policy discussions is shameful. Unfortunately it is also in line with the overall geopolitical approach that the EU is taking vis-à-vis China. It is disturbing that, in its effort to appease China to protect trade relations, the EU is now willing to abandon civil society actors.
Today The Verge reports that Facebook will let people claim ownership of images and issue takedown requests, and notes that “The days of reposting images on Instagram might be over”. The article describes a pretty run-of-the-mill ContentID/Facebook Rights Manager type system that will allow select users to claim ownership of images across Facebook’s platforms. The fact that this emerges now is of course no coincidence but shows that Facebook is preparing for the entry into force of Article 17 of the Copyright Directive, which will almost certainly require it to provide such filtering functionality in the EU.
The Verge is a bit light on details, but the bit of info it does contain on how the tool will deal with the inevitable collisions between ownership claims does not sound terribly sophisticated:
To claim their copyright, the image rights holder uploads a CSV file to Facebook’s Rights Manager that contains all the image’s metadata. They’ll also specify where the copyright applies and can leave certain territories out. Once the manager verifies that the metadata and image match, it’ll then process that image and monitor where it shows up. If another person tries to claim ownership of the same image, the two parties can go back and forth a couple times to dispute the claim, and Facebook will eventually yield it to whoever filed first. If they then want to appeal that decision, they can use Facebook’s IP reporting forms.
“Whoever filed first” is of course not at all relevant when it comes to copyright. Unfortunately there is no further elaboration on how the new tool will deal with uses under exceptions/fair use, or how it would interact with public domain or freely licensed content, but it seems clear that Instagram’s current culture of freely reposting images is on a collision course with the realities of automated copyright enforcement as mandated by Article 17.
And while the paper is indeed very technical, Quintais and Husovec have done a really good job of making their argument more accessible in this new version. See for yourself here.
I re-installed TikTok on my phone a few minutes ago (to the delight of the kids) because you never know. And while we are waiting for the whole banning-TikTok saga to conclude (or, more likely, to peter out?), here is a perspective that I found quite interesting on how TikTok became what it is, shaped by the constraints of operating in an authoritarian environment. From last week’s Stratechery interview with Paul Mozur on Technology in China:
The algorithm side is important and, and we just wouldn’t know and I think one thing that’s really important — I don’t know how much people agree with me on this, but I think it’s true — I think TikTok comes from censorship. I think the way you get a social network with a social feed that’s basically disconnected from friends and populated by an AI, that comes from a Chinese system basically because WeChat was created to make things not super viral, to be safe and not fall afoul of the Chinese government. So that created a space where there wasn’t a super viral really buzzy social media sort of territory or product, and that’s what ByteDance stepped in and created with a Toutiao and then with Douyin, and so to do it and make it in a way that wasn’t gonna freak out the government. Well, instead of having people, make it something you can control, and what better to do than a bunch of a series of algorithms that make things go viral and decide what goes viral, and can be cut off instantly for human review when you need to do it, and that’s the heart of where TikTok‘s recommendation engine and the design of how it’s a content delivery mechanism comes from. And it turns out it’s much better to have an AI feeding you stuff than your friends, because the AI will find way cooler stuff and be way more addictive and so lo and behold, it’s sort of unleashed on the world. Ultimately it does come from a sense of state control, but whether TikTok is actually being used in that way right now we have some smoke, there’s certainly indications, like a lot of videos about Xinjiang on TikTok seem to be very pro Xinjiang […]
I quite enjoyed reading through AG Szpunar’s opinion in the VG Bild Kunst v Stiftung Preußischer Kulturbesitz case that was published yesterday. Szpunar’s legal reasoning is a joy to read, and while there are some hair-raising interpretations of core internet concepts, overall the opinion shows that he does know what he is talking about.
As for his conclusion, it is likely to be quite far reaching. He is effectively proposing a new scope for the communication to the public right in the context of hyperlinking, replacing the all-encompassing “new public” test with a more nuanced(?) “is it automatic?” test.
From an internet perspective this certainly looks quite silly. But from a copyright perspective (if one approaches copyright acknowledging that the purpose of copyright is to give creators some limited amount of control over how their creations are used) it is actually quite elegant.
I think what he proposes works well in the context of understanding the internet as people-browsing-webpages-and-looking-at-things. What I am concerned about is that it may have all kinds of limiting effects on the internet understood as a machines-talking-to-machines context. It seems important to explore this notion further (which strikes me as part of a much larger discussion about machines interacting with copyright without much human involvement, one that also covers the AI and copyright discourse).
Paul Keller is Policy Director at Open Future, a European digital policy think tank. He does policy research and provides strategic advice at the intersection of technology, open access, culture & public policy. Depending on the task, he can shape-shift between being a systems architect, a researcher, a lobbyist, an activist or a cyclist. Say hello!