Thinking about platform regulation, I found this section from Wednesday's edition of The Interface newsletter insightful (it discusses the issue in the context of the Section 230 debate in the US, but it is equally relevant to the Digital Services Act in the EU):
As it so happens, there’s a sharp new report out today on the subject. Paul Barrett at the NYU Stern Center for Business and Human Rights looks at the origins and evolution of Section 230, evaluates both partisan and nonpartisan critiques, and offers a handful of solutions.
To me, there are two key takeaways from the report. One is that there are genuine, good-faith reasons to call for Section 230 reform, even though they’re often drowned out by bad tweets that misunderstand the law. The argument that lands hardest for me is that Section 230 has allowed platforms to under-invest in content moderation in basically every dimension, and the cost of the resulting externalities has been borne by society at large. Barrett writes (PDF):
Ellen P. Goodman, a law professor at Rutgers University specializing in information policy, approaches the problem from another angle. She suggests that Section 230 asks for too little — nothing, really — in return for the benefit it provides. “Lawmakers,” she writes, “could use Section 230 as leverage to encourage platforms to adopt a broader set of responsibilities.” A 2019 report Goodman co-authored for the Stigler Center for the Study of the Economy and the State at the University of Chicago’s Booth School of Business urges transforming Section 230 into “a quid pro quo benefit.” The idea is that platforms would have a choice: adopt additional duties related to content moderation or forgo some or all of the protections afforded by Section 230.
The Stigler Center report provides examples of quids that larger platforms could offer to receive the quo of continued Section 230 immunity. One, which has been considered in the U.K. as part of that country’s debate over proposed online-harm legislation, would “require platform companies to ensure that their algorithms do not skew toward extreme and unreliable material to boost user engagement.” Under a second, platforms would disclose data on what content is being promoted and to whom, on the process and policies of content moderation, and on advertising practices.
This approach continues to enable lots of speech on the internet — you could keep those Moscow Mitch tweets coming — while forcing companies to disclose what they’re promoting. Recommendation algorithms are the core difference between the big tech platforms and the open web that they have largely supplanted, and the world has a vested interest in understanding how they work and what results from their suggestions. I don’t care much about a bad video with 100 views. But I care very much about a bad video with 10 million. So whose job will it be to pay attention to all this? Barrett’s other suggestion is a kind of “digital regulatory agency” whose functions would mimic some combination of the Federal Trade Commission, the Federal Communications Commission, and similar agencies in other countries.
It envisions the digital regulatory body — whether governmental or industry-based — as requiring internet companies to clearly disclose their terms of service and how they are enforced, with the possibility of applying consumer protection laws if a platform fails to conform to its own rules. The TWG [Transatlantic Working Group] emphasizes that the new regulatory body would not seek to police content; it would impose disclosure requirements meant to improve indirectly the way content is handled. This is an important distinction, at least in the United States, because a regulator that tried to supervise content would run afoul of the First Amendment. […]
In a paper written with Professor Goodman, Karen Kornbluh, who heads the Digital Innovation and Democracy Initiative at the German Marshall Fund of the United States, makes the case for a Digital Democracy Agency devoted significantly to transparency. “Drug and airline companies disclose things like ingredients, testing results, and flight data when there is an accident,” Kornbluh and Goodman observe. “Platforms do not disclose, for example, the data they collect, the testing they do, how their algorithms order news feeds and recommendations, political ad information, or moderation rules and actions.” That’s a revealing comparison and one that should help guide reform efforts.
Nothing described here would really resolve the angry debate we have once a week or so in this country about a post that Facebook or Twitter or YouTube left up when they should have taken it down, or took down when they should have left it up. But it could pressure platforms to pay closer attention to what is going viral, what behaviors they are incentivizing, and what harms all of that may be doing to the rest of us.