February was a busy month in the online content regulation debate, with notable developments across several ongoing policy processes.
In the UK, the Department for Digital, Culture, Media and Sport (DCMS) published its initial response to the Online Harms White Paper, which sets out a comprehensive approach to regulating content on online platforms. Much of the response simply summarises the key points made by different stakeholders during the consultation process. However, it does indicate that certain aspects of the policy will shift.
Notably, the response emphasises, much more clearly than before, that the focus of regulation will be on ensuring that adequate systems and processes are in place to address harmful content, rather than on mandating the removal of specific pieces of content. It also states that the legislation will set different standards for illegal and harmful forms of content, meaning that, for harmful but legal content (which includes cyberbullying and disinformation), platforms will be free to set their own rules, as long as those rules are clear and enforced consistently and transparently. Finally, the response underlines that users will be able to appeal decisions to remove content. These are all welcome shifts, but they only begin to address the many concerns we raised in our response to the White Paper. We look forward to seeing the government’s full response, which is expected in the spring.
Meanwhile, Facebook has just published its own White Paper on online content regulation. It sets out key questions for stakeholders to consider, as well as the principles which Facebook thinks should be considered when developing legislation: ensuring that regulation sets out the right incentives for companies; recognising the global nature of the internet; ensuring freedom of expression; understanding the limitations of technology in content moderation; and ensuring that proposals are proportionate to the level of harm caused.
EU policymakers have already dismissed Facebook’s White Paper, claiming that it doesn’t go far enough in what it proposes. It’s certainly true that it offers few proposals for what good regulation would look like, and the concrete measures it does propose, such as requiring that users be able to report illegal or harmful content, and periodic public reporting on content moderation enforcement, are largely things Facebook is already doing. Legal expert and commentator Evelyn Douek had a more positive take, calling it “a thoughtful document that raises serious questions that regulators, and the rest of us interested in the future of online content regulation, need to reckon with”.
Side note: Brussels is an important battleground for tech companies at the moment, with the EU Commission’s upcoming consultation on a new Digital Services Act (expected in March) potentially bringing significant changes to the operations of online platforms in Europe. We expect to have more updates on those discussions soon.
Finally, the Australian government has just concluded a consultation on its own proposed approach to online content regulation. In our response, we were broadly positive about many aspects, while highlighting a few areas where further thinking is needed. Read a summary of our recommendations here.