The UK’s online safety bill threatens free knowledge platforms

The UK’s proposed Online Safety Bill would require platforms to screen and monitor all activity and content as it is uploaded, in order to predict whether it is illegal or harmful. Such a general monitoring obligation is prohibited under the European Union’s Digital Services Act (DSA).

Governments around the world are drafting laws and passing bills to address online safety challenges and to hold technology platforms accountable for harmful content. That is a good thing. But the UK is poised to lead Western democracies in imposing monitoring so extensive that it could chill freedom of expression.

Here’s how: the proposed UK law imposes new duties to “block” access to a wide variety of poorly defined categories of content. Everyone under 18 would be barred from seeing content that the UK Parliament does not even consider illegal. Such blocking requires accurately predicting not only each reader’s location and age, but also whether rapidly edited or uploaded content matches a dizzying array of criteria drawn from dozens of other UK laws (mainly criminal statutes). This presents an impossible challenge for platforms.

Facing huge fines and “super-complaints” over insufficient blocking, platforms will be forced to take a conservative approach, stripping out legal material and excluding legitimate but hard-to-profile visitors.

For non-commercial, public interest platforms like Wikipedia, the UK bill threatens to undermine their volunteer-led governance model. By contrast, the European DSA explicitly recognizes the difference between centralized, employee-driven content moderation and community-driven moderation systems.

Let’s be clear: the Wikimedia Foundation, which hosts Wikipedia and other volunteer-run free knowledge projects, supports efforts to make the Internet safe. When people are harassed or otherwise feel unsafe communicating online, their ability to access, create, and share knowledge diminishes. But Wikimedia believes that online safety can be achieved only when adequate safeguards for privacy and freedom of expression are in place.

Protecting children is a key concern. Unlike commercial services, Wikipedia does not target people of any age with paid advertising or profile them to amplify personalized content. But the UK’s proposed mandatory age verification (“age-gating”) would force platforms, including Wikipedia, to know every reader’s age, exposing both adults and children to new security and privacy risks.

The precedent is alarming. Even the best age assurance tools have proved inaccurate. If the UK compels us to collect such data about UK users, we can expect many other governments around the world to impose similar requirements.

In our view, the UK should explicitly recognize and support community-governed content moderation systems, which are effective both at combating harmful speech and at protecting human rights. The obligations placed on non-profit, public interest platforms with decentralized, volunteer-run content moderation models, such as Wikipedia, should be differentiated from those required of for-profit platforms, whose top-down content moderation is supported by advertising-driven business models designed to maximize shareholder profit.

The UK’s Online Safety Bill currently lacks the robust safeguards and clear definitions needed to ensure that it does not cause the removal of educational material or medical information, including documentation of the COVID-19 pandemic. It thus encourages excessive blocking, which could mean the loss of an accurate historical record and of access to reliable information.

What is and is not considered “harmful”, for example to children, depends on an individual’s viewpoint and preferences or, more worryingly, on a government’s views. From 1988 to 2003, the UK banned teaching children about the “acceptability of homosexuality as a pretended family relationship”. Marginalized voices, in particular, risk being silenced by top-down takedown and content suppression requirements aimed at “harmful content”.

UK policymakers should make significant changes. They should narrow the bill’s scope by carving out “harmful” content, and the duties to predict and filter “content harmful to children” and “illegal content” should be strictly defined. Lawmakers should remove the criminal liability clauses and do more to preserve reactive, non-predictive moderation as the general foundation of online safety, just as the EU has done. Any requirements related to the security of Internet users should also protect end-to-end encrypted communications.

Brexit supporters once argued that leaving the EU would free the UK from excessive European regulation. Unfortunately, the Online Safety Bill in its current form risks producing the opposite result.

Rebecca MacKinnon is Vice President, Global Advocacy, and Phil Bradley-Schmieg is Lead Counsel at the Wikimedia Foundation.
