In Britain, Parliament is debating the oft-revised U.K. Online Safety Bill, which seeks to regulate harmful and illegal content on the internet for children and adults. Without further modifications, however, the bill could undermine that laudable goal.
To be enforced by Ofcom, the U.K.'s digital communications regulator, the bill creates a formal "duty of care" for online platforms to remove harmful content from their websites, with additional responsibilities for sites that also serve young people. If the bill passes, adult websites would need to establish age verification and commit to removing illegal content, such as content depicting hate crimes, sexual abuse, and terrorism. Websites with young users would need to prevent them from seeing illegal content as well as legal but potentially harmful content, such as material promoting eating disorders, self-harm, or suicide. Penalties for non-compliance include fines, and platforms and tech executives could face criminal liability for failing to answer Ofcom's information requests.
As I've written previously, managing harmful content online, and creating specific youth-protection schemes, raises a host of thorny questions. In a recent blog looking at a U.S. youth online safety bill, I wrote that content moderation is technically challenging because moderation systems are not perfectly accurate at flagging inappropriate content; they can overcorrect and censor acceptable information. In another piece about children's privacy, I noted that definitively identifying internet users' age online is technically challenging and undermines everyone's privacy.
The latest version of the U.K. Online Safety Bill fails to strike the right balance between content moderation and censorship, and it fails on privacy.
Content moderation creates a host of challenges: defining what content is harmful, avoiding censorship of content that is not illegal, and encoding all the rules into an algorithm. Even with clear definitions, content moderation is technically challenging.
Content moderation algorithms are not perfectly accurate at flagging harmful content. An algorithm that is 98% effective still lets 2% slip through the cracks, and on popular websites that 2% could amount to millions of posts. It remains unclear whether firms will be punished if some illegal content passes through their filters. Free speech advocates worry that, given the harsh penalties and the technical difficulty of compliance, firms will overcorrect and remove too much lawful content.
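To make that scale concrete, here is a rough back-of-the-envelope sketch in Python. The post volumes, harm rates, and error rates are illustrative assumptions, not figures from the bill or from any particular platform; the point is simply that small error rates become large absolute numbers, in both directions, at internet scale.

```python
# Back-of-the-envelope sketch of moderation error at scale.
# All figures below are hypothetical assumptions for illustration only.

daily_posts = 500_000_000      # assumed daily posts on a large platform
harmful_share = 0.001          # assume 0.1% of posts are actually harmful
catch_rate = 0.98              # assume the filter catches 98% of harmful posts
false_positive_rate = 0.002    # assume 0.2% of benign posts are wrongly flagged

harmful_posts = daily_posts * harmful_share
missed_harmful = harmful_posts * (1 - catch_rate)                       # harmful posts that slip through
wrongly_removed = (daily_posts - harmful_posts) * false_positive_rate   # lawful posts removed in error

print(f"Harmful posts missed per day: {missed_harmful:,.0f}")
print(f"Lawful posts wrongly removed per day: {wrongly_removed:,.0f}")
```

Under these assumed numbers, a 98%-effective filter still misses about 10,000 harmful posts a day, while a seemingly tiny 0.2% false-positive rate wrongly removes roughly a million lawful posts. Harsh penalties push firms toward the second kind of error.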
In addition to the risk of censorship, the bill threatens privacy in two ways. First, it requires users to disclose more personal data to access websites. Second, it requires a backdoor on encrypted messaging platforms so that Ofcom can scan private messages for illegal conversations or content.
There is currently no easy way to verify age online without the user voluntarily giving up personal information. While the main trend in privacy legislation is toward decreasing the amount of personal information consumers must provide to apps and websites, the U.K. bill points in the opposite direction, requiring users to create accounts and verify their age through technologies such as facial recognition checks or uploading a government-issued ID card to a website.
As if disclosing personal information to every website weren't enough, the bill also contains provisions allowing Ofcom to search private messages on encrypted platforms. As currently written, the bill gives Ofcom sweeping new powers to surveil citizens regardless of any wrongdoing. Seventy organizations, cybersecurity experts, and elected officials signed a letter warning against allowing Ofcom to search private messages. Their central argument is that a backdoor created for the government is a backdoor available to anyone, and that it undermines British citizens' right to privacy.
The U.K. government isn't the only entity searching for a better way to make the online space safer for users. This bill, however, stifles free speech online while undermining too many people's privacy.