The UK’s Online Safety Bill, a landmark piece of legislation that aims to regulate the country’s online content, has just been introduced into Parliament after undergoing significant revisions.
The bill has been in the works for about five years, and its main objective is to regulate online content so as to make the UK the safest place in the world to be online. It is perhaps most famous for legally requiring pornographic websites to verify users’ ages, and, yes, that’s still in there.
According to The Independent, the government has strengthened several areas since the previous draft, one of which is shortening the time company executives have to comply with requests for information from Ofcom, the UK’s communications regulator. The previous draft gave executives two years after the bill becomes law before they could be held criminally liable; the revised draft shortens that window to two months.
What’s new and what was tweaked
There are other notable changes in the bill.
Company managers could also be held criminally liable by Ofcom if they (1) destroy evidence, (2) fail to attend interviews with the regulator, (3) provide false information in interviews with the regulator, and (4) obstruct Ofcom when it enters company offices.
Platforms that host user-generated content, such as social media platforms and search engines, would not only have a duty of care to protect users from scams and fraud conducted by other users, but also a duty to protect them from “paid-for fraudulent ads,” a category that includes unlicensed financial promotions and ads from fake companies. To do this, the revised bill proposes that social media platforms and search engines must put in place “proportionate systems and processes to prevent the publication and/or hosting of fraudulent advertising on their service and remove it when they are made aware of it.”
“We want to protect people from online scams and have heard the calls to strengthen our new internet safety laws,” Culture Secretary Nadine Dorries is quoted as saying in The Guardian. “These changes to the upcoming online safety bill will help stop fraudsters conning people out of their hard-earned cash using fake online adverts.”
Further down the list of changes, there is now a requirement to report any child sexual abuse content encountered to the National Crime Agency (NCA).
News content will also be exempt from the regulations, in order to protect free speech.
Cyberflashing, the act of sending unsolicited sexual images to recipients, who are usually girls and young women, would also become a crime. Offenders would face the same maximum sentence as for indecent exposure: a two-year prison term.
The bill also includes proposals to punish digital “pile-ons” and the sending of threatening social media posts and hoax bomb threats.
Finally, arguably the most notable and controversial revision in the draft is the bill’s changed approach to “legal but harmful” content. As the phrase suggests, this refers to content that is not in itself illegal but could harm whoever encounters it online.
The slippery slope of “legal but harmful” content
The updated bill requires social media platforms to address their approach to “legal but harmful” content in their terms of service (ToS). It also proposes that such platforms conduct a risk assessment of the possible harms users might encounter while using their service.
Many free speech advocates, including members of the UK’s governing Conservative party, have expressed concern over the possible removal or suppression of such content. In a post, Dorries reassures her readers: “Companies will only be required to remove ‘legal but harmful’ content if it is already banned in their own terms and conditions. This only applies to the biggest platforms carrying the highest risk, and we are updating the legislation to ensure platforms focus on priority categories of harm that are set out in secondary legislation.”
Judging by some of the comments on the post (highlighted in this Twitter entry), at least some readers were not moved by Dorries’ rhetoric. The Open Rights Group (ORG), a UK-based organization working to protect individuals’ digital rights and freedoms, discussed the harms of the Online Safety Bill in December 2021, calling for the “legal but harmful” clauses to be removed to “ensure that the focus of the legislation remains on its stated purpose—protecting the well-being of individuals”.
Jim Killock, executive director of the ORG, describes “legal but harmful” as a censor’s charter. “Civil society groups have raised the warning, Parliament has raised the warning, the government’s own MPs have raised the warning, but the government has ignored them all,” he said. “The online safety bill will outsource decisions about what we can see online from British courts, Parliament and police to the terms of service documents of social media platforms drafted by Silicon Valley lawyers.”