The Online Safety Bill returns to the House of Commons on 5th December

29th November 2022

The eagerly awaited Online Safety Bill (OSB) returns to the House of Commons on the 5th of December. In advance of the Lower House debating the Bill, the Government has gone on a public relations offensive to persuade the public and MPs that the new changes to the OSB strengthen rather than weaken it. Various important revisions were leaked or announced by the Government at the end of November. These include new offences intended to criminalise the creation of deepfake pornography and the encouragement of self-harm in social media messages. Now that the amended Bill is being published, social media companies and campaigners can finally start to examine the detail and practicality of the proposed new online regulation. As former Ofcom executive and SMB consultant Trevor Barnes explains, however, many questions about how the new regime will actually work in practice remain unanswered.

At last, a timetable for the OSB has been revealed by the Government. Re-introduced to the Commons on the 5th of December, it is due to pass through the Lower House by the end of January 2023, giving the Lords just over two months to scrutinise the draft legislation before the parliamentary session ends. Already some peers are warning that this may not be enough time to analyse such a high-profile Bill thoroughly and complete the legislative process.

With the publication of the re-introduced Bill, it is now possible to start assessing the latest major changes to the draft statute. The journey of the OSB through the Commons was paused back in July due to concerns that it gave insufficient protection to freedom of expression. These concerns focussed on the suggested new category of ‘legal but harmful’ content, which social media companies would be obliged to exclude from their platforms.

Ofcom, designated to be the powerful regulator of the new online rules, had well-founded worries about how it would enforce the difficult-to-define ‘legal but harmful’ category. The new Culture Secretary, Michelle Donelan, shared these apprehensions, and it is now official that the ‘legal but harmful’ category has been excised from the amended Bill.

To counter opposition to this change (the Labour Party has slammed it as “a major weakening” of the Bill), Donelan says it has been replaced by a “triple shield” to protect users. First, the duty on social media companies to protect users from illegal content is being bolstered by the creation of new crimes.

The revised Bill was, of course, always going to retain the central duties on tech companies to have robust systems in place to stop illegal content and fraudulent advertising appearing online, and to protect children. But in the wake of the tragic suicide of 14-year-old Molly Russell, linked by the inquest into her death to the online self-harm content she viewed, the Government clearly understood it needed to strengthen the OSB in some specific areas.

One such area is the introduction of a new offence covering the encouragement of self-harm in social media messages. The maximum sentence and fines for the offence will be announced later. Because posting such messages will be a crime, it follows that tech companies will be obliged to have processes in place to prevent them from appearing online – whether to children or adults.

Under the amended Bill, said Donelan, social media companies will have to remove “posts, videos, images and other messages that encourage, for example, the self-infliction of significant wounds”. The “for example” is significant. The new offence seems widely drawn and apt to cover potentially harmful messages about many matters other than self-harm. Watch this space.

The amended OSB will also be freighted with other novel offences to respond to concerns about sexual harassment (principally of women) in the digital age. They are intended to cover and help prevent the creation and dissemination of deepfake pornography (in which a face is mapped onto someone else’s body), the taking and sharing of intimate pictures of someone without consent, and the taking of surreptitious photos down a woman’s top (“downblousing”). Although the laudable aim is to make these activities unlawful, it is not yet clear what duties they will create for social media companies. For example, will they be obliged to create systems to stop intimate or downblousing photos from being shared?

The second part of the “triple shield” is a strengthened obligation of tech companies to enforce their terms and conditions – especially those about access for children. Many social media sites require children to be aged 13 and over to set up an account, but there are worries that “self-declaration” age checks used by numerous platforms are too weak. Many children simply lie about their age – as Ofcom revealed in research published in October.

There will therefore be a renewed regulatory focus on the risk assessments (in particular to protect children) which social media companies will be forced to carry out under the OSB and then have checked by Ofcom. It is far from clear how robust Ofcom will require online companies’ age checks to be. Will users be forced to prove their age by, for example, having a credit card (as with regulated pornography sites)?

The third “shield”, the Government says, is that under the OSB users will be able to choose to filter out ‘legal but harmful’ content – without tech platforms being under a duty to remove it. From the details of the revised OSB available so far, it is rather mysterious how this would work in practice and what duties (if any) this initiative would impose on the social media companies.

These filters may look at first sight like a clever compromise (some free speech platforms already offer such a filter). But they are fraught with regulatory beartraps, and Ofcom is wary. Not the least of these problems – if tech companies will be under a statutory duty to provide these filters – would be the need to define what content (legal but potentially harmful to adults) social media companies will be obliged to offer filters against. Remember that a major reason for dropping the fluid ‘legal but harmful’ category from the Bill was the difficulty of pinning down its meaning in legislation.

There is another important issue which Ofcom is starting to debate vigorously behind closed doors: how it will regulate online companies under the new legislation. To what extent will it need to view and assess individual pieces of content?

The OSB takes a compliance-based, rather than content-based, approach. This is akin to Ofcom’s regulation of Video Sharing Platforms (VSPs) like TikTok and Snapchat: Ofcom sets certain standards to protect VSP users, and regulated services must abide by them by creating systems and processes that ensure the necessary level of protection is achieved. VSP regulation was not designed to involve the review of individual content.

If a user visits some of the regulated VSP services that have been notified to Ofcom, however, they may be surprised by some of the material there and wonder (rightly, in my view) whether it complies with the regulator’s guidance. Take Bitchute, for example – a VSP created by users who believed YouTube was too restrictive. It contains, by way of illustration, a lot of anti-vaccination material that many people would regard as harmful and that would, in my view, probably breach the Ofcom Code if broadcast.

Some at Ofcom are arguing that the regulator must assess individual pieces of VSP content to evaluate a platform’s protection systems. Others, it seems, disagree and say this is not how the VSP regulation was designed to work. The same issue applies to Ofcom’s regulation of social media under the OSB.

Will Ofcom need to respond to complaints – or conduct its own investigations – regarding individual pieces of online material which may breach the new laws and standards enshrined in the OSB in order to fulfil its regulatory duties? This is just one of a host of unanswered questions about the forthcoming UK regulation of social media.