The UK Online Safety Act: What service providers need to know
By David Varney, Partner, and Nicole Simpson, Trainee Solicitor, Burges Salmon
After almost four years of significant debate and media discussion in the UK, the Online Safety Act (“OSA”) is now officially law, having received Royal Assent on 26 October 2023. The OSA aims to regulate online safety by placing legal responsibility on online service providers to prevent and remove specified types of harmful content, particularly content deemed harmful to children.
Ofcom is the appointed regulator and will enforce the OSA. Providers who fail to comply face significant penalties: Ofcom can fine platforms up to £18 million or 10% of their global annual revenue, whichever is higher.
Who does it apply to?
As well as UK service providers, the OSA applies to providers of regulated services based outside the UK that provide services to UK-based users. The UK Government expects at least 25,000 companies to be in scope of the OSA, including:
- “User-to-User Services” – Providers of internet services that allow users to encounter content generated, uploaded, or shared by other users. This is likely to include social media platforms such as TikTok and Snapchat, as well as any platform with a user-to-user messaging feature; and
- “Search Services” – Providers of search engines which enable users to search multiple websites and databases.
Companies in scope will be categorised by Ofcom as either “Category 1” services or “Category 2A or 2B” services, with Category 1 services facing the more onerous obligations.
What does it cover?
The OSA imposes new duties of care on in-scope services, including:
- Duties to carry out suitable and sufficient illegal content risk assessments. This will require providers of online services to maintain a clear understanding of the harms that users might face, and to implement effective risk management processes to mitigate them.
- Duties regarding illegal content. Online services will need to take proportionate measures to mitigate and manage risk in relation to illegal content, which importantly includes preventing users from encountering such content on their services in the first place. This marks a significant change: online service providers were previously only required to act rapidly to remove unlawful content once put on notice of its presence. Services must also include provisions in their terms of service explaining how they protect users, and these provisions must be clear and accessible.
- Duties in regard to content reporting and complaints. Services will need to give users straightforward ways to report illegal content, and must operate an accessible complaints procedure. Notably, this complaints procedure must also provide scope for content to be removed.
- Duties in regard to user empowerment. Services must include features that allow users to control and manage the harmful material they see online, and must also carry out risk assessments in relation to this duty.
- Duties in regard to fraudulent advertising. Services must prevent individuals from encountering fraudulent advertisements, minimise the length of time for which such advertisements are visible, and swiftly remove them once reported.
Further rules apply where services are deemed likely to be accessed by children. The above duties are qualified by a measure of proportionality, reflecting the size and capacity of the online service provider in question.