4 min read

The Online Safety Act: Moving from policy to practice (key developments and potential implications)


By Maia Crockford, Hannah Clements & Tim Ryan | Published 09 September 2025

Overview

The Online Safety Act 2023 (the "Act") represents a landmark shift in the UK’s approach to digital regulation. Designed to make the UK “the safest place in the world to be online,” the Act imposes sweeping duties on online service providers to protect users - particularly children - from harmful and illegal content. However, due to its broad reach, the Act has divided opinion as to where the balance lies between protecting children and allowing users to access freely available, legal content. Some platforms, including Reddit and Bluesky, have introduced age-related access controls, though it remains unclear whether these are a direct response to the Act.

The Act applies to all services allowing user-generated content or interaction, including social media platforms, search engines, cloud-storage and data-sharing services, online forums and pornography sites.

Notably, the Act applies extraterritorially. Even companies based outside the UK must comply if their services are accessible to UK users and pose a material risk of harm.

Key obligations include:

  • Implementing robust age verification systems to prevent users accessing age-inappropriate content
  • Providing clear reporting mechanisms for users
  • Offering adults more control over the content they see
  • Conducting risk assessments and maintaining compliance records


Appetite and response to the Act

Despite its child protection aims, the Act has faced fierce opposition from civil liberties groups, tech experts, and political parties. Critics argue that the legislation:

  • Threatens freedom of expression
  • Undermines privacy rights, particularly through age verification requirements
  • Risks data breaches and misuse of sensitive personal information 

Reform UK, led by Nigel Farage, has pledged to repeal the Act if elected, citing concerns over censorship and government overreach, and a petition to repeal the Act has garnered over 530,000 signatures, reflecting widespread public unease.

Whilst the government maintains that the Act is proportionate and necessary - and has recently announced a tightening of the Act's rules in respect of self-harm content online - the debate underscores the tension between safety and civil liberties in digital regulation.


Recent Ofcom activity

Ofcom’s recent enforcement actions illustrate the real-world impact of the Act and its expansive reach:


Wikipedia legal challenge

Wikimedia (the nonprofit owner of Wikipedia) sought a judicial review of regulations that could classify Wikipedia as a Category 1 service, subjecting it to the strictest compliance requirements. These include verifying user identities - an approach Wikimedia argues would compromise contributor privacy and undermine the platform’s open model and freely available content.

The court rejected Wikimedia’s challenge, affirming that ministers had reasonably considered and dismissed exemption arguments. However, further legal challenges may follow if Ofcom formally categorises Wikipedia as a Category 1 service in a way that materially affects its operations. Users may also notice changes in how Wikipedia operates if it implements the required safeguards.


4chan provisional notice

Ofcom issued a provisional notice of contravention to 4chan for failing to comply with information requests and risk assessment duties under the Act, and has provisionally decided to impose a £20,000 fine "with daily penalties thereafter" for as long as the site fails to comply with its request. In response, 4chan has publicly stated through its legal representatives that it will refuse to pay the proposed £20,000 fine and any subsequent daily penalties.

The platform, incorporated in the United States, argues that Ofcom's notices create no legal obligations under US law and has described the investigation as an "illegal campaign of harassment". Lawyers for 4chan have asserted that American businesses are protected by First Amendment rights and will seek relief in US federal court if necessary. So, whilst Ofcom’s enforcement powers include applying to the courts to block access to non-compliant services, this development raises questions about the effectiveness of cross-border regulatory mechanisms.

This case will be a key test of Ofcom’s ability to enforce the Act against offshore providers.


Implications

These developments carry important implications for technology businesses and those in the digital supply chain:

  1. Jurisdictional reach: Non-UK companies serving UK users must assess their exposure under the Act. This includes cloud providers, SaaS platforms, and content-sharing services.
  2. Compliance burden: Categorisation as a Category 1 service brings heightened obligations (risk assessments, transparency reports, and user empowerment tools). Businesses must prepare for potential classification and associated costs.
  3. Privacy vs. safety: Age verification and identity checks may conflict with privacy-centric business models. Companies must balance compliance with user trust and data protection obligations.
  4. Operational risk: Failure to comply can result in fines of up to £18 million or 10% of global turnover (whichever is greater), and even business disruption orders such as blocking access or cutting off payment services. Ofcom recently published a statement on fees and penalties under the Act, including how they are calculated and imposed.
  5. Strategic decisions: Some platforms may choose to disable features to reduce applicable compliance duties. Organisations should assess whether their suppliers or partners are making such decisions and plan accordingly.
