You may have noticed something strange when visiting certain websites lately: requests for ID, popups asking for your age, or access denied unless you jump through some new hoop. This is all thanks to the UK’s Online Safety Act, which officially became law in 2023 and began its (let’s be honest) chaotic enforcement rollout in July 2025.

What's changed?
In short: a lot. The Act, spearheaded by the UK government and enforced by Ofcom, aims to make the internet a safer place (ostensibly for children) by holding online platforms to account for “harmful content.” However, in practice, it has led to sweeping changes in how both users and website owners experience the web.
For users:
- You’re increasingly being asked to prove your age, often via third-party ID verification firms.
- Some sites have already limited access for UK visitors or shut off features altogether.
- You might be seeing warnings, restricted content, or new barriers just to browse forums, comment sections, or media platforms.
For website owners:
- If your site allows users to post content or interact (yes, even with that lovingly maintained fishkeeping forum from 2008), you may be legally required to implement age verification and content moderation protocols.
- You’re now potentially liable for “harmful” content your users might post, even if it’s entirely legal.
- The scope includes “user-to-user services” and “search services,” which, under the Act’s definition, covers a staggeringly broad swathe of the web.
Ofcom’s guidance lays out the requirements and timelines for compliance. Spoiler alert: it’s complex, resource-intensive, and not built with small site owners in mind.
The problems
While protecting children online is an admirable goal, the incredibly broad scope of the Act suggests that child safety is a smokescreen: a way to get people on board with the idea while expanding censorship and surveillance. The execution is deeply flawed.
1. Killing the Community Web
We’re already seeing small, independent site owners shutting down forums and community sites because they simply can’t afford to comply. Think niche hobby groups, special interest communities and fan sites. Places that make the internet rich and weird and wonderful. These are all now at risk because of vague, burdensome legal obligations.
What began as a bid to stop children from accessing explicit content has turned into a net cast so wide that almost anyone hosting a conversation online could fall under its scope.
2. Overreach
The Act’s wording is so broad that even Wikipedia – a volunteer-maintained educational resource that doesn’t even track your activity – has been caught up in its dragnet. If that’s not mission creep, it’s difficult to know what is.
The Wikimedia Foundation is currently challenging the Act legally.

3. Think of the Children™
It’s impossible not to mention the “protect the children” argument. It’s a good soundbite, but in practice, it often feels like a way to smuggle in draconian controls on speech, privacy, and encryption.
Furthermore, in one of the more stunning examples of a straw-man argument, Tech Secretary Peter Kyle posted on Twitter/X comparing opponents of the act to sexual predators. (Amazingly, this post is still public as of 01/08/2025.)
It seems unlikely that many dissenters are on the side of predators; most are merely suggesting that the implications and scope of the Act go far beyond expectations, and that there may be better ways to achieve the goal.
Instead of encouraging digital literacy or helping parents use tools that already exist (parental controls, screen time limits, network filters), we’ve jumped straight to legislation that asks tech firms to break the internet for everyone.
Between this and a staggering 1 in 4 children arriving at school not toilet-trained, it seems it’s simply too inconvenient to ask parents to, well… parent their children.
The kicker is, when asked about how to stop children circumventing verification checks, Oliver Griffiths from Ofcom suggested that: “Parents having a view in terms of whether their kids have got a VPN, and using parental controls and having conversations, feels a really important part of the solution.” 🤦🏻
Why not go one small step further and have parents check if their children are accessing “harmful” content too?
Or, better yet, ask social media platforms to put more effort into preventing “harmful” content appearing on their platforms in the first place? (Looking at you YouTube 👀)
4. Your data, up for grabs
To enforce age verification, many sites cannot implement checks in-house in a way that is both affordable and compliant with other privacy laws, so they are turning to third-party firms – often overseas and unregulated. That means your passport photo, driver’s licence, or facial scan could end up in a database you’ve never heard of, in a jurisdiction with murky data protection laws.
This isn’t theoretical either. In July 2025, the Tea app (a women-only advice and dating platform) suffered a significant data breach. Over 72,000 images were leaked, including 13,000 selfies and government-issued IDs used for user verification. Some of the data was exploited to identify users’ approximate locations and was distributed on forums like 4chan.
This breach highlights the serious risk of requiring ID verification through third parties. Even platforms with good security intentions can become targets. Without tight regulation or robust oversight, there’s little recourse when it goes wrong. Forcing websites to collect sensitive personal documents creates a high-value target for hackers and significantly increases the legal and ethical risks for operators.
We’re being asked to trade privacy for security and getting neither.
5. It doesn’t even work
In reality, even Ofcom have admitted that “There will be dedicated teenagers who want to find their way to porn, in the same way as people find ways to buy alcohol under 18. They will use VPNs.”
It’s worth noting that Ofcom officials are apparently free to publicly mention how to get around the legislation they are tasked to enforce, but if a platform falling under the scope of the Act does so, they could be fined.
Government sources insist they’re not planning to ban VPNs, but with MPs like Sarah Champion calling for it, and traffic to VPN sites skyrocketing since the Act’s rollout, it’s not a stretch to imagine the next move.
Remember when they said encryption wouldn’t be tampered with? Now Apple is suing the UK government over the demand to weaken its encryption for law enforcement access.
So yes, a VPN ban might sound absurd… until it isn’t.
6. Legal Action & Dissent Mounting
It’s not just Apple and Wikimedia. There’s growing resistance from tech firms, privacy advocates, and international observers.
The EFF has weighed in, calling the Act a “digital rights nightmare.”
Meanwhile, petitions against the Act – such as one calling for its repeal – quickly gathered hundreds of thousands of signatures, only to be met with shrugging indifference in official responses.
At what point do we admit that the public doesn’t have a voice in these laws?

So where do we stand?
Many have now said it plainly: this Act is a mess.
It’s heavy-handed, poorly scoped, and damaging to the open web. It demands impossible standards from small businesses while letting big platforms quietly pass the buck.
Worse still, it pushes us further toward authoritarian surveillance and censorship, cloaked in the language of safety.
If encryption is banned or backdoored, if VPNs are restricted, and if every UK internet user is tracked, what’s left of digital freedom?
Successive UK governments have spent decades trying to undermine privacy and encryption. This highlights both an agenda of expanding the state’s surveillance apparatus and a shocking lack of understanding of how technology actually works, and how people use it in the modern world.
The Online Safety Act is the latest excuse. How long before messaging apps are banned? Or browser extensions? Or Tor?
If this is the future of the internet, it’s one where convenience, creativity, and community are all sacrificed on the altar of control.
Ethical Pixels will comply where we must and help others meet their obligations, but we’ll also be calling it what it is.
What can you do?
If you feel passionately about this issue, there are still ways you can make your voice heard:
- Sign the petition calling for repeal. Despite the dismissive official response it has already received, it’s useful to have on record just how people feel about the Act.
- Keep up the pressure on our representatives. This form from the Open Rights Group makes it easy.
- Donate to the nonprofit groups leading the charge for a free and open internet. Ethical Pixels donates to these organisations every year and encourages others to do the same.
This post was written by Larry Brangwyn, and does not necessarily reflect the formal views of Ethical Pixels, but it does reflect our collective frustration with poorly made laws that hurt the people they claim to protect.
