
The UK’s Child Safety Law Has a Moustache Problem
Dan Saltman

- The UK’s Online Safety Act has required platforms to assess child-safety risks, reduce harmful content exposure, and use age verification where appropriate since July 2025.
- Internet Matters’ May 2026 research found that 46% of children say age checks are easy to bypass, while 32% admit they have already done so.
- Children described simple workarounds including false birth dates, using a parent’s or sibling’s ID, submitting someone else’s face, using game avatars, or even drawing on facial hair to fool age estimation.
- Parents are also part of the bypass problem: 26% have allowed their child to get around age checks, including 17% who actively helped and 9% who looked the other way.
- Age checks are not stopping harmful exposure inside platforms, with 49% of surveyed children saying they encountered harmful content online in the past month.
- The report highlights the unresolved trade-off between stronger age assurance and privacy risks created when platforms collect face scans, ID photos, or other sensitive verification data.
A mother recently discovered her 12-year-old son using an eyebrow pencil to draw a moustache on his face to bypass the facial age estimation check on a social media platform – and it worked.
That anecdote, reported directly in a new study from Internet Matters, is one detail in a wider picture that raises serious questions about whether the UK’s Online Safety Act is delivering on its core promise: keeping children away from harmful online content. The short answer, based on the data, is not yet.
What Is the Online Safety Act and What Has It Required Since July 2025?
The Online Safety Act came into force in July 2025. Under its Protection of Children Codes, platforms operating in the UK are legally required to assess the risks their services pose to children, reduce exposure to harmful content, and implement age verification where appropriate. Ofcom, the UK communications regulator, is responsible for enforcement and has begun investigating non-compliant services.
The legislation was described at the time as one of the most ambitious attempts by any government to regulate children’s experiences online. It placed the UK alongside Australia, which introduced its own social media minimum age law in late 2024, at the leading edge of a global shift toward requiring platforms to verify user ages.
There have been measurable effects. Internet Matters reports that the ten most-visited pornography sites in the UK, including Pornhub, have introduced robust age checks for UK users since the Act took effect. According to UK regulator Ofcom, visits to pornography sites have fallen by a third since age verification rules came into force, and 58% of parents believe the measures are already improving children’s safety. Around seven in ten children (68%) and parents (67%) report noticing more safety features on the platforms their children use.
But the same research shows the progress stops well short of what the legislation was designed to achieve.
32% of Children Have Bypassed Age Checks. Here Is How They Are Doing It.
In May 2026, Internet Matters published a report titled The Online Safety Act: Are Children Safer Online? as part of its annual Digital Wellbeing Index research programme. The study surveyed more than 1,000 UK children aged 9 to 16 and their parents, and supplemented the survey data with seven focus groups – four with children aged 11 to 16, and three with parents and guardians of children in the same age group.
46% of children say age checks are easy to bypass. Only 17% say they are difficult. A third – 32% – admit to having actually done it.
The methods children described vary in sophistication, but most require very little technical knowledge. The most common approach is simply entering a false date of birth. Platforms that rely on self-declaration rather than active verification have no mechanism to detect this. Children also reported using a parent’s or older sibling’s ID document when photo identification was required, and submitting videos of other people’s faces or video game character avatars to fool facial age estimation systems.
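To see why self-declaration offers no barrier at all, consider a minimal sketch of what such a gate amounts to. The function names and the 18-year threshold below are illustrative assumptions, not any platform’s actual code:

```python
from datetime import date

MIN_AGE = 18  # hypothetical threshold; illustrative only

def age_from_dob(dob: date, today: date) -> int:
    """Age in whole years implied by a declared date of birth."""
    years = today.year - dob.year
    # Knock off a year if the birthday hasn't happened yet this year.
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years

def self_declared_gate(declared_dob: date) -> bool:
    """Pass/fail based solely on what the user typed.

    There is no second signal to cross-check against, so a child who
    enters any date at least MIN_AGE years in the past sails through.
    """
    return age_from_dob(declared_dob, date.today()) >= MIN_AGE

# A 12-year-old who types 1990-01-01 passes exactly as an adult would.
print(self_declared_gate(date(1990, 1, 1)))  # True
```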
And then there is the moustache. The mother’s account, quoted directly in the Internet Matters report, reads: “I did catch my son using an eyebrow pencil to draw a moustache on his face, and it verified him as 15 years old.”
The fact that drawn facial hair fools a facial age estimation system is not a trivial detail. It points to a real technical limitation: AI-powered age detection relies on visual signals that can be manipulated with everyday cosmetics. These systems are not assessing identity – they are reading surface-level cues. A 12-year-old who appears older is indistinguishable, to the algorithm, from a 15-year-old. The Register also notes that using video game characters to fool video selfie systems has been a documented workaround since age verification rollouts began in the UK.
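The same structural weakness can be shown in a few lines. The toy “model” below is an assumption made for illustration – real estimators are trained neural networks – but the decision logic is the part that matters: the gate sees only appearance signals, never a verified identity.

```python
AGE_THRESHOLD = 16  # hypothetical cutoff for the gated feature

def estimate_age(visual_cues: dict) -> float:
    """Toy stand-in for a trained age-estimation model.

    Real systems regress an age from pixels, but the shape of the
    problem is the same: the output moves with whatever surface
    signals the model has learned to associate with age.
    """
    estimate = 12.0  # baseline for this illustrative example
    if visual_cues.get("facial_hair"):
        estimate += 4.0  # facial hair reads as older, real or drawn on
    return estimate

def facial_age_gate(visual_cues: dict) -> bool:
    # No identity anchor: the decision rests entirely on appearance,
    # so the same child lands on either side of the threshold
    # depending on what is drawn on their face.
    return estimate_age(visual_cues) >= AGE_THRESHOLD

print(facial_age_gate({"facial_hair": False}))  # False: blocked
print(facial_age_gate({"facial_hair": True}))   # True: the moustache works
```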
A Quarter of Parents Are Enabling the Bypasses
The more complicated finding in the Internet Matters report is not that children are circumventing systems without their parents’ knowledge. It is that a significant proportion of parents are actively involved.
26% of parents have allowed their child to bypass age checks. Of those, 17% actively helped. A further 9% chose to look the other way.
The reasons given were largely situational. Parents said they made individual judgments about specific content, rather than blanket decisions. One mother of a 13-year-old is quoted in the report: “I have helped my son get around them. It was to play a game, and I knew the game, and I was happy and confident that I was fine with him playing it.”
This creates a structural gap in the OSA’s enforcement model. Legislation designed to restrict children’s access to harmful content cannot operate consistently when a quarter of parents are facilitating bypasses based on their own assessment of what is safe. The law was written primarily to protect children from harmful content encountered without parental awareness. It does not have a mechanism for the cases where the bypass happens with parental involvement and consent.
As The Register puts it, age-gated content online is only as restricted as parents allow it to be – and a quarter of UK parents are choosing not to enforce it.
Children Are Still Encountering Harmful Content Regardless
Perhaps the most significant finding is that bypassing age checks is not the only route to harmful content. Half of the children surveyed – 49% – said they had encountered harmful content online in the past month. This applies even to children who did not bypass any verification.
The types of content children described encountering include violent content (12% of respondents), material promoting unrealistic body standards (11%), and racist, homophobic or sexist content (10%). All of these categories are supposed to be restricted under the Act’s Children’s Safety Codes.
Euronews reports that children in focus groups also described seeing the assassination of Charlie Kirk on their social media feeds. One 14-year-old girl told researchers: “I saw it on Snapchat. I broke down into tears and then told my mum immediately.”
This finding points to a problem that sits upstream of age verification entirely: content moderation and algorithmic delivery are still serving restricted material into the feeds of children who accessed those platforms through legitimate means. Age checks at the door do not help if harmful content is already circulating inside.
Public Confidence in the Government’s Approach Is Low
Only 22% of parents and 31% of children believe the government is doing enough to protect children online.
Both groups said they want stronger enforcement of the Act, stricter age checks, and restrictions on harmful features as the next steps – with some parents and children saying a social media ban for younger users would be more effective than the current approach. Others said a ban would be ineffective and potentially harmful to children’s social development.
Internet Matters CEO Rachel Huggins called on both government and industry to go further. As quoted by The Register, she said: “Stronger action is needed from both government and industry to ensure that children can only access online services appropriate for their age and stage and where safety is built in from the outset, rather than added in response to harm.”
Huggins also pointed to recent discussions between the Prime Minister and social media companies about tackling online harms as a timely opportunity for change.
What Major Platforms Have Actually Implemented
Since the OSA came into force, several of the UK’s most-used platforms have introduced or tightened safety measures specifically for younger users. According to Internet Matters’ own analysis, TikTok rewrote its UK Terms of Service in plain English and now applies daily screen time limits for under-18s by default. Instagram defaults teen accounts to private and limits targeted advertising aimed at minors. YouTube offers a separate YouTube Kids app with curated content and parental controls.
Pornhub and the nine other most-visited adult sites in the UK have introduced robust age checks for UK users, which Internet Matters describes as one of the clearest early wins of the Act’s enforcement. Internet Matters also reports that around 53% of children have “recently been asked to verify their age”.
However, as the bypass data makes clear, encountering an age check and being stopped by one are two different things. A check that can be defeated by a drawn-on moustache or a false birthday is providing the appearance of protection rather than the substance of it.
What Internet Matters Recommends
The Internet Matters report sets out five principles it says are necessary for meaningful improvement:

- Safety-by-design: embedding child protection into platforms from the start, not adding it after harm has occurred.
- A risk-based approach: the level of restriction on a given service should reflect the actual risk its content and features pose to children, rather than applying a uniform standard across all platforms.
- Age-appropriate experiences: access to content and features should be calibrated to where a child is developmentally, not determined by a single age cutoff.
- Highly effective age assurance: the checks themselves need to be robust enough to actually work.
- Media literacy: both children and parents need more support to navigate online risk, built into platforms and supported by schools and government.
The gap between these principles and current implementation is substantial. Defining and enforcing age-appropriate access at scale, without creating the privacy risks that come with collecting biometric and identity data from millions of users, remains technically and politically unresolved.
Age Verification and Privacy: The Trade-Off That Has Not Been Resolved
The UK’s push for robust age verification does not exist in isolation. It is part of a global legislative trend that has created its own significant problems around privacy and data security. Every platform that collects facial scans, government ID photos, or behavioural profiling data to verify age is also creating a new store of sensitive personal information that can be targeted by criminals.
As covered in detail on the Redact blog in February 2026, Discord announced a global rollout of teen-by-default settings starting in March, requiring face scans or ID verification for access to adult content. The announcement triggered significant user backlash, and Discord subsequently delayed the global rollout to the second half of 2026. That rollout came against the backdrop of an October 2025 breach in which approximately 70,000 users had government ID photos exposed through a third-party support system – a direct consequence of collecting identity documents at scale.
The Electronic Frontier Foundation has argued consistently that age verification mandates create serious privacy and free speech risks, and that these laws often do more harm than good. As explored in the Redact analysis on age restrictions and social media, the core problem is that children are already bypassing restrictions by lying about their age during sign-up, and truly stopping a determined teenager requires the kind of invasive identity verification that carries its own serious risks for everyone.
The UK data now confirms that tension in practice. Age checks that are weak enough to be defeated with cosmetics do not justify the data collection they require. Age checks robust enough to be reliable raise legitimate concerns about what happens to that data when – not if – it is breached.
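One design pattern that narrows, without resolving, this trade-off is verify-then-discard: run the check, retain only the pass/fail outcome, and never write the face scan or document to storage. The sketch below is a hypothetical illustration of that pattern, not something the Act or the Internet Matters report prescribes.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class VerificationOutcome:
    passed: bool
    # Deliberately nothing else: no image, no document scan, no date
    # of birth. Only the yes/no outcome survives the check.

def verify_then_discard(
    id_photo: bytes, check: Callable[[bytes], bool]
) -> VerificationOutcome:
    """Run an age check without persisting the sensitive artifact.

    `check` stands in for whatever verification a vendor performs.
    The photo exists only in memory for the duration of the call;
    nothing written to storage can later be exposed in a breach.
    """
    outcome = VerificationOutcome(passed=check(id_photo))
    # No logging, no database write, no copy of `id_photo` retained.
    return outcome
```

Even this only helps if vendors actually adopt it: any copy kept for audit or model retraining recreates exactly the kind of data store that was breached in the Discord incident described above.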
What This Means for Parents Right Now
For parents, the report is a reminder that legislation alone cannot do the work of ongoing conversations with children about what they are doing online, what they are encountering, and why certain content carries real risks. The data shows that some parents are actively helping their children bypass the protections the law is designed to provide. Where that is a considered decision made with full knowledge of the content involved, it is a parenting choice. Where it happens without that awareness, no verification system can close the gap.
One concrete step that sits alongside any platform-level protection is reducing the amount of personal information that is publicly linked to you and your family online. Data brokers routinely build profiles from publicly visible social media activity. The more information is out there, the greater the exposure when a platform or a verification vendor is compromised. Redact lets you bulk delete posts, comments, messages, and activity across more than 25 platforms. The deletion runs entirely on your own device – none of your data passes through Redact’s servers.
The Online Safety Act represents a genuine attempt to make the internet safer for children in the UK. The evidence from this report is that it is not yet working well enough, and that the methods being used to enforce it carry trade-offs that have not been fully accounted for. Both of those things can be true at the same time, and taking them seriously is the only honest starting point for what comes next.