Should under-16s be banned from social media? What the UK is considering

21 January 2026

Should under-16s be banned from social media? As the UK opens a national consultation, this blog looks beyond the headlines to explore what’s really being proposed, why now, and what international examples reveal about the risks and opportunities of regulating children’s digital lives.

Ken Punter

The UK is considering a major change to how children use social media. The government has launched a national consultation asking whether under-16s should be banned from platforms such as TikTok, Instagram and Snapchat — a move that could reshape young people’s online lives and place new responsibilities on technology companies.

While headlines have focused on the word “ban”, the reality is more complex. The proposal sits alongside wider changes already under way to make the internet safer for children and young people.

What’s being proposed?

The consultation builds on the UK’s Online Safety Act, which already requires social media platforms to protect children from harmful content. The new proposals ask whether the UK should go further by introducing a minimum age for social media access, potentially set at 16.

The government is also exploring:

  • stronger age checks when people sign up to platforms
  • limits on addictive design features such as infinite scrolling
  • clearer expectations on how platforms protect children
  • guidance for schools and families on healthier digital habits

This marks a shift away from relying on parents and schools alone, towards holding technology companies directly responsible for children’s online safety.

When could this happen?

No decision has been made yet. The consultation will run through 2026, with the government expected to respond later in the year. If new laws are introduced, they would likely come into force from 2027 onwards.

In the meantime, the Online Safety Act is already being enforced, requiring platforms to introduce stronger age checks and child-safety measures.

Why now?

Concerns about the impact of social media on children’s wellbeing have been growing for years. Parents, teachers and health professionals have warned about:

  • exposure to harmful or distressing content
  • cyberbullying and online abuse
  • addictive design features that encourage endless scrolling
  • disruption to sleep, attention and mental health

Ministers argue that voluntary action by platforms has been too slow and inconsistent. The consultation reflects a clear change in direction: responsibility for online safety must sit with the companies that design and profit from these platforms.

Who supports the idea?

Many parents and child protection groups welcome stronger action. They argue that existing age limits are poorly enforced and that younger teenagers are routinely exposed to adult content and algorithmic systems designed to maximise engagement rather than wellbeing.

Campaign groups such as the Molly Rose Foundation, set up after the death of 14-year-old Molly Russell, have played a key role in pushing the issue onto the political agenda. The Foundation argues that children should not be left to navigate powerful digital systems without meaningful protection.

There is also strong public support for better age verification, replacing the familiar “tick to confirm” age boxes with more reliable checks.

Any concerns about an outright ban?

Some of the most thoughtful criticism comes from people who strongly support child safety but worry about unintended consequences.

Ian Russell, Chair of the Molly Rose Foundation, has warned that a blanket ban could be a “sledgehammer approach” that distracts from the real issue — unsafe platform design. He argues that the priority should be fixing harmful algorithms and recommendation systems, not simply removing young people from online spaces.

Mental health organisations share similar concerns. The Mental Health Foundation has warned that bans could cut off vulnerable young people from online support and communities they rely on, particularly where offline help is limited.

Others raise privacy concerns about age verification that relies on ID, biometrics or facial scans. Evidence from the UK and US suggests such systems can drive people to use VPNs or other workarounds, creating new risks rather than removing them.

How would it work in practice?

If a ban were introduced, responsibility would sit with platforms, not families or young people. Companies would be required to prevent under-16s from creating or maintaining accounts, using age-checking tools at sign-up and possibly over time.

This could involve facial age estimation, secure data checks or automated systems to detect underage accounts. Platforms that fail to take reasonable steps could face substantial fines.

The challenge will be ensuring that these systems are accurate, privacy-friendly and inclusive — particularly for young people without passports, bank accounts or access to official ID.

What’s happening elsewhere?

The UK is not alone in exploring this idea.

  • Australia introduced a nationwide under-16 social media ban in December 2025. Millions of accounts have already been removed, though concerns remain about workarounds and migration to smaller platforms.
  • Malaysia plans to introduce a similar ban, linking age verification to national identity systems.
  • In the United States, age-verification laws are spreading at state level, prompting legal challenges and resistance from privacy groups.
  • UNICEF has cautioned that age bans alone will not keep children safe, arguing for a child-rights approach that combines regulation, safer design and digital education.

These examples show that while bans are possible, they are not simple — and results are mixed.

What happens next?

The government will review consultation responses and decide whether to legislate, strengthen existing enforcement, or focus on safer design rather than access restrictions. Schools are also likely to be given clearer guidance on mobile phone use.

What this really means

At its heart, this debate is about how we protect children without cutting them off from the world they are growing up in.


Ken Punter

Ken is a Senior Lecturer in Marketing and Course Leader for BA (Hons) Advertising & Digital Marketing. He has worked in digital marketing and communications since the late 1990s, with experience on both the agency and client side, leading digital teams and contributing to innovations in online marketing and social media.

He teaches digital marketing across undergraduate and master’s courses and speaks and writes on digital marketing and social media.