The European Parliament has taken a major step toward reshaping how minors access social media, backing a proposal that would effectively bar children under 16 from using platforms such as Instagram, TikTok, and Snapchat without strict parental consent. While not yet law, the move reflects growing concern across democracies about Big Tech’s impact on young minds, a debate that also resonates strongly in India, where millions of children are online, but protections remain patchy.
What exactly has the EU proposed?
In a recent vote, Members of the European Parliament (MEPs) overwhelmingly supported a resolution calling for a “digital minimum age” of 16 across the EU for social media, video-sharing platforms, and AI companions. The motion passed with 483 votes in favour, 92 against, and 86 abstentions, signalling strong cross-party backing.
Under the proposal, children aged 13–16 could access these platforms only with verified parental consent, while those under 13 would be completely barred. Platforms would be required to verify users’ ages and obtain confirmed parental approval before allowing teens to join.
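The tiered structure described above can be sketched as a simple rule. This is an illustrative reading of the resolution, not legal text; the function and enum names are invented for clarity.

```python
from enum import Enum

class Access(Enum):
    BARRED = "barred"
    PARENTAL_CONSENT = "parental consent required"
    ALLOWED = "allowed"

def eu_access_tier(age: int) -> Access:
    """Age tiers as described in the EU Parliament resolution (illustrative)."""
    if age < 13:
        return Access.BARRED            # under-13s fully barred
    if age < 16:
        return Access.PARENTAL_CONSENT  # 13-15 need verified parental consent
    return Access.ALLOWED               # 16+ unrestricted

print(eu_access_tier(14))  # Access.PARENTAL_CONSENT
```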
However, this is a non-legislative resolution, a political signal rather than a binding law. It now urges the European Commission to draw up formal legislation, a step that appears likely given the decisive vote.
Why are European lawmakers worried?
The Parliament’s concerns reflect those of parents globally. Today’s social media platforms are designed to maximise engagement, often at the cost of young users’ wellbeing. Lawmakers warn that children face a “cocktail of risks” online, including addictive design features such as infinite scroll, autoplay videos, and reward loops that keep them glued to screens.
MEPs also highlighted rising cyberbullying, exposure to self-harm and eating-disorder content, sexual exploitation, and the growing misuse of deepfakes for harassment and blackmail. Another major worry is commercial manipulation: aggressive, data-driven advertising targeted at children who are unable to recognise persuasive tactics. Platforms routinely collect large volumes of data on young users’ emotions, behaviours, and vulnerabilities to fuel these targeted systems.
Lawmakers argue that existing regulations, including the Digital Services Act, are insufficient when platforms are fundamentally designed to maximise time spent online rather than safeguard minors. The resolution notes that parents increasingly struggle to manage their children’s digital lives because these services are “not designed for kids.”
For Brussels, the move is also about shaping global standards. Just as the GDPR became a benchmark for data protection worldwide, EU regulators hope their approach to children’s online safety will influence policies in other regions.

How would age verification actually work?
This is where things get technically complex. The Parliament supports creating an EU-level age-verification app and integrating these checks into the European Digital Identity (eID) wallet, a digital credential system being introduced across member states.
The idea is to let users prove they are above a certain age without handing over sensitive personal data each time they open an app. Privacy advocates have long warned that crude age-verification methods, such as uploading ID documents, could create new security risks and normalise the surveillance of children online.
Lawmakers insist any system must be accurate, privacy-preserving, and non-discriminatory. Crucially, they say platforms cannot simply outsource responsibility to automated tools; human oversight and accountability must remain.
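The privacy-preserving idea, proving an age threshold without revealing identity, can be illustrated with a minimal signed attestation: the issuer (such as an eID wallet) signs a claim containing only an over-16 flag, and the platform verifies the signature without ever seeing a birth date or ID document. This is a toy sketch using a shared secret; the actual EU system would rely on public-key certificates and the eID wallet's own protocols, and every name here is hypothetical.

```python
import base64
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-shared-secret"  # hypothetical; a real eID issuer would use public-key signatures

def issue_attestation(over_16: bool) -> str:
    """Issuer signs a claim carrying only a boolean age flag, no identity data."""
    claim = json.dumps({"over_16": over_16}).encode()
    sig = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return base64.b64encode(claim).decode() + "." + sig

def verify_attestation(token: str) -> bool:
    """Platform checks the signature and learns only the boolean claim."""
    payload, sig = token.rsplit(".", 1)
    claim = base64.b64decode(payload)
    expected = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid attestation")
    return json.loads(claim)["over_16"]

token = issue_attestation(True)
print(verify_attestation(token))  # True: age confirmed without sharing identity
```

The design point matches what privacy advocates ask for: the platform receives a yes/no answer and a signature, never the underlying ID document.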
Enforcement, if implemented, would be strict. The resolution proposes substantial fines and even operational bans for platforms that consistently fail to protect minors. In severe cases of non-compliance, senior company executives could face personal liability, an attempt to ensure responsibility reaches the highest levels of corporate leadership.
Regulators would also target “persuasive technologies” aimed at children, including engagement-based recommendation algorithms, dark patterns that push users to share data, loot boxes that mimic gambling, and influencer marketing directed at minors.
What changes for platforms and teenagers?
If turned into law, the proposal would create one of the toughest regulatory challenges yet for Big Tech in Europe. Platforms would be required to fundamentally redesign how they operate for younger users.
Key changes would include disabling addictive features by default for minors, meaning no infinite scroll and no autoplay. Engagement-based recommendation systems, which use algorithms to determine what content children see, would be banned for users under 16. Targeted advertising would face stricter limits, and the commercial use of “kidfluencers” (children promoting products online) would come under tighter regulation.
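The "safe defaults" redesign could look something like the following sketch: a settings object whose risky features are switched off whenever the account belongs to a user under 16. The class and field names are invented for illustration; no platform's actual configuration is being described.

```python
from dataclasses import dataclass

@dataclass
class FeedSettings:
    infinite_scroll: bool = True
    autoplay: bool = True
    engagement_ranking: bool = True  # engagement-based recommendations
    targeted_ads: bool = True

def settings_for(age: int) -> FeedSettings:
    """Hypothetical safe-by-default settings for under-16 accounts."""
    if age < 16:
        return FeedSettings(
            infinite_scroll=False,
            autoplay=False,
            engagement_ranking=False,
            targeted_ads=False,
        )
    return FeedSettings()

print(settings_for(15).autoplay)  # False: autoplay off by default for minors
```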
For teenagers, the impact would depend on how each EU member state enforces age verification and parental consent rules. In some countries, social media could become largely inaccessible until age 16; in others, teens may have to navigate parental consent systems that could be either straightforward or complex.
Even as a non-binding resolution, the move places significant pressure on both EU regulators and technology companies to adopt what lawmakers describe as a “safety-first” approach to protecting minors online.
What does this mean for India?
India faces similar challenges but has taken a different approach, at least for now. Under the Digital Personal Data Protection Act (DPDP) 2023, draft rules require verifiable parental consent for anyone under 18 before their personal data can be processed on social media platforms. This creates a gatekeeping mechanism without imposing fixed age bans like the EU’s proposed under-16 cutoff.
The Indian approach shifts responsibility to parents and guardians rather than creating hard barriers. But this raises significant enforcement challenges in a country with over 800 million internet users, many of them young, and where vast digital divides and varying levels of parental tech literacy complicate oversight.
Most platforms operating in India currently follow a self-imposed minimum age of 13, similar to U.S. COPPA norms. But critics warn that the consent-based system may fail very young children: even toddlers could theoretically access apps through fraudulent or easily bypassed parental consent. This contrasts with Australia’s recently passed law introducing a strict under-16 ban, set to take effect from December 2025.
The EU push could inspire tougher Indian measures. During recent Supreme Court hearings on exploitative online content, including cases linked to shows like India’s Got Latent, justices urged Aadhaar-linked age verification for platforms to curb minors’ access to obscene or inappropriate material. Legal experts suggest Delhi might harmonise with global trends like the EU’s DSA or Australia’s bans, as it finalises DPDP rules or updates IT regulations, though any changes would need to balance child protection against free speech in India’s youth-heavy democracy.
The broader question remains: can democracies protect children online without creating intrusive surveillance systems or restricting their digital freedoms?
With Europe pushing for stricter age barriers and India relying on parental consent, the global debate on children’s online safety is entering a new phase, one that could reshape how a generation experiences the internet.
FAQs
What has the EU proposed for children’s social media use?
The EU has proposed setting a digital minimum age of 16, with strict parental consent required for ages 13–15 and a complete ban for children under 13.
Is the EU age limit for social media already a law?
Not yet. It is a non-binding resolution, but the European Commission is expected to draft legislation following strong support in Parliament.
How will age verification work under the EU plan?
The EU is considering an age-verification app and integration with the European Digital Identity (eID) wallet to confirm age without exposing sensitive data.
What does India’s DPDP Act require for minors on social media?
India’s draft DPDP rules mandate verifiable parental consent for all users under 18 before their data can be processed on social platforms.
Why is there no strict age cutoff for minors in India?
India focuses on parental responsibility rather than fixed bans, though enforcement challenges and calls for stronger verification continue to grow.
Could India introduce stricter age rules like the EU?
Yes. Courts and experts have suggested Aadhaar-linked verification and tighter regulations, and India may revise rules as global standards evolve.