New Delhi: Meta on Friday announced a new phase of its teen safety initiatives in India, introducing enhanced protections for Instagram accounts used by minors.
As part of the update, teenagers under 16 will need parental approval to go live on Instagram or to turn off the image filters that block unwanted content in direct messages.
The company also revealed plans to expand these protections to Facebook and Messenger later this year. “These accounts will feature similar safeguards, including protection from unwanted contact, reduced exposure to sensitive content, and tools for managing time spent on the apps,” Meta said.
Parents will also gain increased oversight capabilities, enabling them to supervise how their teens interact across Meta platforms.
Instagram Teen Accounts are designed to offer a safer, more age-appropriate online experience while giving parents greater peace of mind. Since the feature's initial rollout in September 2024, more than 54 million teens worldwide have adopted these accounts.
“Young people deserve safe, age-appropriate online experiences, and these updates are part of our long-term commitment to building platforms that prioritize their well-being,” said Tara Hopkins, Global Director of Public Policy at Instagram.
“When we launched Teen Accounts on Instagram last year, our goal was to create technology that balances self-expression with built-in protections. In India—home to one of the world’s largest youth populations and a vibrant creator community—we’ll continue listening to the needs of both teens and parents. We’re encouraged that 97 percent of teens aged 13 to 15 globally have remained within these protective settings,” she added.
Instagram Teen Accounts include several built-in safety features that limit who can interact with teens and what kind of content they can access. These settings are applied by default, and parental approval is required to make them less restrictive for users under 16.
Key features include default private account settings, restrictions on messaging, tagging, and mentions, and filters that reduce exposure to potentially harmful or sensitive material. (Source: IANS)